  • 1. Adolfson, Malin
    et al.
    Laseen, Stefan
    Linde, Jesper
    Villani, Mattias
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Empirical properties of closed- and open-economy DSGE models of the Euro area. 2008. In: Macroeconomic dynamics (Print), ISSN 1365-1005, E-ISSN 1469-8056, Vol. 12, pp. 2-19. Article in journal (Refereed)
    Abstract [en]

    In this paper, we compare the empirical properties of closed- and open-economy DSGE models estimated on Euro area data. The comparison is made along several dimensions; we examine the models in terms of their marginal likelihoods, forecasting performance, variance decompositions, and their transmission mechanisms of monetary policy.

  • 2. Adolfson, Malin
    et al.
    Linde, Jesper
    Villani, Mattias
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Forecasting performance of an open economy DSGE model. 2007. In: Econometric Reviews, ISSN 0747-4938, E-ISSN 1532-4168, Vol. 26, no. 2-4, pp. 289-328. Article in journal (Refereed)
    Abstract [en]

    This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECMs), estimated both by maximum likelihood and by two different Bayesian approaches, and to traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

  • 3. Ahmed, S. Ejaz
    et al.
    Fallahpour, Saber
    von Rosen, Dietrich
    von Rosen, Tatjana
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Estimation of Several Intraclass Correlation Coefficients. 2015. In: Communications in statistics. Simulation and computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 44, no. 9, pp. 2315-2328. Article in journal (Refereed)
    Abstract [en]

    An intraclass correlation coefficient observed in several populations is estimated. The basis is a variance-stabilizing transformation; it is shown that the intraclass correlation coefficient from any elliptical distribution should be transformed in the same way. Four estimators are compared: an estimator where the components of the vector of transformed intraclass correlation coefficients are estimated separately, an estimator based on a weighted average of these components, a pretest estimator where the equality of the components is tested and the outcome of the test is used in the estimation procedure, and a James-Stein estimator which shrinks toward the mean.
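
    The shrinkage idea in this abstract can be sketched numerically. In the sketch below, Fisher's z (artanh) stands in as a generic variance-stabilizing transformation, and the correlations, their assumed common variance, and the shrinkage constant are all hypothetical illustration values, not the quantities derived in the paper.

```python
import numpy as np

# Hypothetical transformed intraclass correlations from k = 4 populations.
# Fisher's z = artanh(r) is used here only as a generic variance-stabilizing
# map; the paper derives the appropriate transformation for the elliptical
# intraclass case.
r = np.array([0.42, 0.47, 0.55, 0.50])
z = np.arctanh(r)        # separate (componentwise) estimator
z_bar = z.mean()         # pooled estimator: equal-weight average

# James-Stein-type estimator: shrink each component toward the mean.
k = z.size
sigma2 = 0.01            # assumed common variance of each z-component
shrink = max(0.0, 1.0 - (k - 3) * sigma2 / np.sum((z - z_bar) ** 2))
z_js = z_bar + shrink * (z - z_bar)

print(np.tanh(z_js))     # back-transform to the correlation scale
```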

  • 4.
    Andersson, Mikael
    Stockholm University, Faculty of Science, Department of Mathematics. Mathematical Statistics.
The asymptotic final size distribution of multitype chain-binomial epidemic processes. 1999. In: Advances in Applied Probability, ISSN 0001-8678, Vol. 31, no. 1, pp. 220-234. Article in journal (Refereed)
    Abstract [en]

    A multitype chain-binomial epidemic process is defined for a closed finite population by sampling a simple multidimensional counting process at certain points. The final size of the epidemic is then characterized, given the counting process, as the smallest root of a non-linear system of equations. By letting the population grow, this characterization is used, in combination with a branching process approximation and a weak convergence result for the counting process, to derive the asymptotic distribution of the final size. This is done for processes with an irreducible contact structure both when the initial infection increases at the same rate as the population and when it stays fixed.

  • 5.
    Andersson, Mikael
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics. Mathematical Statistics.
    Ekdahl, Karl
    Mölstad, Sigvard
    Persson, Kristina
    Hansson, Hans Bertil
    Giesecke, Johan
Modelling the spread of penicillin-resistant Streptococcus pneumoniae in day-care and evaluation of intervention. 2005. In: Statistics in Medicine, ISSN 0277-6715, Vol. 24, no. 23, pp. 3593-3607. Article in journal (Refereed)
    Abstract [en]

    In 1995, a disease control and intervention project was initiated in Malmöhus county in southern Sweden to limit the spread of penicillin-resistant pneumococci. Since most of the carriers of pneumococci are preschool children, and since most of the spread is believed to take place in day-care, a mathematical model, in the form of a stochastic process, for the spread in a day-care group was constructed. Effects of seasonal variation and size of the day-care group were particularly considered. The model was then used for comparing results from computer simulations without and with intervention. Results indicate that intervention is highly effective in day-care groups with more than ten children during the second half of the year.

  • 6.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
Card counting in continuous time. Manuscript (preprint) (Other academic)
  • 7.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
Credit default model for a dynamically changing economy. Article in journal (Refereed)
  • 8.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
Credit default model for a dynamically changing economy. 2008. Report (Other academic)
    Abstract [en]

    We propose a model describing an economy where companies may default due to contagion. By using standard approximation results for stochastic processes, we are able to describe the features of the model. It turns out that the model reproduces the oscillations in the default rates that have been observed empirically. That is, we have an intrinsic oscillation in the economic system without applying any external macroeconomic force. These oscillations can be understood as a cleansing of unhealthy companies during a recession, with the recession ending when sufficiently many of the unhealthy companies have left the economy. This is important both from a risk management perspective and from a policy perspective, since it shows that contagious defaults may help to explain the oscillations of business cycles. We also investigate the first-passage times of the default process, using these as a proxy for the time to a recession.

  • 9.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
Four applications of stochastic processes: Contagious disease, credit risk, gambling and bond portfolios. 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers on applications of stochastic processes.

    In Paper I we study an open population SIS (Susceptible - Infective - Susceptible) stochastic epidemic model from the time of introduction of the disease, through a possible outbreak and to extinction. The analysis uses coupling arguments and diffusion approximations.

    In Paper II we propose a model describing an economy where companies may default due to contagion. The features of the model are analyzed using diffusion approximations. We show that the model can reproduce oscillations in the default rates similar to what has been observed empirically.

    In Paper III we consider the problem of finding an optimal betting strategy for a house-banked casino card game that is played for several coups before reshuffling. A limit result for the return process is found and the optimal card counting strategy is derived. This continuous-time strategy is shown to be a natural generalization of the discrete-time strategy, where the so-called effects of removals are replaced by the infinitesimal generator of the card process.

    In Paper IV we study interest rate models where the term structure is given by an affine relation and in particular where the driving stochastic processes are so-called generalised Ornstein-Uhlenbeck processes. We show that the return and variance of a portfolio of bonds which are continuously rolled over, also called rolling horizon bonds, can be expressed using the cumulant generating functions of the background driving Lévy processes associated with the OU processes. We also show that if the short rate, in a risk-neutral setting, is given by a linear combination of generalised OU processes, the implied term structure can be expressed in terms of the cumulant generating functions.

  • 10.
    Andersson, Patrik
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Lindenstrand, David
    Stockholm University, Faculty of Science, Department of Mathematics.
A stochastic SIS epidemic with demography: initial stages and time to extinction. 2011. In: Journal of Mathematical Biology, ISSN 0303-6812, E-ISSN 1432-1416, Vol. 62, no. 3, pp. 333-348. Article in journal (Refereed)
    Abstract [en]

    We study an open population stochastic epidemic model from the time of introduction of the disease, through a possible outbreak and to extinction. The model describes an SIS (susceptible–infective–susceptible) epidemic where all individuals, including infectious ones, reproduce at a given rate. An approximate expression for the outbreak probability is derived using a coupling argument. Further, we analyse the behaviour of the model close to quasi-stationarity, and the time to disease extinction, with the aid of a diffusion approximation. In this situation the number of susceptibles and infectives behaves as an Ornstein–Uhlenbeck process, centred around the stationary point, for an exponentially distributed time before going extinct.
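
    The quasi-stationary behaviour described above can be explored by direct simulation. Below is a minimal Gillespie-style sketch of a closed-population SIS epidemic; the paper's model additionally includes births and deaths, and all parameter values here are illustrative.

```python
import random

def sis_gillespie(N=200, I0=1, beta=2.0, gamma=1.0, t_max=50.0, seed=7):
    """Gillespie simulation of a closed-population SIS epidemic.
    (The paper's model also lets individuals reproduce and die; this
    sketch keeps the population size fixed for brevity.)"""
    random.seed(seed)
    t, I = 0.0, I0
    while t < t_max and I > 0:
        S = N - I
        rate_inf = beta * S * I / N   # S -> I events
        rate_rec = gamma * I          # I -> S events
        total = rate_inf + rate_rec
        t += random.expovariate(total)        # time to next event
        if random.random() < rate_inf / total:
            I += 1
        else:
            I -= 1
    return t, I

t_end, I_end = sis_gillespie()
print(t_end, I_end)
```

With beta/gamma = 2 the process, if it takes off, fluctuates around the quasi-stationary level N(1 - gamma/beta) before eventual extinction, mirroring the Ornstein-Uhlenbeck picture in the abstract.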

  • 11.
    Andersson, Patrik
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Nordvall Lagerås, Andreas
Optimal bond portfolios with fixed time to maturity. Article in journal (Refereed)
    Abstract [en]

    We study interest rate models where the term structure is given by an affine relation and in particular where the driving stochastic processes are so-called generalised Ornstein-Uhlenbeck processes.

    For many institutional investors it is natural to consider investment in bonds where the time to maturity of the bonds in the portfolio is kept fixed over time. We show that the return and variance of such a portfolio of bonds which are continuously rolled over, also called rolling horizon bonds, can be expressed using the cumulant generating functions of the background driving Lévy processes associated with the OU processes. This allows us to calculate the efficient mean-variance portfolio. We exemplify the results by a case study on U.S. Treasury bonds.

    We also show that if the short rate, in a risk-neutral setting, is given by a linear combination of generalised OU processes, the implied term structure can be expressed in terms of the cumulant generating functions. This makes it possible to quite easily see what kind of term structures can be generated with a particular short rate dynamics.
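
    As a concrete special case of the OU-driven affine setting, the Vasicek model gives closed-form zero-coupon bond prices. The sketch below uses illustrative parameter values and the textbook Vasicek formulas, not the paper's generalised-OU machinery.

```python
import math

# Vasicek short rate: dr = a*(b - r) dt + sigma dW, the simplest Gaussian
# OU special case of an affine term structure. Parameters are illustrative.
def vasicek_zcb_price(r0, tau, a=0.5, b=0.05, sigma=0.01):
    """Zero-coupon bond price P(0, tau) = A(tau) * exp(-B(tau) * r0)."""
    B = (1.0 - math.exp(-a * tau)) / a
    lnA = (b - sigma**2 / (2 * a**2)) * (B - tau) - sigma**2 * B**2 / (4 * a)
    return math.exp(lnA - B * r0)

# Implied continuously compounded yields for a few maturities.
for tau in [1.0, 5.0, 10.0]:
    P = vasicek_zcb_price(0.03, tau)
    print(tau, -math.log(P) / tau)
```

The implied yield curve rises from the current short rate toward the long-run level b, which is the kind of term-structure shape the abstract's cumulant-generating-function expressions characterise in general.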

  • 12.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
A Classroom Approach to Illustrate Transformation and Bootstrap Confidence Interval Techniques Using the Poisson Distribution. 2017. In: International Journal of Statistics and Probability, ISSN 1927-7032, Vol. 6, no. 2, pp. 42-53. Article in journal (Refereed)
    Abstract [en]

    The Poisson distribution is here used to illustrate transformation and bootstrap techniques in order to construct a confidence interval for a mean. A comparison is made between the derived intervals and the Wald and score confidence intervals. The discussion takes place in a classroom, where the teacher and the students have previously discussed and evaluated the Wald and score confidence intervals. While getting acquainted with the new techniques step by step, the students learn about the effects of, e.g., bias and asymmetry and about ways of dealing with such phenomena. The primary purpose of this teacher-student communication is therefore not to find the best possible interval estimator for this particular case, but rather to provide a study displaying a teacher and her/his students interacting with each other in an efficient and rewarding way. The teacher has a strategy of encouraging the students to take initiatives. This is accomplished by providing the necessary background of the problem and some underlying theory, after which the students are confronted with questions and problem solving. From this the learning process starts. The teacher has to be flexible according to how the students react. The students are supposed to have studied mathematical statistics for at least two semesters.
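
    The bootstrap half of the comparison can be sketched as follows. The sample, the number of resamples B, and the percentile method are hypothetical classroom choices, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classroom data: n observations from a Poisson distribution.
sample = rng.poisson(lam=4.0, size=30)
lam_hat = sample.mean()

# Parametric bootstrap: resample whole datasets from Poisson(lam_hat),
# collect the bootstrap means, and take percentile endpoints.
B = 5000
boot_means = rng.poisson(lam=lam_hat, size=(B, sample.size)).mean(axis=1)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap percentile interval: ({lo:.2f}, {hi:.2f})")
```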

  • 13.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
A Classroom Approach to the Construction of an Approximate Confidence Interval of a Poisson Mean Using One Observation. 2015. In: American Statistician, ISSN 0003-1305, E-ISSN 1537-2731, Vol. 69, no. 3, pp. 160-164. Article in journal (Refereed)
    Abstract [en]

    Even elementary statistical problems may give rise to a deeper and broader discussion of issues in probability and statistics. The construction of an approximate confidence interval for a Poisson mean turns out to be such a case. The simple standard two-sided Wald confidence interval by normal approximation is discussed and compared with the score interval. The discussion is partly in the form of an imaginary dialog between a teacher and a student, where the latter is supposed to have studied mathematical statistics for at least one semester.
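
    The two intervals compared in this abstract can be computed directly. Below is a minimal sketch for a single observation x with the standard normal quantile z = 1.96; the example value x = 10 is illustrative, not taken from the paper.

```python
import math

def poisson_wald(x, z=1.96):
    """Wald interval for a Poisson mean from one observation x:
    x +/- z*sqrt(x), a normal approximation with estimated variance."""
    h = z * math.sqrt(x)
    return (x - h, x + h)

def poisson_score(x, z=1.96):
    """Score interval: solve (x - lam)^2 = z^2 * lam for lam, giving
    lam = x + z^2/2 +/- z*sqrt(x + z^2/4)."""
    mid = x + z * z / 2.0
    h = z * math.sqrt(x + z * z / 4.0)
    return (mid - h, mid + h)

# Example: one observation x = 10
print(poisson_wald(10))   # roughly (3.80, 16.20)
print(poisson_score(10))  # roughly (5.43, 18.41)
```

Note how the score interval is shifted to the right and stays inside the parameter space, the kind of asymmetry the classroom dialogue turns on.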

  • 14.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Central limit theorems from a teaching perspective. 2015. In: Festschrift in Honor of Hans Nyquist on the Occasion of his 65th Birthday / [ed] Ellinor Fackle-Fornius, Stockholm: Stockholm University, 2015, pp. 1-6. Chapter in book (Other academic)
    Abstract [en]

    Central limit theorems and their applications constitute highlights in probability theory and statistical inference. However, as a teacher, especially in undergraduate courses, you are faced with the challenge of how to introduce the results. This challenge especially concerns how to present and discuss the conditions under which the asymptotic (approximate) results hold. This paper attempts to present some relevant examples for possible use in the classroom.

  • 15.
    Andersson, Per Gösta
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Särndal, Carl-Erik
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Calibration for nonresponse treatment: In one or two steps? 2016. In: Statistical Journal of the IAOS, ISSN 1874-7655, E-ISSN 1875-9254, Vol. 32, no. 3, pp. 375-381. Article in journal (Refereed)
    Abstract [en]

    This paper explores the different ways in which auxiliary information can be put to use in calibrated weighting adjustment under survey nonresponse. Information is often present at two levels, the population level and the sample level. The many options available in executing the calibration derive from several factors: one is the order in which the two sources of information enter into the calibration, a choice of a bottom-up as opposed to a top-down approach. Another is whether the calibration should be carried out sequentially in two steps or in one single step with the combined information. A third question is whether one can simplify the procedure, at no major loss of accuracy, by transcribing individual population auxiliary data from the register to the sample units only. We make a systematic list of the possibilities arising for calibration adjustment in this setting. An empirical study concludes the paper.

  • 16.
    Andersson, Per Gösta
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Särndal, Carl-Erik
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Calibration for nonresponse treatment using auxiliary information at different levels. 2016. In: ICES-V proceedings, 2016. Conference paper (Other academic)
  • 17.
    Andreev, Andriy
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Morlanes, José Igor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
Simulations-based Study of Covariance Structure for Fractional Ornstein-Uhlenbeck process of the Second Kind. Manuscript (preprint) (Other academic)
  • 18.
    Axelson, Martin
    et al.
    Statistiska centralbyrån, Statistics Sweden.
    Carlson, Michael
    Stockholm University, Faculty of Social Sciences, Department of Statistics. Statistiska centralbyrån, Statistics Sweden.
    Mirza, Hassan
    Statistiska centralbyrån, Statistics Sweden.
    Andersson, Karin
    Statistiska centralbyrån, Statistics Sweden.
Alternativa datainsamlingsmetoder i ULF, fas 2: En jämförelse mellan två olika datainsamlingsmetoder [Alternative data collection methods in ULF, phase 2: A comparison of two data collection methods]. 2010. Report (Other academic)
    Abstract [sv]

    This report presents the results from the second and final phase of the methodological study carried out within the project Alternative Data Collection Methods for the Survey of Living Conditions (ULF), which started in 2002.

    The main purpose of the study was to compare two data collection methods: a mixed-mode (MM) approach combining face-to-face and telephone interviews without computer support, and computer-assisted telephone interviewing (CATI). Comparisons are reported mainly with respect to four quality aspects: (1) measurement quality, (2) the size of the nonresponse error and its effect on the estimates, (3) the response rate in Child-ULF, and (4) the respondents' willingness to participate in the survey.

    The general conclusion of the study is that the systematic error component (measurement and nonresponse error) is judged to remain unchanged in a transition to CATI. Combined with the fact that the transition would free resources for an increased sample size, this means that the mean squared error (MSE) of the estimates would decrease when moving from the earlier MM approach to CATI as the primary data collection method.

  • 19. Ball, Frank
    et al.
    Pellis, Lorenzo
    Trapman, Pieter
    Stockholm University, Faculty of Science, Department of Mathematics.
Reproduction numbers for epidemic models with households and other social structures II: Comparisons and implications for vaccination. 2016. In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 274, pp. 108-139. Article in journal (Refereed)
    Abstract [en]

    In this paper we consider epidemic models of directly transmissible SIR (susceptible -> infective -> recovered) and SEIR (with an additional latent class) infections in fully susceptible populations with a social structure, consisting either of households or of households and workplaces. We review most reproduction numbers defined in the literature for these models, including the basic reproduction number R_0 introduced in the companion paper, for which we provide a simpler, more elegant derivation. Extending previous work, we provide a complete overview of the inequalities among these reproduction numbers and resolve some open questions. Special focus is put on the exponential-growth-associated reproduction number R_r, which is loosely defined as the estimate of R_0 based on the observed exponential growth of an emerging epidemic obtained when the social structure is ignored. We show that for the vast majority of the models considered in the literature R_r >= R_0 when R_0 >= 1 and R_r <= R_0 when R_0 <= 1. We show that, in contrast to models without social structure, vaccination of a fraction 1 - 1/R_0 of the population, chosen uniformly at random, with a perfect vaccine is usually insufficient to prevent large epidemics. In addition, we provide significantly sharper bounds than the existing ones for bracketing the critical vaccination coverage between two analytically tractable quantities, which we illustrate by means of extensive numerical examples.
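
    The classical threshold referred to in the abstract is easy to compute; a tiny sketch follows. The point of the paper is precisely that this homogeneous-mixing value is usually too low once households and workplaces are modelled.

```python
def homogeneous_critical_coverage(R0):
    """Classical critical vaccination fraction 1 - 1/R0 for a perfect
    vaccine in a homogeneously mixing population; below threshold
    (R0 <= 1) no vaccination is needed to prevent large epidemics."""
    if R0 <= 1.0:
        return 0.0
    return 1.0 - 1.0 / R0

print(homogeneous_critical_coverage(2.5))  # 0.6
```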

  • 20. Baresel, Christian
    et al.
    Destouni, Georgia
    Stockholm University, Faculty of Science, Department of Physical Geography and Quaternary Geology.
Uncertainty-Accounting Environmental Policy and Management of Water Systems. 2007. In: Environmental Science & Technology, Vol. 41, no. 10, pp. 3653-3659. Article in journal (Refereed)
    Abstract [en]

    Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.

  • 21.
    Björkström, Anders
    Stockholm University, Faculty of Science, Department of Mathematics.
Regression methods in multidimensional prediction and estimation. 2007. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In regression with near collinear explanatory variables, the least squares predictor has large variance. Ordinary least squares regression (OLSR) often leads to unrealistic regression coefficients. Several regularized regression methods have been proposed as alternatives. Well-known are principal components regression (PCR), ridge regression (RR) and continuum regression (CR). The latter two involve a continuous metaparameter, offering additional flexibility.

    For a univariate response variable, CR incorporates OLSR, partial least squares regression (PLSR), and PCR as special cases, for special values of the metaparameter. CR is also closely related to RR. However, CR can in fact yield regressors that vary discontinuously with the metaparameter. Thus, the relation between CR and RR is not always one-to-one. We develop a new class of regression methods, LSRR, essentially the same as CR but without discontinuities, and prove that any optimization principle will yield a regressor proportional to an RR regressor, provided only that the principle implies maximizing some function of the regressor's sample correlation coefficient and its sample variance. For a multivariate response vector we demonstrate that a number of well-established regression methods are related, in that they are special cases of basically one general procedure. We try a more general method based on this procedure, with two metaparameters. In a simulation study we compare this method to ridge regression, multivariate PLSR and repeated univariate PLSR. For most types of data studied, all methods do approximately equally well. There are cases where RR and LSRR yield larger errors than the other methods, and we conclude that one-factor methods are not adequate for situations where more than one latent variable is needed to describe the data. Among the methods based on latent variables, none is superior to the others in any obvious way.
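
    The behaviour of ridge regression under near-collinearity, as discussed in the abstract, can be illustrated with a short sketch; the design and penalty values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Near-collinear design: x2 is x1 plus a small perturbation.
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.1, size=n)   # true model depends on x1 only

def ridge(X, y, lam):
    """Ridge regression: beta = (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# lam = 0 is OLS; increasing lam shrinks the coefficient vector.
for lam in [0.0, 0.1, 10.0]:
    print(lam, ridge(X, y, lam))
```

The OLS coefficients (lam = 0) are unstable because the columns are nearly collinear; the norm of the ridge solution decreases monotonically as the penalty grows.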

  • 22.
    Björkwall, Susanna
    Stockholm University, Faculty of Science, Department of Mathematics.
Stochastic claims reserving in non-life insurance: Bootstrap and smoothing models. 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods without explicit reference to a stochastic model. For instance, the chain-ladder was originally a deterministic reserving method. Moreover, the actuaries often make ad hoc adjustments of the methods, for example, smoothing of the chain-ladder development factors, in order to fit the data set under analysis.

    However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally the model could be used to find the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model the object of the research area has often been to first construct a model and a method that produces the actuary's estimate and then use this model in order to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself.

    The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use other development factor methods than the pure chain-ladder relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors which is frequently used in practice.

  • 23.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Dette, Holger
    Parolya, Nestor
Spectral analysis of the Moore-Penrose inverse of a large dimensional sample covariance matrix. 2016. In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 148, pp. 160-172. Article in journal (Refereed)
    Abstract [en]

    For a sample of n independent identically distributed p-dimensional centered random vectors with covariance matrix Σ_n, let S̃_n denote the usual sample covariance matrix (centered by the mean) and S_n the non-centered sample covariance matrix (i.e., the matrix of second-moment estimates), where p > n. In this paper, we provide the limiting spectral distribution and central limit theorem for linear spectral statistics of the Moore-Penrose inverses of S_n and S̃_n. We consider the large-dimensional asymptotics where the number of variables p → ∞ and the sample size n → ∞ such that p/n → c ∈ (1, +∞). We present a Marchenko-Pastur law for both types of matrices, which shows that the limiting spectral distributions of both sample covariance matrices are the same. On the other hand, we demonstrate that the asymptotic distribution of linear spectral statistics of the Moore-Penrose inverse of S̃_n differs in the mean from that of S_n.
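
    The coincidence of the two limiting spectral distributions can be checked informally by simulation. The dimensions and the identity covariance below are illustrative choices, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 200, 100     # p/n = c = 2 > 1, the regime studied in the paper

Y = rng.normal(size=(p, n))          # i.i.d. centered vectors, Sigma = I
S = Y @ Y.T / n                      # non-centered sample covariance (rank n)
Yc = Y - Y.mean(axis=1, keepdims=True)
S_tilde = Yc @ Yc.T / n              # mean-centered version (rank n - 1)

# Nonzero eigenvalues of the Moore-Penrose inverses.
ev = np.linalg.eigvalsh(np.linalg.pinv(S))
ev_tilde = np.linalg.eigvalsh(np.linalg.pinv(S_tilde))

# The limiting spectral distributions coincide, so the bulk spectra of
# the two pseudo-inverses should be close for large p and n.
print(ev[ev > 1e-8].mean(), ev_tilde[ev_tilde > 1e-8].mean())
```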

  • 24.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Dickhaus, Thorsten
On the Simes inequality in elliptical models. 2017. In: Annals of the Institute of Statistical Mathematics, ISSN 0020-3157, E-ISSN 1572-9052, Vol. 69, no. 1, pp. 215-230. Article in journal (Refereed)
    Abstract [en]

    We provide some necessary and some sufficient conditions for the validity of the inequality of Simes in models with elliptical dependencies. Necessary conditions are presented in terms of sufficient conditions for the reverse Simes inequality. One application of our main results concerns the problem of model misspecification, in particular the case that the assumption of Gaussianity of test statistics is violated. Since our sufficient conditions require non-negativity of correlation coefficients between test statistics, we also develop two exact tests for vectors of correlation coefficients and compare their powers in computer simulations.

  • 25.
    Bodnar, Taras
    et al.
    European University Viadrina, Germany.
    Gupta, Arjun K.
    Bowling Green State University, USA.
Robustness of the Inference Procedures for the Global Minimum Variance Portfolio Weights in a Skew Normal Model. 2015. In: European Journal of Finance, ISSN 1351-847X, E-ISSN 1466-4364, Vol. 21, no. 13-14, pp. 1176-1194. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the influence of skewness on the distributional properties of the estimated weights of optimal portfolios and on the corresponding inference procedures derived for the optimal portfolio weights assuming that the asset returns are normally distributed. It is shown that even a simple form of skewness in the asset returns can dramatically influence the performance of the test on the structure of the global minimum variance portfolio. The results obtained can be applied in the small sample case as well. Moreover, we introduce an estimation procedure for the parameters of the skew-normal distribution that is based on the modified method of moments. A goodness-of-fit test for the matrix variate closed skew-normal distribution has also been derived. In the empirical study, we apply our results to real data of several stocks included in the Dow Jones index.

  • 26.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Gupta, Arjun K.
    Parolya, Nestor
Direct shrinkage estimation of large dimensional precision matrix. 2016. In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 146, pp. 223-236. Article in journal (Refereed)
    Abstract [en]

    In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics when the number of variables p → ∞ and the sample size n → ∞ so that p/n → c ∈ (0, +∞). The precision matrix is estimated directly, without inverting the corresponding estimator for the covariance matrix. The recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and estimate them consistently. The resulting distribution-free estimator has almost surely the minimum Frobenius loss. Additionally, we prove that the Frobenius norms of the inverse and of the pseudo-inverse sample covariance matrices tend almost surely to deterministic quantities and estimate them consistently. Using this result, we construct a bona fide optimal linear shrinkage estimator for the precision matrix in the case c < 1. At the end, a simulation is provided where the suggested estimator is compared with the estimators proposed in the literature. The optimal shrinkage estimator shows significant improvement even for non-normally distributed data.
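    The generic form of such a linear shrinkage precision estimator can be sketched as follows (a simplified stand-in: the shrinkage intensity rho is taken as a user input here, whereas the paper derives and consistently estimates its Frobenius-optimal value; the function name and the scaled-identity target are ours):

```python
import numpy as np

def linear_shrinkage_precision(X, rho):
    """Pi_hat = rho * (scaled identity target) + (1 - rho) * S^+, where S^+
    is the (pseudo-)inverse of the sample covariance of the n x p data X.
    The pseudo-inverse keeps the formula defined even when p > n."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                  # p x p sample covariance
    S_inv = np.linalg.pinv(S)                    # pseudo-inverse covers singular S
    target = (np.trace(S_inv) / p) * np.eye(p)   # scaled identity target
    return rho * target + (1.0 - rho) * S_inv
```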

  • 27.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Hautsch, Nikolaus
    Dynamic Conditional Correlation Multiplicative Error Processes2016In: Journal of Empirical Finance, ISSN 0927-5398, E-ISSN 1879-1727, Vol. 36, 41-67 p.Article in journal (Refereed)
    Abstract [en]

    We introduce a dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables' conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a copula-type transformation. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit and out-of-sample forecasts of the multiplicative error model.
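    The idea of a copula-type mapping into the Gaussian domain can be illustrated with a plain empirical probability integral transform (a simplified, rank-based stand-in for the model-based transform in the paper; the function name is ours):

```python
import numpy as np
from statistics import NormalDist

def to_gaussian_domain(residuals):
    """Map residuals to the Gaussian domain via z_t = Phi^{-1}(F_hat(e_t)),
    where F_hat is the empirical CDF rescaled by n + 1 so that all
    probabilities stay strictly inside (0, 1)."""
    e = np.asarray(residuals, dtype=float)
    n = e.size
    u = (e.argsort().argsort() + 1) / (n + 1.0)  # empirical PIT, in (0, 1)
    inv_cdf = NormalDist().inv_cdf               # standard normal quantile
    return np.array([inv_cdf(ui) for ui in u])
```

The transform is monotone, so the ordering of the residuals is preserved in the Gaussian domain.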

  • 28.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Mazur, Stepan
    Podgorski, Krzysztof
    Singular inverse Wishart distribution and its application to portfolio theory2016In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 143, 314-326 p.Article in journal (Refereed)
    Abstract [en]

    The inverse of the standard estimate of covariance matrix is frequently used in the portfolio theory to estimate the optimal portfolio weights. For this problem, the distribution of the linear transformation of the inverse is needed. We obtain this distribution in the case when the sample size is smaller than the dimension, the underlying covariance matrix is singular, and the vectors of returns are independent and normally distributed. For the result, the distribution of the inverse of covariance estimate is needed and it is derived and referred to as the singular inverse Wishart distribution. We use these results to provide an explicit stochastic representation of an estimate of the mean-variance portfolio weights as well as to derive its characteristic function and the moments of higher order. The results are illustrated using actual stock returns and a discussion of practical relevance of the model is presented.
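    As an illustration of the kind of estimator whose distribution is studied, global minimum variance weights can be computed with the Moore-Penrose pseudo-inverse, which remains defined when the sample covariance estimate is singular (sample size smaller than the dimension); a minimal sketch, with the function name ours:

```python
import numpy as np

def gmv_weights(returns):
    """Estimated global minimum variance weights w = S^+ 1 / (1' S^+ 1),
    where S^+ is the Moore-Penrose pseudo-inverse of the sample covariance,
    so the formula also applies when n < p and S is singular."""
    S = np.cov(returns, rowvar=False)
    w = np.linalg.pinv(S) @ np.ones(S.shape[0])
    return w / w.sum()
```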

  • 29.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Parolya, Nestor
    Schmid, Wolfgang
    The Exact Solution of Multi-period Portfolio Choice Problem with Exponential Utility2016In: Operations Research Proceedings 2014: Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), RWTH Aachen University, Germany, September 2-5, 2014 / [ed] Marco Lübbecke, Arie Koster, Peter Letmathe, Reinhard Madlener, Britta Peis, Grit Walther, Springer, 2016, 45-51 p.Chapter in book (Refereed)
    Abstract [en]

    In the current paper we derive the exact analytical solution of the multi-period portfolio choice problem for an exponential utility function. It is assumed that the asset returns depend on predictable variables and that the joint random process of the asset returns follows a vector autoregression. We prove that the optimal portfolio weights depend on the covariance matrices of the next two periods and the conditional mean vector of the next period. The case without predictable variables and the case of independent asset returns are special cases of our solution.

  • 30.
    Bodnar, Taras
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Zabolotskyy, Taras
    How risky is the optimal portfolio which maximizes the Sharpe ratio?2017In: AStA Advances in Statistical Analysis, ISSN 1863-8171, E-ISSN 1863-818X, Vol. 101, no 1, 1-28 p.Article in journal (Refereed)
    Abstract [en]

    In this paper, we investigate the properties of the optimal portfolio in the sense of maximizing the Sharpe ratio (SR) and develop a procedure for the calculation of the risk of this portfolio. This is achieved by constructing an optimal portfolio which minimizes the Value-at-Risk (VaR) and at the same time coincides with the tangent (market) portfolio on the efficient frontier which is related to the SR portfolio. The resulting significance level of the minimum VaR portfolio is then used to determine the risk of both the market portfolio and the corresponding SR portfolio. However, the expression of this significance level depends on the unknown parameters which have to be estimated in practice. It leads to an estimator of the significance level whose distributional properties are investigated in detail. Based on these results, a confidence interval for the suggested risk measure of the SR portfolio is constructed and applied to real data. Both theoretical and empirical findings document that the SR portfolio is very risky since the corresponding significance level is smaller than 90 % in most of the considered cases.
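    For reference, the tangency (maximum Sharpe ratio) portfolio that the analysed portfolio coincides with has the textbook form w ∝ Σ⁻¹(μ − r_f 1); a minimal sketch with known parameters (in practice these are estimated, which is the source of the uncertainty the paper studies):

```python
import numpy as np

def tangency_weights(mu, Sigma, rf=0.0):
    """Maximum Sharpe ratio (tangency) portfolio: weights proportional to
    Sigma^{-1} (mu - rf * 1), normalized to sum to one."""
    raw = np.linalg.solve(np.asarray(Sigma, dtype=float),
                          np.asarray(mu, dtype=float) - rf)
    return raw / raw.sum()
```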

  • 31.
    Bojarova, Jelena
    Stockholm University, Faculty of Science, Department of Mathematics.
    Toward Sequential Data Assimilation for NWP Models Using Kalman Filter Tools2010Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The aim of meteorological data assimilation is to provide an initial field for Numerical Weather Prediction (NWP) and to sequentially update the knowledge about it using available observations. Kalman filtering is a robust technique for the sequential estimation of the unobservable model state, based on the linear regression concept. Used iteratively together with Kalman smoothing, it can easily be extended to work powerfully in a non-Gaussian and/or non-linear framework. The huge dimensionality of the model state variable for high-resolution NWP models (of order 10^8) makes any explicit manipulation of the forecast error covariance matrix, required for the Kalman filter and Kalman smoother recursions, impossible. For NWP models, the technical implementation of Kalman filtering therefore becomes the main challenge, which drives the development of novel data assimilation algorithms.

    This thesis is concerned with extensions of the Kalman filtering when the assumptions on linearity and Gaussianity of the state space model are violated. The research includes both theoretical studies of the properties of such extensions, within the framework of idealized small-dimensional models, and the development of the data assimilation algorithms for a full scale limited area high resolution NWP forecasting system.

    This thesis shows that non-Gaussian state space models can efficiently be approximated by a Gaussian state space model with an adaptively estimated variance of the stochastic forcing. This results in a type of local smoothing, in contrast to the global smoothing provided by Gaussian state space models. With regard to NWP models, the thesis shows that the sequential update of the uncertainty about the model state estimate is essential for efficient extraction of information from observations. The Ensemble Kalman filters can be used to represent both flow- and observation-network-dependent structures of the forecast error covariance matrix, in spite of their severe rank-deficiency. As a culmination of this research, a hybrid variational data assimilation has been developed on top of the HIRLAM variational data assimilation system. It provides the possibility of utilizing, during the data assimilation process, the error-of-the-day structure of the forecast error covariance, estimated from the ensemble of perturbations, while the full rank of the variational data assimilation is preserved.

  • 32.
    Bojarova, Jelena
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Gustafsson, Nils
    SMHI.
    Aspects of non-linearities for data assimilation by Kalman filtering in a shallow water modelManuscript (preprint) (Other academic)
  • 33.
    Bojarova, Jelena
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Gustafsson, Nils
    SMHI.
    Johansson, Åke
    Vignes, Ole
    The ETKF rescaling scheme in HIRLAM2011In: Tellus. Series A, Dynamic meteorology and oceanography, ISSN 0280-6495, E-ISSN 1600-0870, Vol. 63, no 3, 385-401 p.Article in journal (Other academic)
    Abstract [en]

    The ETKF rescaling scheme has been implemented into the HIRLAM forecasting system in order to estimate the uncertainty of the model state. The main purpose is to utilize this uncertainty information for modelling of flow-dependent background error covariances within the framework of a hybrid variational ensemble data assimilation scheme. The effects of rank-deficiency in the ETKF formulation are explained and the need for variance inflation as a way to compensate for these effects is justified. A filter spin-up algorithm is proposed as a refinement of the variance inflation. The proposed spin-up algorithm will also act to prevent ensemble collapse, since the ensemble will receive ‘fresh blood’ in the form of additional perturbation components, generated on the basis of a static background error covariance matrix. The resulting ETKF-based ensemble perturbations are compared with ensemble perturbations based on targeted singular vectors and are shown to have more realistic spectral characteristics.

  • 34.
    Bojarova, Jelena
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Gustafsson, Nils
    SMHI.
    Vignes, Ole
    A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)Manuscript (preprint) (Other academic)
  • 35.
    Bojarova, Jelena
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Sundberg, Rolf
    Stockholm University, Faculty of Science, Department of Mathematics.
    Non-Gaussian state space models in decomposition of ice core time series in long and short time-scales2010In: Environmetrics, ISSN 1180-4009, E-ISSN 1099-095X, Vol. 21, no 6, 562-587 p.Article in journal (Refereed)
    Abstract [en]

    Statistical modelling of six time series of geological ice core chemical data from Greenland is discussed. We decompose the total variation into long time-scale (trend) and short time-scale variations (fluctuations around the trend), and a pure noise component. Too heavy tails of the short-term variation makes a standard time-invariant linear Gaussian model inadequate. We try non-Gaussian state space models, which can be efficiently approximated by time-dependent Gaussian models. In essence, these time-dependent Gaussian models result in a local smoothing, in contrast to the global smoothing provided by the time-invariant model. To describe the mechanism of this local smoothing, we utilise the concept of a local variance function derived from a heavy-tailed density. The time-dependent error variance expresses the uncertainty about the dynamical development of the model state, and it controls the influence of observations on the estimates of the model state components. The great advantage of the derived time-dependent Gaussian model is that the Kalman filter and the Kalman smoother can be used as efficient computational tools for performing the variation decomposition. One of the main objectives of the study is to investigate how the distributional assumption on the model error component of the short time-scale variation affects the decomposition.
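    The computational core referred to, a Kalman filter for a Gaussian local-level model, fits in a few lines (scalar sketch; allowing the forcing variance q to vary with t gives the local smoothing described in the abstract; names are ours):

```python
import numpy as np

def kalman_local_level(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter for the local-level model x_t = x_{t-1} + w_t,
    y_t = x_t + v_t, with Var(w_t) = q and Var(v_t) = r.
    q may be a scalar or an array indexed by t (time-dependent variance).
    Returns the filtered state means."""
    m, p = m0, p0
    out = []
    for t, yt in enumerate(y):
        p = p + (q[t] if np.ndim(q) else q)   # prediction variance
        k = p / (p + r)                        # Kalman gain
        m = m + k * (yt - m)                   # filtered mean
        p = (1.0 - k) * p                      # filtered variance
        out.append(m)
    return np.array(out)
```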

  • 36. Bollobas, Bela
    et al.
    Gunderson, Karen
    Holmgren, Cecilia
    Stockholm University, Faculty of Science, Department of Mathematics.
    Janson, Svante
    Przykucki, Michal
    Bootstrap percolation on Galton-Watson trees2014In: Electronic Journal of Probability, ISSN 1083-6489, Vol. 19, 1-27 p.Article in journal (Refereed)
    Abstract [en]

    Bootstrap percolation is a type of cellular automaton which has been used to model various physical phenomena, such as ferromagnetism. For each natural number r, the r-neighbour bootstrap process is an update rule for vertices of a graph in one of two states: 'infected' or 'healthy'. In consecutive rounds, each healthy vertex with at least r infected neighbours becomes itself infected. Percolation is said to occur if every vertex is eventually infected. Usually, the starting set of infected vertices is chosen at random, with all vertices initially infected independently with probability p. In that case, given a graph G and infection threshold r, a quantity of interest is the critical probability, p_c(G, r), at which percolation becomes likely to occur. In this paper, we look at infinite trees and, answering a problem posed by Balogh, Peres and Pete, we show that for any b >= r and for any epsilon > 0 there exists a tree T with branching number br(T) = b and critical probability p_c(T, r) < epsilon. However, this is false if we limit ourselves to the well-studied family of Galton-Watson trees. We show that for every r >= 2 there exists a constant c_r > 0 such that if T is a Galton-Watson tree with branching number br(T) = b >= r, then p_c(T, r) > (c_r / b) e^{-b/(r-1)}. We also show that this bound is sharp up to a factor of O(b) by giving an explicit family of Galton-Watson trees with critical probability bounded from above by C_r e^{-b/(r-1)} for some constant C_r > 0.
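    The r-neighbour update rule is easy to state in code; a minimal sketch on a finite graph (the paper's infinite-tree analysis is of course not captured by a finite simulation):

```python
def bootstrap_percolation(adj, seed, r):
    """Run the r-neighbour bootstrap process to completion on a finite
    graph given as {vertex: set of neighbours}. Returns the final infected
    set; percolation occurs iff every vertex ends up infected."""
    infected = set(seed)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in infected and len(adj[v] & infected) >= r:
                infected.add(v)
                changed = True
    return infected
```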

  • 37. Boqvist, S
    et al.
    Pettersson, H
    Svensson, Åke
    Stockholm University, Faculty of Science, Department of Mathematics.
    Andersson, Y
    Sources of sporadic Yersinia enterocolitica infection in children in Sweden, 2004: a case-control study2009In: Epidemiology and Infection, ISSN 0950-2688, E-ISSN 1469-4409, Vol. 137, 897-905 p.Article in journal (Refereed)
    Abstract [en]

    Young children account for a large proportion of reported Yersinia enterocolitica infections in Sweden, with a high incidence compared with other gastrointestinal infections, such as salmonellosis and campylobacteriosis. A case-control study was conducted to investigate selected risk factors for domestic sporadic yersiniosis in children aged 0–6 years in Sweden. In total, 117 cases and 339 controls were included in the study. To minimize exclusion of observations due to missing data a multiple non-parametric imputation technique was used. The following risk factors were identified in the multivariate analysis: eating food prepared from raw pork products (OR 3.0, 95% CI 1.8–5.1) or treated sausage (OR 1.9, 95% CI 1.1–3.3), use of a baby’s dummy (OR 1.9, 95% CI 1.1–3.2) and contact with domestic animals (OR 2.0, 95% CI 1.2–3.4). We believe that the importance of Y. enterocolitica infection in children has been neglected and that results from this study can be used to develop preventive recommendations.

  • 38.
    Britton, Tom
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Ball, Frank
    An epidemic model with infector and exposure dependent severity2009In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 218, no 2, 105-120 p.Article in journal (Refereed)
    Abstract [en]

    A stochastic epidemic model allowing for both mildly and severely infectious individuals is defined, where an individual can become severely infectious directly upon infection or if additionally exposed to infection. It is shown that, assuming a large community, the initial phase of the epidemic may be approximated by a suitable branching process and that the main part of an epidemic that becomes established admits a law of large numbers and a central limit theorem, leading to a normal approximation for the final outcome of such an epidemic. Effects of vaccination prior to an outbreak are studied and the critical vaccination coverage, above which only small outbreaks can occur, is derived. The results are illustrated by simulations that demonstrate that the branching process and normal approximations work well for finite communities, and by numerical examples showing that the final outcome may be close to discontinuous in certain model parameters and that the fraction mildly infected may actually increase as an effect of vaccination.

  • 39.
    Britton, Tom
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Lindholm, Mathias
    Uppsala universitet.
    Turova, Tatyana
    Lunds universitet.
    A dynamic network in a dynamic population: asymptotic properties2011In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 48, 1163-1178 p.Article in journal (Refereed)
    Abstract [en]

    We derive asymptotic properties for a stochastic dynamic network model in a stochastic dynamic population. In the model, nodes give birth to new nodes until they die, each node being equipped with a social index given at birth. During its life a node creates edges to other nodes, at a higher rate to nodes with high social index, and edges disappear randomly in time. For this model, we derive a criterion for when a giant connected component exists after the process has evolved for a long period of time, assuming that the node population grows to infinity. We also obtain an explicit expression for the degree correlation rho (of neighbouring nodes), which shows that rho is always positive irrespective of parameter values in one of the two treated submodels, and may be either positive or negative in the other model, depending on the parameters.

  • 40.
    Britton, Tom
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Neal, Peter
    University of Manchester.
    The time to extinction for an SIS-household-epidemic model2010In: Journal of Mathematical Biology, ISSN 0303-6812, E-ISSN 1432-1416, Vol. 61, no 6, 763-769 p.Article in journal (Refereed)
    Abstract [en]

    We analyse a Markovian SIS epidemic amongst a finite population partitioned into households. Since the population is finite, the epidemic will eventually go extinct, i.e., have no more infectives in the population. We study the effects of population size and within-household transmission upon the time to extinction. This is done through two approximations. The first approximation is suitable for all levels of within-household transmission and is based upon an Ornstein-Uhlenbeck process approximation for the disease's fluctuations about an endemic level, relying on a large population. The second approximation is suitable for high levels of within-household transmission and approximates the number of infectious households by a simple homogeneously mixing SIS model with the households replaced by individuals. The analysis, supported by a simulation study, shows that the mean time to extinction is minimized by moderate levels of within-household transmission.
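    The homogeneously mixing Markovian SIS model used in the second approximation can be simulated directly; a Gillespie-style sketch (household structure omitted; the function name is ours):

```python
import random

def sis_extinction_time(n, beta, gamma, i0, seed=1):
    """Simulate a Markovian SIS epidemic in a population of size n:
    infections occur at rate beta * S * I / n, recoveries at rate
    gamma * I. Returns the (random) time at which infectives reach zero."""
    rng = random.Random(seed)
    t, i = 0.0, i0
    while i > 0:
        rate_inf = beta * (n - i) * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # time to next event
        i += 1 if rng.random() < rate_inf / total else -1
    return t
```

Averaging over many seeds gives a Monte Carlo estimate of the mean time to extinction.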

  • 41.
    Britton, Tom
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Trapman, Pieter
    Stockholm University, Faculty of Science, Department of Mathematics.
    Maximizing the size of the giant2012In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 49, no 4, 1156-1165 p.Article in journal (Refereed)
    Abstract [en]

    Consider a random graph where the mean degree is given and fixed. In this paper we derive the maximal size of the largest connected component in the graph. We also study the related question of the largest possible outbreak size of an epidemic occurring 'on' the random graph (the graph describing the social structure in the community). More precisely, we look at two different classes of random graphs. First, the Poissonian random graph, in which each node i is given an independent and identically distributed (i.i.d.) random weight X_i with E(X_i) = mu, and where there is an edge between i and j with probability 1 - e^{-X_i X_j/(mu n)}, independently of other edges. The second model is the thinned configuration model, in which the n vertices of the ground graph have i.i.d. ground degrees, distributed as D, with E(D) = mu. The graph of interest is obtained by deleting edges independently with probability 1 - p. In both models the fraction of vertices in the largest connected component converges in probability to a constant 1 - q, where q depends on X or D and p. We investigate for which distributions X and D with given mu and p, 1 - q is maximized. We show that in the class of Poissonian random graphs, X should have all its mass at 0 and at one other real number, which can be explicitly determined. For the thinned configuration model, D should have all its mass at 0 and at two subsequent positive integers.

  • 42. Broberg, Per
    et al.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Conditional estimation in two-stage adaptive designs2017In: Biometrics, ISSN 0006-341X, E-ISSN 1541-0420, Vol. 73, no 3, 895-904 p.Article in journal (Refereed)
    Abstract [en]

    We consider conditional estimation in two-stage sample size adjustable designs and the consequent bias. More specifically, we consider a design which permits raising the sample size when interim results look rather promising, and which retains the originally planned sample size when results look very promising. The estimation procedures reported comprise the unconditional maximum likelihood, the conditionally unbiased Rao-Blackwell estimator, the conditional median unbiased estimator, and the conditional maximum likelihood with and without bias correction. We compare these estimators based on analytical results and a simulation study. We show how they can be applied in a real clinical trial setting.

  • 43.
    Bruce, Daniel
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal Design and Inference for Correlated Bernoulli Variables using a Simplified Cox Model2008Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis proposes a simplification of the model for dependent Bernoulli variables presented in Cox and Snell (1989). The simplified model, referred to as the simplified Cox model, is developed for identically distributed and dependent Bernoulli variables.

    Properties of the model are presented, including expressions for the loglikelihood function and the Fisher information. The special case of a bivariate symmetric model is studied in detail. For this particular model, it is found that the number of design points in a locally D-optimal design is determined by the log-odds ratio between the variables. Under mutual independence, both a general expression for the restrictions of the parameters and an analytical expression for locally D-optimal designs are derived.

    Focusing on the bivariate case, score tests and likelihood ratio tests are derived to test for independence. Numerical illustrations of these test statistics are presented in three examples. In connection to testing for independence, an E-optimal design for maximizing the local asymptotic power of the score test is proposed.

    The simplified Cox model is applied to dental data. Based on the estimates of the model, optimal designs are derived. The analysis shows that these optimal designs yield considerably more precise parameter estimates compared to the original design. The original design is also compared against the E-optimal design with respect to the power of the score test. For most alternative hypotheses the E-optimal design provides a larger power compared to the original design.

  • 44.
    Bruce, Daniel
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Some properties for a simplified Cox binary model2008In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X, Vol. 37, no 16, 2606-2616 p.Article in journal (Refereed)
    Abstract [en]

    This article proposes a simplification of the model for dependent binary variables presented in Cox and Snell (1989). The new model, referred to as the simplified Cox model, is developed for identically distributed and dependent binary variables. Properties of the model are presented, including expressions for the log-likelihood function and the Fisher information. Under mutual independence, a general expression for the restrictions of the parameters is derived. The simplified Cox model is illustrated using a data set from a clinical trial.

  • 45.
    Bruce, Daniel
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Some Properties for a Simplified Cox Binary Model2007In: Communications in statistics: Theory and methodsArticle in journal (Refereed)
    Abstract [en]

    This paper proposes a simplification of the model for dependent Bernoulli variables presented in Cox and Snell (1989). The new model, referred to as the simplified Cox model, is developed for identically distributed and dependent binary variables. Properties of the model are presented, including expressions for the log-likelihood function and the Fisher information. Under mutual independence, a general expression for the restrictions of the parameters is derived. The simplified Cox model is illustrated using a data set from a clinical trial.

  • 46.
    Bruce, Daniel
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Nyquist, Hans
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Testing for dependency of Bernoulli variables2007In: International Journal of Statistical Sciences, ISSN 1683-5603, Vol. 5Article in journal (Refereed)
    Abstract [en]

    The aim of this paper is to derive test procedures for studies where data consist of pairs of Bernoulli variables. Applications exist in, for example, ophthalmology and studies on matched pairs. Score tests and likelihood ratio tests are derived for testing the dependency between the Bernoulli variables. Multinomial logit models are used to incorporate explanatory variables. Test statistics for two particular models are thoroughly outlined. Numerical illustrations of these test statistics are presented in three examples, including one with visual impairment data.

  • 47. Bystrova, Ksenia
    et al.
    Ivanova, Valentina
    Edhborg, Maigun
    Matthiesen, Ann-Sofi
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Ransjo-Arvidson, Anna-Berit
    Mukhamedrakhimov, Rifkat
    Uvnas-Moberg, Kerstin
    Widstrom, Ann-Marie
    Early Contact versus Separation: Effects on Mother-Infant Interaction One Year Later2009In: Birth, ISSN 0730-7659, E-ISSN 1523-536X, Vol. 36, no 2, 97-109 p.Article in journal (Refereed)
    Abstract [en]

    Background: A tradition of separation of the mother and baby after birth still persists in many parts of the world, including some parts of Russia, and often is combined with swaddling of the baby. The aim of this study was to evaluate and compare possible long-term effects on mother-infant interaction of practices used in the delivery and maternity wards, including practices relating to mother-infant closeness versus separation. Methods: A total of 176 mother-infant pairs were randomized into four experimental groups: Group I infants were placed skin-to-skin with their mothers after birth, and had rooming-in while in the maternity ward. Group II infants were dressed and placed in their mothers' arms after birth, and roomed-in with their mothers in the maternity ward. Group III infants were kept in the nursery both after birth and while their mothers were in the maternity ward. Group IV infants were kept in the nursery after birth, but roomed-in with their mothers in the maternity ward. Equal numbers of infants were either swaddled or dressed in baby clothes. Episodes of early suckling in the delivery ward were noted. The mother-infant interaction was videotaped according to the Parent-Child Early Relational Assessment (PCERA) 1 year after birth. Results: The practice of skin-to-skin contact, early suckling, or both during the first 2 hours after birth when compared with separation between the mothers and their infants positively affected the PCERA variables maternal sensitivity, infant's self-regulation, and dyadic mutuality and reciprocity at 1 year after birth. The negative effect of a 2-hour separation after birth was not compensated for by the practice of rooming-in. These findings support the presence of a period after birth (the early "sensitive period") during which close contact between mother and infant may induce long-term positive effect on mother-infant interaction. In addition, swaddling of the infant was found to decrease the mother's responsiveness to the infant, her ability for positive affective involvement with the infant, and the mutuality and reciprocity in the dyad. Conclusions: Skin-to-skin contact, for 25 to 120 minutes after birth, early suckling, or both positively influenced mother-infant interaction 1 year later when compared with routines involving separation of mother and infant.

  • 48. Camitz, Martin
    et al.
    Svensson, Åke
    Stockholm University, Faculty of Science, Department of Mathematics.
    The effect of time distribution shape on a complex epidemic model2009In: Bulletin of Mathematical Biology, ISSN 0092-8240, E-ISSN 1522-9602, Vol. 71, no 8, 1902-1913 p.Article in journal (Refereed)
    Abstract [en]

    In elaborating a model of the progress of an epidemic, it is necessary to make assumptions about the distributions of latency times and infectious times. In many models, the often implicit assumption is that these times are independent and exponentially distributed. We explore the effects of altering the distribution of latency and infectious times in a complex epidemic model with regional divisions connected by a travel intensity matrix. We show a delay in spread with more realistic latency times. More realistic infectiousness times lead to faster epidemics. The effects are similar but accentuated when compared to a purely homogeneous mixing model.

  • 49.
    Carlson, Michael
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    A Per-Record Risk of Disclosure Using a Poisson-Inverse Gaussian Regression Model2002Report (Other academic)
    Abstract [en]

    Per-record measures of disclosure risk have potential uses in statistical disclosure control programs as a means of identifying sensitive or atypical records in public-use microdata files. A measure intended for sample data based on the Poisson-inverse Gaussian distribution and overdispersed log-linear modeling is presented. An empirical example indicates that the proposed model performs approximately as well as the Poisson-lognormal model of Skinner and Holmes (1998) and may be a tractable alternative as the required computational effort is significantly smaller. It is also demonstrated how to extend the application to take into account population level information. The empirical results indicate that using population level information sharpens the risk measure.

  • 50.
    Carlson, Michael
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    An Empirical Comparison of Some Methods for Disclosure Risk Assessment2005Conference paper (Other academic)