  • 1.
    Tsirpitzi, Renata Eirini
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics. Linköping University, Sweden.
    Burman, Carl-Fredrik
    Robust optimal designs using a model misspecification term2023In: Metrika (Heidelberg), ISSN 0026-1335, E-ISSN 1435-926XArticle in journal (Refereed)
    Abstract [en]

Much of classical optimal design theory relies on specifying a model with only a small number of parameters. In many applications, such models will give reasonable approximations. However, they will often be found not to be entirely correct when enough data are at hand. A property of classical optimal design methodology is that the amount of data does not influence the design when a fixed model is used. However, it is reasonable that a low-dimensional model is satisfactory only if limited data are available. With more data available, more aspects of the underlying relationship can be assessed. We consider a simple model that is not thought to be fully correct. The model misspecification, that is, the difference between the true mean and the simple model, is explicitly modeled with a stochastic process. This gives a unified approach to handle situations with both limited and rich data. Our objective is to estimate the combined model, which is the sum of the simple model and the assumed misspecification process. In our situation, the low-dimensional model can be viewed as a fixed effect and the misspecification term as a random effect in a mixed-effects model. Our aim is to predict within this model. We describe how we minimize the prediction error using an optimal design. We compute optimal designs for the full model in different cases. The results confirm that the optimal design depends strongly on the sample size. In low-information situations, traditional optimal designs for models with a small number of parameters are sufficient, while the inclusion of the misspecification term leads to very different designs in data-rich cases.

  • 2.
    ul Hassan, Mahmood
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Discrimination with unidimensional and multidimensional item response theory models for educational data2022In: Communications in statistics. Simulation and computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 51, no 6, p. 2992-3012Article in journal (Refereed)
    Abstract [en]

    Achievement tests are used to characterize the proficiency of higher-education students. Item response theory (IRT) models are applied to these tests to estimate the ability of students (as a latent variable in the model). In order for quality IRT parameters to be estimated, especially ability parameters, it is important that the appropriate number of dimensions is identified. Through a case study, based on a statistics exam for students in higher education, we show how dimensions and other model parameters can be chosen in a real situation. Our model choice is based both on empirical evidence and on background knowledge of the test. We show that dimensionality influences the estimates of the item parameters, especially the discrimination parameter, which provides information about the quality of the item. We perform a simulation study to generalize our conclusions. Both the simulation study and the case study show that multidimensional models have the advantage of discriminating better between examinees. We conclude from the simulation study that it is safer to use a multidimensional model than a unidimensional one if it is unknown which model is correct.
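    The two-parameter logistic (2PL) model underlying this kind of analysis can be sketched in a few lines; the function name and item parameters below are illustrative, not taken from the paper:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct answer at
    ability theta for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A more discriminating item separates examinees of nearby ability
# more sharply (illustrative parameter values):
low  = p_2pl(1.0, a=0.5, b=0.0) - p_2pl(-1.0, a=0.5, b=0.0)
high = p_2pl(1.0, a=2.0, b=0.0) - p_2pl(-1.0, a=2.0, b=0.0)
print(round(low, 3), round(high, 3))  # the high-discrimination gap is larger
```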

  • 3.
    Ul Hassan, Mahmood
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    An exchange algorithm for optimal calibration of items in computerized achievement tests2021In: Computational Statistics & Data Analysis, ISSN 0167-9473, E-ISSN 1872-7352, Vol. 157, article id 107177Article in journal (Refereed)
    Abstract [en]

    The importance of large-scale achievement tests, like national tests in school, eligibility tests for university, or international assessments for evaluation of students, is increasing. Pretesting of questions for the above-mentioned tests is done to determine characteristic properties of the questions by adding them to an ordinary achievement test. If computerized tests are used, it has been shown using optimal experimental design methods that it is efficient to assign pretest questions to examinees based on their abilities. The specific distribution of abilities of the available examinees is considered and restricted optimal designs are applied. A new algorithm is developed which builds on an equivalence theorem. It discretizes the design space with the possibility to change the grid adaptively during the run, makes use of an exchange idea and filters computed designs. We illustrate through some examples how the algorithm works, as well as how convergence can be checked. The new algorithm is flexible and can be used even if different models are assumed for different questions.

  • 4.
    Bjermo, Jonas
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Efficient Estimation of Mean Ability Growth Using Vertical Scaling2021In: Applied measurement in education, ISSN 0895-7347, E-ISSN 1532-4818, Vol. 34, no 3, p. 163-178Article in journal (Refereed)
    Abstract [en]

    In recent years, the interest in measuring growth in student ability in various subjects between different grades in school has increased. Therefore, good precision in the estimated growth is of importance. This paper aims to compare estimation methods and test designs when it comes to precision and bias of the estimated growth of mean ability between two groups of students that differ substantially. This is performed by a simulation study. One- and two-parameter item response models are assumed and the estimated abilities are vertically scaled using the non-equivalent anchor test design by estimating the abilities in one single run, so-called concurrent calibration. The connection between the test design and the Fisher information is also discussed. The results indicate that the expected a posteriori estimation method is preferred when estimating differences in mean ability between groups. Results also indicate that a test design with common items of medium difficulty leads to better precision, which coincides with previous results from horizontal equating.

  • 5.
    Tsirpitzi, Renata Eirini
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal dose-finding for efficacy-safety models2021In: Biometrical Journal, ISSN 0323-3847, E-ISSN 1521-4036, Vol. 63, no 6, p. 1185-1201Article in journal (Refereed)
    Abstract [en]

    Dose-finding is an important part of the clinical development of a new drug. The purpose of dose-finding studies is to determine a suitable dose for future development based on both efficacy and safety. Optimal experimental designs have already been used to determine the design of such studies; however, that design is often focused on efficacy only. We consider an efficacy-safety model, which is a simplified version of the bivariate Emax model. We use here the clinical utility index concept, which provides the desirable balance between efficacy and safety. By maximizing the utility of the patients, we get the estimated dose. This desire leads us to locally c-optimal designs. An algebraic solution for c-optimal designs is determined for arbitrary c vectors using a multivariate version of Elfving's method. The solution shows that the expected therapeutic index of the drug is a key quantity determining the number of doses, the doses themselves, and their weights in the optimal design. A sequential design is proposed to solve the complication of parameter dependency, and it is illustrated in a simulation study.

  • 6.
    Ul Hassan, Mahmood
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal Item Calibration for Computerized Achievement Tests2019In: Psychometrika, ISSN 0033-3123, E-ISSN 1860-0980, Vol. 84, no 4, p. 1101-1128Article in journal (Refereed)
    Abstract [en]

    Item calibration is a technique to estimate characteristics of questions (called items) for achievement tests. In computerized tests, item calibration is an important tool for maintaining, updating and developing new items for an item bank. To efficiently sample examinees with specific ability levels for this calibration, we use optimal design theory assuming that the probability to answer correctly follows an item response model. Locally optimal unrestricted designs usually have a few design points for ability. In practice, it is hard to sample examinees from a population with these specific ability levels due to unavailability or limited availability of examinees. To counter this problem, we use the concept of optimal restricted designs and show that this concept naturally fits item calibration. We prove an equivalence theorem needed to verify optimality of a design. Locally optimal restricted designs provide intervals of ability levels for optimal calibration of an item. When assuming a two-parameter logistic model, several scenarios with D-optimal restricted designs are presented for calibration of a single item and simultaneous calibration of several items. These scenarios show that the naive way to sample examinees around unrestricted design points is not optimal.
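    For the two-parameter logistic model named in the abstract, the Fisher information of an item peaks at the item's difficulty, which is what makes ability-matched sampling efficient for calibration. A minimal sketch (function names and parameter values are illustrative):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct answer."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Information is maximal at theta = b (value a^2 / 4), which is why
# sampling examinees with ability near an item's difficulty is efficient.
print(item_information(1.0, a=1.5, b=1.0))  # 0.5625 = 1.5**2 / 4
```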

  • 7.
    Miller, Frank
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Burman, Carl-Fredrik
    A decision theoretical modeling for Phase III investments and drug licensing2018In: Journal of Biopharmaceutical Statistics, ISSN 1054-3406, E-ISSN 1520-5711, Vol. 28, no 4, p. 698-721Article in journal (Refereed)
    Abstract [en]

    For a new candidate drug to become an approved medicine, several decision points have to be passed. In this article, we focus on two of them: First, based on Phase II data, the commercial sponsor decides to invest (or not) in Phase III. Second, based on the outcome of Phase III, the regulator determines whether the drug should be granted market access. Assuming a population of candidate drugs with a distribution of true efficacy, we optimize the two stakeholders' decisions and study the interdependence between them. The regulator is assumed to seek to optimize the total public health benefit resulting from the efficacy of the drug and a safety penalty. In optimizing the regulatory rules, in terms of minimal required sample size and the Type I error in Phase III, we have to consider how these rules will modify the commercial optimization made by the sponsor. The results indicate that different Type I errors should be used depending on the rarity of the disease.

  • 8.
    Miller, Frank
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Zohar, Sarah
    Stallard, Nigel
    Madan, Jason
    Posch, Martin
    Hee, Siew Wan
    Pearce, Michael
    Vågerö, Mårten
    Day, Simon
    Approaches to sample size calculation for clinical trials in rare diseases2018In: Pharmaceutical statistics, ISSN 1539-1604, E-ISSN 1539-1612, Vol. 17, no 3, p. 214-230Article in journal (Refereed)
    Abstract [en]

    We discuss 3 alternative approaches to sample size calculation: traditional sample size calculation based on power to show a statistically significant effect, sample size calculation based on assurance, and sample size based on a decision-theoretic approach. These approaches are compared head-to-head for clinical trial situations in rare diseases. Specifically, we consider 3 case studies of rare diseases (Lyell disease, adult-onset Still disease, and cystic fibrosis) with the aim to plan the sample size for an upcoming clinical trial. We outline in detail the reasonable choice of parameters for these approaches for each of the 3 case studies and calculate sample sizes. We stress that the influence of the input parameters needs to be investigated in all approaches and recommend investigating different sample size approaches before deciding finally on the trial size. The sample size is influenced strongly by the choice of the treatment-effect parameter in all approaches, and by the parameter for the additional cost of the new treatment in the decision-theoretic approach. These should therefore be discussed extensively.
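    The first of the three approaches, the traditional power-based calculation, can be sketched for a two-arm trial with a normal endpoint; this is the textbook z-test formula, not the paper's case-study numbers:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.025, power=0.80):
    """Per-arm sample size for a two-arm z-test at one-sided level alpha,
    to detect a mean difference delta with the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Standardized effect 0.5, one-sided alpha 2.5%, 80% power:
print(n_per_arm(0.5, 1.0))  # 63 per arm
```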

  • 9. Posch, Martin
    et al.
    Klinglmueller, Florian
    König, Franz
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Estimation after blinded sample size reassessment2018In: Statistical Methods in Medical Research, ISSN 0962-2802, E-ISSN 1477-0334, Vol. 27, no 6, p. 1830-1846Article in journal (Refereed)
    Abstract [en]

    Blinded sample size reassessment is a popular means to control the power in clinical trials if no reliable information on nuisance parameters is available in the planning phase. We investigate how sample size reassessment based on blinded interim data affects the properties of point estimates and confidence intervals for parallel group superiority trials comparing the means of a normal endpoint. We evaluate the properties of two standard reassessment rules that are based on the sample size formula of the z-test, derive the worst case reassessment rule that maximizes the absolute mean bias and obtain an upper bound for the mean bias of the treatment effect estimate.
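    A blinded reassessment rule of the kind studied here can be sketched as follows; this is a generic illustration using the one-sample (lumped) variance estimate from pooled interim data, not necessarily either of the paper's two rules:

```python
import math
from statistics import NormalDist

def reassessed_n(pooled_values, delta, alpha=0.025, power=0.9):
    """Blinded sample size reassessment: estimate the variance from interim
    data pooled over the (still blinded) arms, then plug it into the z-test
    sample size formula for detecting a mean difference delta. Note that the
    lumped variance overestimates sigma^2 under the alternative hypothesis."""
    m = sum(pooled_values) / len(pooled_values)
    s2 = sum((x - m) ** 2 for x in pooled_values) / (len(pooled_values) - 1)
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    return math.ceil(2 * z ** 2 * s2 / delta ** 2)

# Larger blinded variance estimates trigger larger reassessed sample sizes.
print(reassessed_n([0, 2, 0, 2], delta=1.0))
```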

  • 10. Friede, Tim
    et al.
    Posch, Martin
    Zohar, Sarah
    Alberti, Corinne
    Benda, Norbert
    Comets, Emmanuelle
    Day, Simon
    Dmitrienko, Alex
    Graf, Alexandra
    Guenhan, Burak Kuersad
    Hee, Siew Wan
    Lentz, Frederike
    Madan, Jason
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Ondra, Thomas
    Pearce, Michael
    Roever, Christian
    Toumazi, Artemis
    Unkel, Steffen
    Ursino, Moreno
    Wassmer, Gernot
    Stallard, Nigel
    Recent advances in methodology for clinical trials in small populations: the InSPiRe project2018In: Orphanet Journal of Rare Diseases, ISSN 1750-1172, E-ISSN 1750-1172, Vol. 13, article id 186Article, review/survey (Refereed)
    Abstract [en]

    Where there are a limited number of patients, such as in a rare disease, clinical trials in these small populations present several challenges, including statistical issues. This led to an EU FP7 call for proposals in 2013. One of the three projects funded was the Innovative Methodology for Small Populations Research (InSPiRe) project. This paper summarizes the main results of the project, which was completed in 2017. The InSPiRe project has led to the development of novel statistical methodology for clinical trials in small populations in four areas. We have explored new decision-making methods for small-population clinical trials, using a Bayesian decision-theoretic framework to compare costs with potential benefits; developed approaches for targeted treatment trials, enabling simultaneous identification of subgroups and confirmation of the treatment effect for these patients; worked on early-phase clinical trial design and on extrapolation from adult to pediatric studies, developing methods that enable use of pharmacokinetic and pharmacodynamic data; and developed improved robust meta-analysis methods for a small number of trials to support the planning, analysis and interpretation of a trial, as well as enabling extrapolation between patient groups. In addition to scientific publications, we have contributed to regulatory guidance and produced free software in order to facilitate implementation of the novel methods.

  • 11. Pearce, Michael
    et al.
    Hee, Siew Wan
    Madan, Jason
    Posch, Martin
    Day, Simon
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Zohar, Sarah
    Stallard, Nigel
    Value of information methods to design a clinical trial in a small population to optimise a health economic utility function2018In: BMC Medical Research Methodology, ISSN 1471-2288, E-ISSN 1471-2288, Vol. 18, article id 20Article in journal (Refereed)
    Abstract [en]

    Background: Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. Methods: We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. Results: The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. 
Conclusions: Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.

  • 12. Broberg, Per
    et al.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Conditional estimation in two-stage adaptive designs2017In: Biometrics, ISSN 0006-341X, E-ISSN 1541-0420, Vol. 73, no 3, p. 895-904Article in journal (Refereed)
    Abstract [en]

    We consider conditional estimation in two-stage sample size adjustable designs and the consequent bias. More specifically, we consider a design which permits raising the sample size when interim results look rather promising, and which retains the originally planned sample size when results look very promising. The estimation procedures reported comprise the unconditional maximum likelihood, the conditionally unbiased Rao-Blackwell estimator, the conditional median unbiased estimator, and the conditional maximum likelihood with and without bias correction. We compare these estimators based on analytical results and a simulation study. We show how they can be applied in a real clinical trial setting.

  • 13. Stallard, Nigel
    et al.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Day, Simon
    Hee, Siew Wan
    Madan, Jason
    Zohar, Sarah
    Posch, Martin
    Determination of the optimal sample size for a clinical trial accounting for the population size2017In: Biometrical Journal, ISSN 0323-3847, E-ISSN 1521-4036, Vol. 59, no 4, p. 609-625Article in journal (Refereed)
    Abstract [en]

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N∗^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can be reasonable also for relatively small sample sizes.
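    The O(N^1/2) scaling can be illustrated numerically; the constant below is hypothetical, since the real constant depends on the utility function and endpoint distribution:

```python
import math

# If the optimal trial size grows like c * sqrt(N) for a problem-specific
# constant c (the value 2.0 here is hypothetical), then quadrupling the
# population size N only doubles the optimal trial size.
c = 2.0
for N in (10_000, 40_000, 160_000):
    print(N, round(c * math.sqrt(N)))  # 200, 400, 800
```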

  • 14. Hee, Siew Wan
    et al.
    Willis, Adrian
    Smith, Catrin Tudur
    Day, Simon
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Madan, Jason
    Posch, Martin
    Zohar, Sarah
    Stallard, Nigel
    Does the low prevalence affect the sample size of interventional clinical trials of rare diseases? An analysis of data from the aggregate analysis of clinicaltrials.gov2017In: Orphanet Journal of Rare Diseases, ISSN 1750-1172, E-ISSN 1750-1172, Vol. 12, article id 44Article in journal (Refereed)
    Abstract [en]

    Background: Clinical trials are typically designed using the classical frequentist framework to constrain type I and II error rates. Sample sizes required in such designs typically range from hundreds to thousands of patients, which can be challenging for rare diseases. It has been shown that rare disease trials have smaller sample sizes than non-rare disease trials. Indeed, some orphan drugs were approved by the European Medicines Agency based on studies with as few as 12 patients. However, some studies supporting marketing authorisation included several hundred patients. In this work, we explore the relationship between disease prevalence and other factors and the size of interventional phase 2 and 3 rare disease trials conducted in the US and/or EU. We downloaded all clinical trials from the Aggregate Analysis of ClinicalTrials.gov (AACT) and identified rare disease trials by cross-referencing MeSH terms in AACT with the list from Orphadata. We examined the effects of prevalence and phase of study in a multiple linear regression model adjusting for other statistically significant trial characteristics. Results: Of 186,941 ClinicalTrials.gov trials, only 1567 (0.8%) studied a single rare condition with prevalence information from Orphadata. There were 19 (1.2%) trials studying diseases with prevalence <1/1,000,000, 126 (8.0%) trials with 1-9/1,000,000, 791 (50.5%) trials with 1-9/100,000 and 631 (40.3%) trials with 1-5/10,000. Of the 1567 trials, 1160 (74%) were phase 2 trials. The fitted mean sample size for the rarest diseases (prevalence <1/1,000,000) in phase 2 trials was the lowest (mean, 15.7; 95% CI, 8.7-28.1), while means were similar across all the other prevalence classes: 26.2 (16.1-42.6), 33.8 (22.1-51.7) and 35.6 (23.3-54.3) for prevalence 1-9/1,000,000, 1-9/100,000 and 1-5/10,000, respectively.
Fitted mean sizes of phase 3 trials of rarer diseases, <1/1,000,000 (19.2, 6.9-53.2) and 1-9/1,000,000 (33.1, 18.6-58.9), were similar to those in phase 2 but were statistically significantly lower than those of trials of the slightly less rare diseases, 1-9/100,000 (75.3, 48.2-117.6) and 1-5/10,000 (77.7, 49.6-121.8). Conclusions: We found that prevalence was associated with the size of phase 3 trials: trials of rarer diseases were noticeably smaller than trials of less rare diseases, with phase 3 trials of rarer diseases (prevalence <1/100,000) similar in size to those of phase 2, whereas phase 3 trials of less rare diseases (prevalence >= 1/100,000) were larger than those of phase 2.

  • 15. Hee, Siew Wan
    et al.
    Hamborg, Thomas
    Day, Simon
    Madan, Jason
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Posch, Martin
    Zohar, Sarah
    Stallard, Nigel
    Decision-theoretic designs for small trials and pilot studies: A review2016In: Statistical Methods in Medical Research, ISSN 0962-2802, E-ISSN 1477-0334, Vol. 25, no 3, p. 1022-1038Article, review/survey (Refereed)
    Abstract [en]

    Pilot studies and other small clinical trials are often conducted but serve a variety of purposes and there is little consensus on their design. One paradigm that has been suggested for the design of such studies is Bayesian decision theory. In this article, we review the literature with the aim of summarizing current methodological developments in this area. We find that decision-theoretic methods have been applied to the design of small clinical trials in a number of areas. We divide our discussion of published methods into those for trials conducted in a single stage, those for multi-stage trials in which decisions are made through the course of the trial at a number of interim analyses, and those that attempt to design a series of clinical trials or a drug development programme. In all three cases, a number of methods have been proposed, depending on the decision maker’s perspective being considered and the details of utility functions that are used to construct the optimal design.

  • 16. Ondra, Thomas
    et al.
    Dmitrienko, Alex
    Friede, Tim
    Graf, Alexandra
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Stallard, Nigel
    Posch, Martin
    Methods for identification and confirmation of targeted subgroups in clinical trials: A systematic review2016In: Journal of Biopharmaceutical Statistics, ISSN 1054-3406, E-ISSN 1520-5711, Vol. 26, no 1, p. 99-119Article in journal (Refereed)
    Abstract [en]

    Important objectives in the development of stratified medicines include the identification and confirmation of subgroups of patients with a beneficial treatment effect and a positive benefit-risk balance. We report the results of a literature review on methodological approaches to the design and analysis of clinical trials investigating a potential heterogeneity of treatment effects across subgroups. The identified approaches are classified based on certain characteristics of the proposed trial designs and analysis methods. We distinguish between exploratory and confirmatory subgroup analysis, frequentist, Bayesian and decision-theoretic approaches and, last, fixed-sample, group-sequential, and adaptive designs and illustrate the available trial designs and analysis strategies with published case studies.

  • 17.
    Fackle-Fornius, Ellinor
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Nyquist, Hans
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Implementation of maximin efficient designs in dose-finding studies2015In: Pharmaceutical statistics, ISSN 1539-1604, E-ISSN 1539-1612, Vol. 14, no 1, p. 63-73Article in journal (Refereed)
    Abstract [en]

    This paper considers the maximin approach for designing clinical studies. A maximin efficient design maximizes the smallest efficiency when compared with a standard design, as the parameters vary in a specified subset of the parameter space. To specify this subset of parameters in a real situation, a four-step procedure using elicitation based on expert opinions is proposed. Further, we describe why and how we extend the initially chosen subset of parameters to a much larger set in our procedure. By this procedure, the maximin approach becomes feasible for dose-finding studies. Maximin efficient designs have been shown to be numerically difficult to construct. However, a new algorithm, the H-algorithm, considerably simplifies the construction of these designs. We exemplify the maximin efficient approach by considering a sigmoid Emax model describing a dose–response relationship and compare inferential precision with that obtained when using a uniform design. The design obtained is shown to be at least 15% more efficient than the uniform design.

  • 18.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    When is an adaptive design useful in clinical dose-finding trials?2015In: Festschrift in Honor of Hans Nyquist on the occasion of his 65th birthday / [ed] Ellinor Fackle-Fornius, Stockholm: Stockholm University, 2015, p. 28-43Chapter in book (Other academic)
    Abstract [en]

    During the development process for new drugs, dose-finding trials have to be conducted and the choice of their design is an important issue. Traditionally, the standard design is a balanced design where equally large groups of patients are treated with different doses of the new drug or with a control. However, it has been identified that other innovative designs might be more efficient: optimal designs, which use non-balanced allocation to dose, and adaptive designs, where the allocation to the doses can be changed during the study based on results collected earlier in the study. In a simulation study we compare efficiencies of balanced non-adaptive, optimal non-adaptive, adaptive two-stage and fully sequential adaptive designs. In all situations considered, one can gain from applying optimal design theory. However, when moving from the optimal non-adaptive design to an adaptive design, there are situations where the design is improved and other situations where there is only a minor or no gain. Based on our considered situations, we generalize our observations to answer when an adaptive design is useful.

  • 19.
    Miller, Frank
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics. AstraZeneca, Sweden.
    Björnsson, Marcus
    Svensson, Ola
    Karlsten, Rolf
    Experiences with an adaptive design for a dose-finding study in patients with osteoarthritis2014In: Contemporary Clinical Trials, ISSN 1551-7144, E-ISSN 1559-2030, Vol. 37, no 2, p. 189-199Article in journal (Refereed)
    Abstract [en]

    Dose-finding studies in non-oncology areas are usually conducted in Phase II of the development process of a new potential medicine, and it is key to choose a good design for such a study, as the results will decide if and how to proceed to Phase III. The present article focuses on the design of a dose-finding study for pain in osteoarthritis patients treated with the TRPV1 antagonist AZD1386. We describe different design alternatives in the planning of this study, the reasoning for choosing the adaptive design, and experiences with its conduct and interim analysis. Three alternatives were proposed: one single dose-finding study with parallel design, a programme with a smaller Phase IIa study followed by a Phase IIb dose-finding study, and an adaptive dose-finding study. We describe these alternatives in detail and explain why the adaptive design was chosen for the study. We give insights into design aspects of the adaptive study which need to be pre-planned, like interim decision criteria, the statistical analysis method, and the setup of a Data Monitoring Committee. Based on the interim analysis, it was recommended to stop the study for futility since AZD1386 showed no significant pain decrease based on the primary variable. We discuss results and experiences from the conduct of the study with the novel design approach. Substantial cost savings were achieved compared with the option of a single Phase II dose-finding study. However, we point out several challenges with this approach.

  • 20.
    Miller, Frank
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Friede, Tim
    Sample size re-estimation and continuous monitoring of the variance in longitudinal trials2014In: Adaptive Designs & Multiple Testing Procedures, 2014, p. 21-21Conference paper (Other academic)
    Abstract [en]

    In many clinical trials, frequent longitudinal data are collected from each patient. For example, in chronic pain trials, daily pain measurements can be collected from the patients during several weeks, which leads to a large number of highly correlated post-baseline measurements for each patient.

    Blinded sample size re-estimation or continuous monitoring of the variance (Friede and Miller, 2012) can deal with situations where uncertainty regarding the true variance exists. In trials with longitudinal data, it is common that at interim looks only a restricted number of patients have completed the study, while a large number have started treatment and contributed first post-baseline data, although their endpoint data are not yet available. Nevertheless, it is reasonable that the partial data available from these patients give useful information about the variance of the endpoint (Wüst and Kieser, 2003; Wachtlin and Kieser, 2013).

    In this talk, we first quantify the gain of including partial data from patients when estimating the variance. Variability of sample size is often reduced but the amount of reduction depends on the correlation between measurements. Then, our main interest is to investigate the usefulness of a parametric model assumption for the covariance structure. We quantify the gain from the model assumption when the assumed model is correct and discuss consequences when a wrong model is assumed.
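
As an illustration of why partial post-baseline data can sharpen a blinded variance estimate, the sketch below simulates repeated measures with a compound-symmetry covariance and compares the Monte Carlo variability of a completers-only estimator of the endpoint variance with a pooled estimator that also uses patients observed at the first two visits only. The sample sizes, the covariance model, and the correlation value are illustrative assumptions, not figures from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, m, sigma2=4.0, rho=0.6):
    """n patients, m visits, compound-symmetry covariance: equal variance
    sigma2 at every visit and common correlation rho between visits."""
    cov = sigma2 * (rho * np.ones((m, m)) + (1.0 - rho) * np.eye(m))
    return rng.multivariate_normal(np.zeros(m), cov, size=n)

def pooled_variance(columns):
    """Pool per-visit sample variances, weighted by degrees of freedom;
    unbiased when the variance is equal at every visit, as under
    compound symmetry."""
    num = sum((len(c) - 1) * np.var(c, ddof=1) for c in columns)
    den = sum(len(c) - 1 for c in columns)
    return num / den

m, reps = 6, 400
est_completers, est_pooled = [], []
for _ in range(reps):
    comp = simulate(40, m)             # patients who completed all m visits
    part = simulate(100, m)[:, :2]     # patients with only 2 visits so far
    # Completers-only estimate of the endpoint (last-visit) variance
    est_completers.append(np.var(comp[:, -1], ddof=1))
    # Pooled estimate that also uses the partial data
    cols = [comp[:, j] for j in range(m)] + [part[:, j] for j in range(2)]
    est_pooled.append(pooled_variance(cols))

# Including partial data reduces the variability of the variance estimate
print(np.std(est_completers), np.std(est_pooled))
```

With a higher correlation rho, the pooled columns carry more redundant information and the reduction shrinks, in line with the abstract's point that the gain depends on the correlation between measurements.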

  • 21.
    Miller, Frank
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Fackle-Fornius, Ellinor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Nyquist, Hans
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Maximin Efficient Designs for Estimating the Interesting Part of a Dose-Effect Curve2013In: 6th International Conference of the ERCIM (European Research Consortium for Informatics and Mathematics) Working Group on Computational and Methodological Statistics (ERCIM 2013), 2013Conference paper (Other academic)
    Abstract [en]

    As the costs of clinical studies increase, the demand for more efficient designs also increases. Therefore, there is a growing interest in introducing designs that optimize precision in clinical studies. Unfortunately, optimal designs generally require knowledge of unknown parameters. We consider the maximin approach to handle this problem. A maximin efficient design maximizes the efficiency, compared to a standard design, as the parameters vary over a specified subset of the parameter space. Maximin efficient designs have been shown to be numerically difficult to construct. However, a new algorithm, the H-algorithm, considerably simplifies the construction of these designs. We exemplify the maximin efficient approach by considering a sigmoid Emax model describing a dose-response relationship and compare inferential precision with that obtained when using a uniform design. In a first approach to constructing a maximin efficient design, we specify a number of possible scenarios, each describing a possible shape of the dose-response relation. The design obtained is shown to be at least 15 percent more efficient than the uniform design. It is then shown that the obtained design is maximin efficient also for a much larger parameter set, defined by parameter values between those specified by the initial scenarios.

  • 22. Karin, Alexandra
    et al.
    Hannesdottir, Kristin
    Jaeger, Judith
    Annas, Peter
    Segerdahl, Märta
    Karlsson, Pär
    Sjögren, Niclas
    von Rosen, Tatjana
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Psychometric evaluation of ADAS-Cog and NTB for measuring drug response2013In: Acta Neurologica Scandinavica, ISSN 0001-6314, E-ISSN 1600-0404, Vol. 129, no 2, p. 113-122Article in journal (Refereed)
    Abstract [en]

    AIMS: To conduct a psychometric analysis to determine the adequacy of instruments that measure cognition in Alzheimer's disease trials.

    BACKGROUND: Both the Alzheimer's Disease Assessment Scale - Cognition (ADAS-Cog) and the Neuropsychological Test Battery (NTB) are validated outcome measures for clinical trials in Alzheimer's disease and are also approved for regulatory purposes. However, it is not clear how comparable they are in measuring cognitive function. In fact, many recent trials in Alzheimer's disease patients have failed, and it has been questioned whether ADAS-Cog is still a sensitive measure.

    MATERIALS AND METHODS: The present paper examines the psychometric properties of ADAS-Cog and NTB, based on a post hoc analysis of data from a clinical trial (NCT01024660), which was conducted by AstraZeneca, in mild-to-moderate Alzheimer's disease (AD) patients, with a Mini Mental State Examination (MMSE) Total score 16-24. Acceptability, reliability, different types of validity and ability to detect change were assessed using relevant statistical methods. Total scores of both tests, as well as separate domains of both tests, including the Wechsler Memory Scale (WMS), Rey Auditory Verbal Learning Test (RAVLT) and Delis-Kaplan Executive Function System (D-KEFS) Verbal Fluency Condition, were analyzed.

    RESULTS: Overall, NTB performed well, with acceptable reliability and ability to detect change, while ADAS-Cog had insufficient psychometric properties, including ceiling effects in 8 out of a total of 11 ADAS-Cog items in mild AD patients, as well as low test-retest reliability in some of the items.

    DISCUSSION: Based on a direct comparison on the same patient sample, we see advantages of the NTB compared with the ADAS-Cog for the evaluation of cognitive function in the population of mild-to-moderate AD patients. The results suggest that not all ADAS-Cog items are relevant for both the mild and the moderate AD populations.

    CONCLUSIONS: This validation study demonstrates satisfactory psychometric properties of the NTB, while ADAS-Cog was found to be psychometrically inadequate.

  • 23.
    Ul Hassan, Mahmood
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    An exchange algorithm for optimal calibration of items in computerized achievement testsManuscript (preprint) (Other academic)
    Abstract [en]

    The importance of large-scale achievement tests, like national tests in school, eligibility tests for university, or international assessments for the evaluation of students, is increasing. Pretesting of questions for the above-mentioned tests is done to determine characteristic properties of the questions by adding them to an ordinary achievement test. If computerized tests are used, it has been shown using optimal experimental design methods that it is efficient to assign pretest questions to examinees based on their abilities. We can consider the specific distribution of abilities of the available examinees and apply restricted optimal designs. A previously used algorithm optimizes the criterion directly. We develop here a new algorithm which builds on an equivalence theorem. It discretizes the design space with the possibility to change the grid during the run, makes use of an exchange idea, and filters computed designs. We illustrate how the algorithm works in some examples and how convergence can be checked. We show that this new algorithm can be used flexibly even if different models are assumed for different questions.
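
The idea of ability-based assignment can be illustrated with the two-parameter logistic (2PL) model, where the Fisher information of an item at ability theta is a²P(1−P). The greedy rule below, a deliberate simplification and not the exchange algorithm of the manuscript, gives each examinee the pretest item that is most informative at his or her ability; the item parameters are hypothetical.

```python
import numpy as np

def item_information(theta, a, b):
    """2PL item information at ability theta: a^2 * P * (1 - P),
    where P = 1 / (1 + exp(-a * (theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical pretest items: (discrimination a, difficulty b)
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 1.5)]

def allocate(theta):
    """Assign the examinee the item with maximal information at theta."""
    return int(np.argmax([item_information(theta, a, b) for a, b in items]))

# Low-ability examinees get the easy item, high-ability ones the hard item
print([allocate(t) for t in (-1.5, 0.0, 1.3)])
```

A restricted optimal design additionally accounts for the available ability distribution, so that informative but rare examinees are not over-demanded; the greedy rule above ignores that constraint.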

  • 24.
    Ul Hassan, Mahmood
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal calibration of items for multidimensional achievement testsManuscript (preprint) (Other academic)
    Abstract [en]

    Multidimensional achievement tests have recently been gaining importance in educational and psychological measurement due to their diagnostic nature. Diagnostic pretests help organizations assist students in determining which ability, within a particular domain of knowledge, needs to be improved for better performance in the test. To develop diagnostic pretest items for multidimensional achievement tests, we generalize the previously developed exchange algorithm to the multidimensional setting. We also develop an asymptotic theorem which helps us choose, for an item, the extreme ability levels at which to sample the examinees.

  • 25.
    Tsirpitzi, Renata Eirini
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Burman, Carl-Fredrik
    Optimal designs for distorted regression modelsManuscript (preprint) (Other academic)
    Abstract [en]

    Much of traditional optimal design theory relies on specifying a model with only a small number of parameters. In many applications, such models will give reasonable approximations. However, they will often be found not to be entirely correct when enough data are at hand. We consider a low-dimensional model with a distortion term. Our objective is to estimate the combined model, including the distortion. In our situation, the low-dimensional model can be viewed as a fixed effect and the distortion term as a random effect in a mixed-effects model. Since we are interested in estimating the combination of fixed and random effects, our aim is to predict within the mixed model. We describe how we minimize the prediction error using an optimal design, by constructing the Best Linear Unbiased Estimator and Predictor in our model. Many algorithms can be used to construct an optimal design; we apply here the Fedorov algorithm, which exchanges observations between the design points. Running the algorithm on the distorted model, we present the optimal design in different cases. The results indicate that the optimal design depends strongly on the sample size. In low-information situations, optimal designs for the low-dimensional model are sufficient, while including the distortion term produces very different designs in data-rich cases.
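
The Fedorov exchange idea mentioned above can be sketched for a simple textbook case: an exact D-optimal design for a quadratic regression model on a candidate grid, where a design point is swapped for a candidate whenever the swap increases det(X'X). This is a generic version under illustrative assumptions, not the distorted-model algorithm of the manuscript.

```python
import numpy as np

def reg(x):
    """Regression vector for a quadratic model E[y] = b0 + b1*x + b2*x^2."""
    return np.array([1.0, x, x * x])

def fedorov_exchange(candidates, n, max_passes=100, seed=0):
    """Basic Fedorov-type exchange for an exact n-point D-optimal design:
    swap a design point for a candidate whenever det(X'X) increases;
    stop when a full pass over all points yields no improvement."""
    rng = np.random.default_rng(seed)

    def dcrit(points):
        X = np.array([reg(x) for x in points])
        return np.linalg.det(X.T @ X)

    design = [float(x) for x in rng.choice(candidates, size=n)]
    best = dcrit(design)
    for _ in range(max_passes):
        improved = False
        for i in range(n):
            for c in candidates:
                trial = design.copy()
                trial[i] = float(c)
                d = dcrit(trial)
                if d > best + 1e-12:
                    design, best, improved = trial, d, True
        if not improved:
            break
    return sorted(design), best

grid = np.linspace(-1.0, 1.0, 21)
design, crit_value = fedorov_exchange(grid, n=6)
print(design, crit_value)
```

For the quadratic model on [-1, 1], the exchange ends with the mass split between -1, 0, and 1, the well-known D-optimal support for this model; in the distorted-model setting of the manuscript, the criterion and hence the resulting designs differ with the sample size.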

  • 26.
    Tsirpitzi, Renata Eirini
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal dose-finding for drug combinationsManuscript (preprint) (Other academic)
    Abstract [en]

    Optimal experimental designs are an essential part of the clinical development of a drug and are used to determine the design of dose-finding studies. Often, however, these dose-finding designs aim at a single drug. We consider an efficacy Emax model for the combination of two drugs. The interaction is characterized as synergy if it is positive, antagonism if it is negative, and additivity when there is no interaction between the two drugs. We calculate D-optimal designs algebraically and numerically. The solutions show that the number of doses, the doses themselves, and their weights in the D-optimal design depend on the parameter values in the model: ED50x, ED50y and the interaction term γ.

  • 27.
    Tsirpitzi, Renata Eirini
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal dose-finding for efficacy-safety-models with placebo effectsManuscript (preprint) (Other academic)
    Abstract [en]

    Knowledge of what good or optimal designs look like is essential for dose-finding trials. In many cases, dose-finding trials consider both efficacy and safety. We therefore analyse a bivariate model for these two outcomes. In contrast to earlier research, we consider a model that also includes placebo effects, to see their impact on the optimal design. We calculate D-optimal designs algebraically and numerically. We see that one more design point is necessary, but that otherwise the optimal design has a structure similar to that for the model without placebo effects. We confirm that the drug's therapeutic index has a significant impact on the shape of the optimal design.

  • 28.
    Bjermo, Jonas
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Fackle Fornius, Ellinor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal Item Calibration in the Context of the Swedish Scholastic Aptitude TestIn: Article in journal (Other academic)
    Abstract [en]

    Large-scale achievement tests require the existence of item banks with items for use in future tests. Before an item is included in the bank, its characteristics need to be estimated. The process of estimating the item characteristics is called item calibration. For the quality of future achievement tests, it is important to perform this calibration well, and it is desirable to estimate the item characteristics as efficiently as possible. Methods of optimal design have been developed to allocate calibration items to examinees with the most suitable ability. Theoretical evidence shows advantages of using ability-dependent allocation of calibration items. However, it is not clear whether these theoretical results also hold in a real testing situation. In this paper, we investigate the performance of an optimal ability-dependent allocation in the context of the Swedish Scholastic Aptitude Test (SweSAT) and quantify the gain from using the optimal allocation. On average over all items, we see an improved precision of calibration. While this average improvement is moderate, we are able to identify for which kinds of items the method works well. This enables targeting specific item types for optimal calibration. We also discuss possibilities for improving the method.

  • 29.
    Bjermo, Jonas
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Fackle-Fornius, Ellinor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimizing Calibration Designs with Uncertainty in AbilitiesManuscript (preprint) (Other academic)
    Abstract [en]

    In computerized adaptive tests, some newly developed items are often added for pretesting purposes. In this pretesting, item characteristics are estimated, a process called calibration. It is promising to allocate calibration items to examinees based on their abilities, and methods from optimal experimental design have been used for this. However, the abilities of the examinees have usually been assumed to be known for this allocation. In practice, the abilities are estimates based on a limited number of operational items. We develop theory for handling this uncertainty in abilities properly and show how an optimal calibration design can be derived in this situation. The method has been implemented in an R package. We see that the derived optimal calibration designs are more robust when the uncertainty in abilities is acknowledged.
