Assessing, Testing and Estimating the Amount of Fine-Tuning by Means of Active Information
Stockholm University, Faculty of Science, Department of Mathematics. ORCID iD: 0000-0003-2767-8818
Number of Authors: 2
2022 (English). In: Entropy, E-ISSN 1099-4300, Vol. 24, no 10, article id 1323. Article in journal (Refereed), Published.
Abstract [en]

A general framework is introduced to estimate how much external information has been infused into a search algorithm, the so-called active information. This is rephrased as a test of fine-tuning, where tuning corresponds to the amount of pre-specified knowledge that the algorithm makes use of in order to reach a certain target. A function f quantifies specificity for each possible outcome x of a search, so that the target of the algorithm is a set of highly specified states, whereas fine-tuning occurs if it is much more likely for the algorithm to reach the target as intended than by chance. The distribution of a random outcome X of the algorithm involves a parameter θ that quantifies how much background information has been infused. A simple choice of this parameter is to use θf in order to exponentially tilt the distribution of the outcome of the search algorithm under the null distribution of no tuning, so that an exponential family of distributions is obtained. Such algorithms are obtained by iterating a Metropolis–Hastings type of Markov chain, which makes it possible to compute their active information under the equilibrium and non-equilibrium of the Markov chain, with or without stopping when the targeted set of fine-tuned states has been reached. Other choices of tuning parameters θ are discussed as well. Nonparametric and parametric estimators of active information and tests of fine-tuning are developed when repeated and independent outcomes of the algorithm are available. The theory is illustrated with examples from cosmology, student learning, reinforcement learning, a Moran type model of population genetics, and evolutionary programming.
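The abstract's two central constructs, exponential tilting and active information, can be made concrete with a short numerical sketch. Under exponential tilting, the tuned distribution satisfies p_θ(x) ∝ p_0(x)·exp(θ f(x)), and active information is the log-ratio of the probability of reaching the target set under tuning versus under the null. The Python sketch below illustrates these standard definitions under assumptions not taken from the paper: a finite outcome space, a uniform null distribution, a linear specificity function f, and arbitrary example values for θ, the target threshold, and the sample size.

```python
import numpy as np

# Illustrative sketch of exponential tilting and active information.
# All names and numeric values are hypothetical choices, not from the paper.

rng = np.random.default_rng(0)

n_states = 100                       # finite outcome space x = 0, ..., 99
f = np.linspace(0.0, 1.0, n_states)  # specificity f(x); higher = more specified
target = f >= 0.9                    # target set T: highly specified states

p0 = np.full(n_states, 1.0 / n_states)   # null distribution: no tuning

def tilted(theta):
    """Exponential tilting: p_theta(x) proportional to p0(x) * exp(theta * f(x))."""
    w = p0 * np.exp(theta * f)
    return w / w.sum()

theta = 5.0
p_theta = tilted(theta)

# Active information: log-ratio of target probabilities, tuned vs. null.
I_plus = np.log2(p_theta[target].sum() / p0[target].sum())
print(f"P0(T) = {p0[target].sum():.4f}, Ptheta(T) = {p_theta[target].sum():.4f}")
print(f"Active information I+ = {I_plus:.3f} bits")

# Nonparametric plug-in estimate from repeated, independent outcomes of the
# tuned search, mirroring the estimation setting described in the abstract.
sample = rng.choice(n_states, size=2000, p=p_theta)
I_hat = np.log2(np.isin(sample, np.flatnonzero(target)).mean() / p0[target].sum())
print(f"Plug-in estimate of I+ from 2000 draws: {I_hat:.3f} bits")
```

With θ = 0 the tilted distribution reduces to the null and the active information is zero; larger θ concentrates the search on highly specified states, so I+ grows with the amount of infused background information.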

Place, publisher, year, edition, pages
2022. Vol. 24, no 10, article id 1323
Keywords [en]
active information, exponential tilting, fine-tuning, functional information, large deviations, Markov chains, Metropolis-Hastings, Moran model, statistical estimation and testing
National Category
Mathematics
Identifiers
URN: urn:nbn:se:su:diva-211098
DOI: 10.3390/e24101323
ISI: 000872415400001
Scopus ID: 2-s2.0-85140643541
OAI: oai:DiVA.org:su-211098
DiVA id: diva2:1709795
Available from: 2022-11-09. Created: 2022-11-09. Last updated: 2023-03-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Hössjer, Ola
