Deterministic limit of temporal difference reinforcement learning for stochastic games
Stockholm University, Faculty of Science, Stockholm Resilience Centre. Potsdam Institute for Climate Impact Research, Germany.
Number of Authors: 3
2019 (English) In: Physical Review E, ISSN 2470-0045, E-ISSN 2470-0053, Vol. 99, no. 4, article id 043305
Article in journal (Refereed) Published
Abstract [en]

Reinforcement learning in multiagent systems has been studied in the fields of economic game theory, artificial intelligence, and statistical physics by developing an analytical understanding of the learning dynamics (often in relation to the replicator dynamics of evolutionary game theory). However, most of these analytical studies focus on repeated normal-form games, which have only a single environmental state. Environmental dynamics, i.e., changes in the state of the environment that affect the agents' payoffs, have received less attention, and a universal method for obtaining deterministic equations from established multistate reinforcement learning algorithms has been lacking. In this work we present a methodological extension, separating the interaction timescale from the adaptation timescale, to derive the deterministic limit of a general class of reinforcement learning algorithms called temporal difference learning. This form of learning is equipped to function in more realistic multistate environments by using the estimated value of future environmental states to adapt the agent's behavior. We demonstrate the potential of our method with three well-established learning algorithms: Q learning, SARSA learning, and actor-critic learning. Illustrations of their dynamics on two multiagent, multistate environments reveal a wide range of dynamical regimes, such as convergence to fixed points, limit cycles, and even deterministic chaos.
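
The core idea in the abstract lends itself to a compact numerical illustration. Below is a minimal Python sketch, not the authors' implementation, of one way to realize the deterministic limit of Q learning in a stochastic game: the stochastic sample update is replaced by its expectation under the agents' current joint softmax policies. The two-agent, two-state, two-action environment, the parameter values (alpha, gamma, and the choice intensity beta), and all names are illustrative assumptions, not taken from the paper.

    # Minimal sketch (assumed setup, not the paper's reference code) of
    # deterministic Q-learning dynamics: each update applies the EXPECTED
    # temporal difference error under the current joint policy instead of
    # a sampled one, which makes the learning trajectory deterministic.
    import numpy as np

    n_agents, n_states, n_actions = 2, 2, 2
    alpha, gamma, beta = 0.05, 0.9, 25.0  # learning rate, discount, choice intensity

    rng = np.random.default_rng(0)
    # R[i, s, a0, a1]: payoff to agent i in state s under joint action (a0, a1)
    R = rng.uniform(-1, 1, (n_agents, n_states, n_actions, n_actions))
    # T[s, a0, a1, s2]: state transition probabilities, normalized over s2
    T = rng.uniform(0.1, 1.0, (n_states, n_actions, n_actions, n_states))
    T /= T.sum(axis=-1, keepdims=True)

    def softmax_policy(Q):
        """Boltzmann policy per agent: X[i, s, a] proportional to exp(beta * Q[i, s, a])."""
        z = beta * (Q - Q.max(axis=-1, keepdims=True))
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def deterministic_q_step(Q):
        """One step of the deterministic (expected) Q-learning dynamics."""
        X = softmax_policy(Q)
        Qn = Q.copy()
        for i in range(n_agents):
            j = 1 - i  # opponent index; this sketch assumes exactly two agents
            for s in range(n_states):
                for a in range(n_actions):
                    # Average the TD error over the opponent's policy,
                    # conditioning on agent i playing action a in state s.
                    exp_td = 0.0
                    for b in range(n_actions):
                        joint = (a, b) if i == 0 else (b, a)
                        r = R[i, s][joint]
                        # Expected max-value of the successor state (Q learning).
                        v_next = T[s][joint] @ Q[i].max(axis=-1)
                        exp_td += X[j, s, b] * (r + gamma * v_next - Q[i, s, a])
                    Qn[i, s, a] = Q[i, s, a] + alpha * exp_td
        return Qn

    Q = np.zeros((n_agents, n_states, n_actions))
    for _ in range(2000):
        Q = deterministic_q_step(Q)
    print("Joint softmax policy after iterating the dynamics:")
    print(softmax_policy(Q))

Under the same assumptions, swapping the max-value term for the value of the policy-weighted action (SARSA) or for a separately adapted critic (actor-critic) would give sketches of the other two dynamics mentioned in the abstract.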

Place, publisher, year, edition, pages
2019. Vol. 99, no 4, article id 043305
National Category
Physical Sciences
Mathematics
Identifiers
URN: urn:nbn:se:su:diva-168337
DOI: 10.1103/PhysRevE.99.043305
ISI: 000464747500007
OAI: oai:DiVA.org:su-168337
DiVA, id: diva2:1315369
Available from: 2019-05-13 Created: 2019-05-13 Last updated: 2019-05-13
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Donges, Jonathan F.
By organisation
Stockholm Resilience Centre
In the same journal
Physical Review E