Deep reinforcement learning (DRL) has proven powerful, yet it rarely performs well on real-world decision-making tasks, particularly those involving irregular time series with sparse actions. Although such tasks can be handled with temporal abstractions that let the agent capture high-level states, these abstractions aggravate temporal irregularity by widening the range of time intervals needed to represent a state and estimate expected returns. In this work, we propose a general Time-aware DRL framework with Multi-Temporal Abstraction (T-MTA) that incorporates awareness of time intervals from two aspects: temporal discounting and temporal abstraction. For the former we propose a Time-aware DRL method, and for the latter a Multi-Temporal Abstraction mechanism. T-MTA was tested in three standard RL testbeds and two real-life tasks (control of nuclear reactors and prevention of septic shock), which together cover four common learning contexts: online and offline, and fully and partially observable. As T-MTA is a general framework, it can be combined with any model-free DRL method; here we examined two in particular, the Deep Q-Network approach and its variants, and Truly Proximal Policy Optimization. Our results show that T-MTA significantly outperforms competing baselines, including a standalone Time-aware DRL framework, MTAs alone, and the original DRL methods that consider neither temporal aspect, especially in partially observable environments where the range of time intervals is large.
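To make the temporal-discounting idea concrete, below is a minimal Python sketch of time-aware discounting, in which the usual fixed per-step discount γ^k is replaced by γ^{Δt}, so rewards that arrive after long gaps are discounted according to the actual elapsed time. This is only an illustration of the general technique the abstract names; the function name, signature, and details are assumptions, not the paper's exact formulation.

```python
def time_aware_return(rewards, deltas, gamma=0.99):
    """Discounted return over an irregularly sampled trajectory.

    rewards: reward r_k received at observation k
    deltas:  elapsed time Delta_t between observation k and k+1
    A standard return discounts step k by gamma**k; here the exponent
    accumulates the actual elapsed time, so long gaps between
    observations shrink later rewards more heavily.
    """
    g, elapsed = 0.0, 0.0
    for r, dt in zip(rewards, deltas):
        g += (gamma ** elapsed) * r
        elapsed += dt
    return g

# Equal unit gaps reproduce the ordinary discounted sum; a long gap
# before the final reward shrinks its contribution.
print(time_aware_return([1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))   # gamma^0 + gamma^1 + gamma^2
print(time_aware_return([1.0, 1.0, 1.0], [1.0, 10.0, 1.0]))  # last term uses gamma^11
```

The same substitution can be applied inside a Bellman target, which is how a time-aware variant of a value-based method such as DQN would account for irregular intervals between decisions.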
Applied Intelligence – Springer Journals
Published: Mar 25, 2023
Keywords: Time-aware; Temporal abstraction; Deep reinforcement learning; Irregular time series; RL for real-world applications; Nuclear reactor control; Sepsis treatment