Venue: AIxIA 2021
Date: 2021
Exploration-Intensive Distractors: Two Environment Proposals and a Benchmarking
Jim Martin Catacora Ocaña*
Daniele Nardi*
* External authors
Abstract
Sparse-reward environments are famously challenging for deep reinforcement learning (DRL) algorithms. Yet, the prospect of solving intrinsically sparse tasks in an end-to-end fashion, without any extra reward engineering, is highly appealing. This aspiration has recently led to the development of numerous DRL algorithms able to handle sparse-reward environments to some extent. Some methods have gone one step further and tackled sparse tasks involving various kinds of distractors (e.g., a broken TV, self-moving phantom objects, and many more). In this work, we put forward two motivating new sparse-reward environments containing the so-far largely overlooked class of exploration-intensive distractors. Furthermore, we conduct a benchmark which reveals that state-of-the-art algorithms are not yet all-around suitable for solving the proposed environments.
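To make the abstract's terminology concrete, the sketch below is a minimal toy sparse-reward environment with a distractor channel. It is purely illustrative and is not one of the environments proposed in the paper: the chain layout, the action encoding, and the noise-based distractor are all assumptions made here for the example.

```python
import random


class SparseChainEnv:
    """Toy sparse-reward chain: reward 1 only at the final state.

    A distractor channel emits a fresh random value whenever the agent
    moves left, mimicking ever-novel stimuli (like a 'broken TV') that
    can lure exploration-driven agents away from the true goal.
    Illustrative sketch only, not the paper's environments.
    """

    def __init__(self, length=10, seed=0):
        self.length = length
        self.rng = random.Random(seed)
        self.pos = 0

    def reset(self):
        self.pos = 0
        return (self.pos, 0.0)  # observation: (position, distractor signal)

    def step(self, action):
        # action: 0 = move left, 1 = move right
        if action == 1:
            self.pos = min(self.pos + 1, self.length - 1)
            distractor = 0.0
        else:
            self.pos = max(self.pos - 1, 0)
            distractor = self.rng.random()  # novel-looking stimulus
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0  # sparse: reward only at the goal
        return (self.pos, distractor), reward, done
```

A random policy reaches the goal only rarely, while a naive curiosity-driven agent may keep moving left to chase the distractor's novelty, which is the failure mode exploration-intensive distractors are designed to probe.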