WP5 explores how well anticipatory representations allow the recognition and preprocessing of events that have not been seen during training. Given the information missing during training, the aim is to investigate how the spatial and temporal dimensions of the data affect inference and the assimilation of new knowledge.
Neuroscientific insights are currently a common source of inspiration for advances in machine learning. For example, in [5], Pervasive Internal Regression (PIR) is proposed as a solution that models the activations of an internal layer at a future timestep; the model is designed to mimic the prediction-error signals of biological brains. To develop qualitative analogical reasoning systems, it is important to understand how the human brain responds to different real-world stimuli. In [15], the effectiveness of fine-tuned and prompt-tuned supervision for learning neural representations is assessed under language guidance. Since most recent methods probe the brain's representation of Germanic languages such as English, the study in [14] focuses on the Chinese language. The inference of the temporal dimension of data is analysed from different perspectives: event timelines [12, 4, 1], explanation regeneration [6] and continual learning [3, 8, 9, 11, 13]. In the case of event timelines, the aim is to extract temporal information about the events mentioned in a text, while in explanation regeneration, chains of facts are created for autoregressive reasoning in question-answering problems.
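The cited papers describe PIR only at a high level here; as a purely illustrative sketch (the function name, linear predictor and toy dynamics below are our own assumptions, not the method of [5]), an anticipation-style auxiliary objective can be read as penalising the error of predicting the next internal state from the current one:

```python
import numpy as np

rng = np.random.default_rng(0)

def anticipation_loss(hidden_states, W):
    """Prediction-error auxiliary objective in the spirit of anticipation:
    from each internal state h_t, predict the next state h_{t+1} with a
    learned linear map W and penalise the mean squared prediction error.

    hidden_states: array of shape (T, d), one internal-layer
    activation vector per timestep.
    """
    preds = hidden_states[:-1] @ W      # predicted h_{t+1} from h_t
    errors = preds - hidden_states[1:]  # prediction-error signal
    return float(np.mean(errors ** 2))

# Toy check: on a linear dynamical system, the true transition matrix
# drives the anticipation loss to (numerically) zero.
d = 4
A = rng.normal(size=(d, d)) * 0.3   # true transition dynamics
states = [rng.normal(size=d)]
for _ in range(20):
    states.append(states[-1] @ A)
H = np.stack(states)

assert anticipation_loss(H, A) < 1e-10
assert anticipation_loss(H, np.eye(d)) > anticipation_loss(H, A)
```

In a full model, such a term would be added to the task loss so the network is rewarded for anticipating its own future activations, analogous to prediction-error signals in the brain.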
Continual learning allows systems to permanently refine and update their knowledge about the real world. In [3], a novel replay-based online continual learning method is proposed to prevent catastrophic forgetting. In [8], the effect on model performance of the sampling strategy usually used to implement replay-based methods is analysed, with case studies on text classification and question answering. In [9], a novel method called Entropy-based Stability-Plasticity (ESP) is proposed to address the stability-plasticity dilemma that prevents models from accumulating knowledge over lifelong learning. Given that current continual learning methods tend to overestimate recently observed data, a methodology that recognises and corrects this prediction bias is discussed in [11]. Further on, [13] addresses the overestimation of recent observations more broadly by rearranging existing datasets to obtain continuously nonstationary data streams. This work is particularly important given that current work on task-free continual learning assumes that the data is mostly stationary and changes only at a few distinct moments in time.
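The replay-based methods above maintain a small memory buffer of past stream observations, classically filled with reservoir sampling (the baseline scheme that [3] extends for imbalanced data). A minimal self-contained sketch, with class and method names of our own choosing:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled with reservoir sampling.

    Every item seen so far is retained with equal probability
    capacity / seen, regardless of when it arrived in the stream.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total number of stream items observed

    def observe(self, item):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            # Replace a stored item with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = item

    def sample(self, k):
        # Mini-batch of stored examples to replay during training.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

Because retention probability is uniform over the whole stream, the buffer mirrors the overall data distribution rather than the most recent observations; the bias analysed in [11] arises at the model outputs despite this, which is why an explicit correction step is needed.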
Unlike the previous works related to the temporal aspect of data, the inference of new knowledge can also concern spatial 3-D configurations. For example, the autonomous agent called Layout-aware Dreamer (LAD) introduced in [10] can navigate a previously unseen environment and localise a remote object. Spatial reasoning in unseen environments is also explored in [7, 2] for controlling self-driving cars with natural language commands.
# | Year | Title | Authors | Venue | Description |
---|---|---|---|---|---|
1 | 2019 | A survey on temporal reasoning for temporal information extraction from text | Leeuwenberg, Artuur and Moens, Marie-Francine | JAIR 2019 | This article presents a comprehensive survey of the research from the past decades on temporal reasoning for automatic temporal information extraction from text, providing a case study on the integration of symbolic reasoning with machine learning-based information extraction systems. |
2 | 2020 | Giving Commands to a Self-driving Car: A Multimodal Reasoner for Visual Grounding | Deruyttere, Thierry and Collell, Guillem and Moens, Marie-Francine | | Proposes a new spatial memory module and a spatial reasoner for the visual grounding task, integrating the regions of a Region Proposal Network into a new multi-step reasoning model. |
3 | 2020 | Online Continual Learning from Imbalanced Data | Chrysakis, Aristotelis and Moens, Marie-Francine | | Improves online continual learning performance in imbalanced settings by extending reservoir sampling. |
4 | 2020 | Towards Extracting Absolute Event Timelines from English Clinical Reports | Leeuwenberg, Tuur and Moens, Marie-Francine | IEEE | An approach to extracting more complete temporal information for all events, obtaining probabilistic absolute event timelines by modeling temporal uncertainty with information bounds. |
5 | 2020 | Improving Language Understanding in Machines through Anticipation. | Cornille, Nathan and Collel, Guillem and Moens, Marie-Francine | NAISys 2020 | Poster that reflects on some of the issues with an internal contrastive objective that aims to improve representation learning. |
6 | 2020 | Autoregressive Reasoning over Chains of Facts with Transformers | Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens | COLING 2020 | An iterative inference algorithm for multi-hop explanation regeneration, that retrieves relevant factual evidence in the form of text snippets, given a natural language question and its answer. |
7 | 2021 | Giving Commands to a Self-Driving Car: How to Deal with Uncertain Situations? | Deruyttere, Thierry and Milewski, Victor and Moens, Marie-Francine | | A command given to a self-driving car can be ambiguous; a method to resolve this ambiguity through visual and textual means is proposed. |
8 | 2022 | How Relevant is Selective Memory Population in Lifelong Language Learning? | Vladimir Araujo, Helena Balabin, Julio Hurtado, Alvaro Soto, and Marie-Francine Moens | AACL-IJCNLP 2022 | Investigates the relevance of selective memory population in lifelong language learning, finding that methods that randomly store a uniform number of samples lead to high performance. |
9 | 2022 | Entropy-based Stability-Plasticity for Lifelong Learning | Vladimir Araujo, Julio Hurtado, Alvaro Soto, and Marie-Francine Moens | CVPR 2022 | A novel method called Entropy-based Stability-Plasticity is introduced to address the stability-plasticity dilemma in neural networks. |
10 | 2023 | Layout-aware Dreamer for Embodied Visual Referring Expression Grounding | Li, Mingxiao and Wang, Zehao and Tuytelaars, Tinne and Moens, Marie-Francine | AAAI-23 | Introduces an autonomous agent called Layout-aware Dreamer (LAD) with two novel modules, the Layout Learner and the Goal Dreamer, to mimic a human's cognitive decision process. |
11 | 2023 | Online Bias Correction for Task-Free Continual Learning | Aristotelis Chrysakis and Marie-Francine Moens | ICLR 2023 | We explain both theoretically and empirically how experience replay biases the outputs of the model towards recent stream observations. |
12 | 2023 | Implicit Temporal Reasoning for Evidence-Based Fact-Checking | Liesbeth Allein, Marlon Saelens, Ruben Cartuyvels, and Marie-Francine Moens | EACL 2023 | Shows that time positively influences the claim verification process of evidence-based fact-checking. |
13 | 2023 | Simulating Task-Free Continual Learning Streams From Existing Datasets | Aristotelis Chrysakis and Marie-Francine Moens | | |
14 | 2023 | Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better Account for Brain Language Representations? | Jingyuan Sun and Marie-Francine Moens | IJCAI 2023 | Investigates various supervised methods and how well their representations correlate with the brain's representation of language. |
15 | 2023 | Tuning In to Neural Encoding: Linking Human Brain and Artificial Supervised Representations of Language | Jingyuan Sun, Xiaohan Zhang and Marie-Francine Moens | ECAI 2023 | Linking human brain and supervised ANN representations of the Chinese language. |