Marie-Francine (Sien) Moens is a full professor at the Department of Computer Science of KU Leuven, Belgium. She holds an M.Sc. and a Ph.D. in Computer Science from the same university. She is the director of the Language Intelligence and Information Retrieval (LIIR) research lab, a member of the Human Computer Interaction group, and head of the Informatics section.
Her main direction of research is the development of novel methods for automated content recognition in text and multimedia using statistical machine learning and exploiting insights from linguistic and cognitive theories. She investigates topics such as:
She is the holder of the ERC Advanced Grant CALCULUS (2018-2023), awarded by the European Research Council. From 2012 until 2016 she coordinated the MUSE project, financed by the Future and Emerging Technologies (FET)-Open programme of the European Commission. She is currently an associate editor of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and was a member of the editorial board of the journal Foundations and Trends® in Information Retrieval from 2014 until 2018. In 2011 and 2012 she served as chair of the European Chapter of the Association for Computational Linguistics (EACL) and was a member of the executive board of the Association for Computational Linguistics (ACL). From 2010 until 2014 she was a member of the Research Council of KU Leuven, and from 2014 until 2018 she was a member of the Council of the Industrial Research Fund of KU Leuven. From 2014 until 2018 she was the scientific manager of the EU COST action iV&L Net (The European Network on Integrating Vision and Language).
Justifying diagnosis decisions by deep neural networks. Journal of Biomedical Informatics, 96. doi: 10.1016/j.jbi.2019.103248
A survey on temporal reasoning for temporal information extraction from text. Journal of Artificial Intelligence Research, 66. doi: 10.1613/jair.1.11727
Improving Natural Language Understanding through Anticipation-Enriched Representations. Modern trends in cognitive architectures and systems: From theory to implementation in natural and artificial agents
Giving Commands to a Self-driving Car: A Multimodal Reasoner for Visual Grounding. AAAI 2020 Reasoning for Complex Question Answering (RCQA) Workshop
A Survey on Temporal Reasoning for Temporal Information Extraction from Text (Extended Abstract). IJCAI-PRICAI 2020, Yokohama. doi: 10.24963/ijcai.2020/712
Online Continual Learning from Imbalanced Data. ICML 2020, 119.
Structured (De)composable Representations Trained with Neural Networks. Artificial Neural Networks in Pattern Recognition. ANNPR 2020, 12294. doi: 10.1007/978-3-030-58309-5_3
Towards Extracting Absolute Event Timelines from English Clinical Reports. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28. doi: 10.1109/TASLP.2020.3027201
Structured (De)composable Representations Trained with Neural Networks. Computers. doi: 10.3390/computers9040079
Learning Grammar in Confined Worlds. Lecture Notes in Electrical Engineering (LNEE), 704. doi: 10.1007/978-981-15-8395-7_27
Improving Language Understanding in Machines through Anticipation. From Neuroscience to Artificially Intelligent Systems: From theory to implementation in natural and artificial agents
Decoding Language Spatial Relations to 2D Spatial Arrangements. EMNLP 2020 doi: 10.18653/v1/2020.findings-emnlp.408
Convolutional Generation of Textured 3D Meshes. NeurIPS 2020
Autoregressive Reasoning over Chains of Facts with Transformers. Proceedings of the 28th International Conference on Computational Linguistics doi: 10.18653/v1/2020.coling-main.610
Are Scene Graphs Good Enough to Improve Image Captioning? AACL-IJCNLP 2020
Discrete and continuous representations and processing in deep learning: Looking forward. AI Open, 2. doi: 10.1016/j.aiopen.2021.07.002
How Do Simple Transformations of Text and Image Features Impact Cosine-based Semantic Match? Advances in Information Retrieval. doi: 10.1007/978-3-030-72113-8_7
Giving Commands to a Self-Driving Car: How to Deal with Uncertain Situations? Engineering Applications of Artificial Intelligence
Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations. EMNLP 2021
A Brief Overview of Universal Sentence Representation Methods: A Linguistic View. ACM Computing Surveys, 55(1).
How Relevant is Selective Memory Population in Lifelong Language Learning? Volume 2: Short Papers.
Critical Analysis of Deconfounded Pretraining to Improve Visio-Linguistic Models. Frontiers in Artificial Intelligence, 5. doi: 10.3389/frai.2022.736791
Entropy-based Stability-Plasticity for Lifelong Learning. Volume 2: Short Papers.
Finding Structural Knowledge in Multimodal-BERT. 60th Annual Meeting of the Association for Computational Linguistics
Evaluation Benchmarks for Spanish Sentence Representations. Thirteenth Language Resources and Evaluation Conference
Learning Sentence-Level Representations with Predictive Coding. Machine Learning and Knowledge Extraction, 1. doi: 10.3390/make5010005
Layout-aware Dreamer for Embodied Visual Referring Expression Grounding. Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23)
Online Bias Correction for Task-Free Continual Learning. ICLR 2023
Implicit Temporal Reasoning for Evidence-Based Fact-Checking. The 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of the Association for Computational Linguistics: EACL 2023.
A Memory Model for Question Answering from Streaming Data Supported by Rehearsal and Anticipation of Coreference Information. Findings of the 61st Annual Meeting of the Association for Computational Linguistics, Findings of the Association for Computational Linguistics: ACL 2023. doi: 10.18653/v1/2023.findings-acl.830
Simulating Task-Free Continual Learning Streams From Existing Datasets. CLVision @ CVPR2023
Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better Account for Brain Language Representations? International Joint Conference on Artificial Intelligence
What Can We Learn from the Structures Found in Visual and Language Data and their Correlations? Modelling and Representing Context. Fourteenth International Workshop on Human-Centric and Contextual Systems
Investigating Neural Fit Approaches for Sentence Embedding Model Paradigms. 26th European Conference on Artificial Intelligence - ECAI 2023
Tuning In to Neural Encoding: Linking Human Brain and Artificial Supervised Representations of Language. 26th European Conference on Artificial Intelligence - ECAI 2023
Causal Factor Disentanglement for Few-Shot Domain Adaptation in Video Prediction. Entropy, 5. doi: 10.3390/e25111554
Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities. Thirty-seventh Conference on Neural Information Processing Systems