Natural language processing (NLP) is the ability of a computer to understand human language. In both academia and industry, NLP is widely used for tasks such as Sentiment Analysis, Semantic Text Similarity (STS), Text Translation, and Question Answering (QA), to cite a few.
With the advent of Transformer model architectures like BERT, these tasks have seen huge improvements, continuously reaching better accuracy levels. Although these models require huge training resources, their ability to learn human language makes them a state-of-the-art solution for a plethora of NLP tasks.
A QA task consists of a deep learning model extracting an answer from a text (e.g., the medical history of a patient), given an input question (e.g., is the patient suffering from this symptom?).
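As a concrete illustration, extractive QA of this kind can be run in a few lines with the Hugging Face `transformers` pipeline. This is a minimal sketch, not the talk's own code: the model name is one common choice for extractive QA, and the clinical note is a synthetic example.

```python
# Minimal sketch of extractive QA with the Hugging Face `transformers` pipeline.
# The model choice and the clinical note below are illustrative assumptions.
from transformers import pipeline

# Downloads a pretrained extractive-QA model on first use.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# Synthetic clinical note (not real patient data).
note = (
    "The patient is a 45-year-old male presenting with chest pain "
    "radiating to the left arm, shortness of breath, and nausea. "
    "No history of diabetes."
)

result = qa(question="What symptoms is the patient presenting with?", context=note)
# `result` contains the extracted answer span and a confidence score.
print(result["answer"], result["score"])
```

Note that the model returns a span copied verbatim from the note, together with a confidence score, which is what makes this setup "extractive" QA.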
QA on clinical and medical notes of patients has gained a lot of attention in recent years. Extracting important features for diagnoses, or helping doctors figure out symptoms and the correlations between them, is in fact becoming essential to properly take care of patients and to support doctors in their daily work.
In this talk, we will give a brief introduction to how Transformer models can be used to extract answers to medical questions from clinical patient notes.
Moreover, we will analyze an example of how Semantic Text Similarity can be used to search for different patients who have rare symptoms that may be related to the same diagnosis.