What Happens When You Run SRL Experiments Over a BERT-Based Model?
Transformers have driven more progress in NLP in the past few years than the field saw in the previous generation. Standard NLU approaches first learn syntactic and lexical features to explain the structure of a sentence. Earlier NLP models were trained to understand the basic syntax of language... Read more
10 Notable Frameworks for NLP
Natural Language Processing hit its big stride back in 2017 with the introduction of the Transformer architecture from Google. State-of-the-art approaches have bridged the gap between humans and machines, helping us build bots capable of using human language undetected. It’s an exciting time... Read more
State-of-the-Art NLP Made Easy With AdaptNLP
Natural Language Processing (NLP) has advanced significantly since 2018, when ULMFiT and Google’s BERT language model approached human-level performance on a range of use cases. Since then, several models with similarly interesting names, such as XLM, GPT-2, XLNet, and ALBERT, have been released... Read more
What Do Data Scientists and Decision Makers Need to Know About Google’s BERT?
Any data scientist will tell you that one of the most challenging parts of natural language processing projects is the lack (or shortage) of training data. Deep learning has semi-solved this, but now the problem can be too much data, up to millions or even... Read more
Identifying Heart Disease Risk Factors from Clinical Text
Editor’s Note: See Sudha’s talk “Identifying Heart Disease Risk Factors from Clinical Notes” at ODSC Europe 2019. People die needlessly every day from preventable heart attacks. The clues are hiding right within the notes doctors and clinicians take during routine health care visits. In... Read more