State of the Art Natural Language Understanding at Scale – David Talby | ODSC West 2018
Natural language processing is a key component in many data science systems that must understand or reason about text. Common use cases include question answering, paraphrasing or summarization, sentiment analysis, natural language BI, language modeling, and disambiguation. Building such systems usually requires combining three types of software libraries: NLP annotation frameworks, machine learning frameworks, and deep learning frameworks.
This talk introduces the NLP library for Apache Spark. It natively extends the Spark ML pipeline API, enabling zero-copy, distributed, combined NLP & ML pipelines that leverage all of Spark's built-in optimizations. Benchmarks and design best practices for building NLP, ML, and DL pipelines on Spark will be shared. The library implements core NLP algorithms including lemmatization, part-of-speech tagging, dependency parsing, named entity recognition, spell checking, and sentiment detection. This video demonstrates using these algorithms to build commonly used pipelines, using PySpark in notebooks.