LLMOps – The Next Frontier of MLOps

Recently, Sahar Dolev-Blitental, VP of Marketing at Iguazio, joined us for a lightning interview on LLMOps and the next frontier of MLOps. Over the course of nearly an hour, Sahar discussed many facets of the newly emerged field of LLMOps, from the definition of the field to use cases and best practices. Keep reading for key takeaways from the interview, or watch the full video here.

What Is LLMOps?

“The rapid pace of [generative AI] and the fact that everyone is talking about it, makes MLOps and LLMOps much more important than ever.”

Large Language Models present their own range of challenges and complexities. As Sahar notes, the scale of LLMs demands more GPUs and introduces different risks, along with a stronger focus on efficiency to offset the added resources these models require. Nevertheless, Sahar explains, the foundations of MLOps and LLMOps are the same; what separates them is the scale of the models being taken through their lifecycle to deployment.

Use Cases of LLMOps

“Only 2% of the applications today are Gen AI. I mean 90% of the conversation is about Gen AI for sure, but in practice, only about 2% of the applications are Gen AI-driven. So, I think it’s still very early days….”  

Although the field is still in its infancy, LLMOps is already being used to shepherd generative AI applications into production. During the interview, Sahar explored two use cases: Subject Matter Experts and Call Center Analysis.

Subject Matter Experts are often employed in healthcare and retail, and take the form of chatbots with expertise in a designated topic. For example, you might find them embedded on a company’s website to help customers directly, or in a support role for customer success teams.
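
As a rough illustration, here is a minimal sketch of a narrow-scope expert chatbot built on a hosted model. The `openai` client, the model name, and the retail-returns domain are all assumptions for illustration, not details from the interview:

```python
# Sketch of a narrow-scope "subject matter expert" chatbot.
# The model name and the returns-policy domain are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a support expert on our retail returns policy. "
    "Answer only questions about returns and exchanges; "
    "politely decline anything outside that scope."
)

def ask_expert(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any hosted chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_expert("Can I return a jacket after 30 days?"))
```

The system prompt is what keeps the bot a subject matter expert rather than a general-purpose assistant, which also keeps its scope (and token usage) predictable.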

In the case of call center analysis, these applications can perform sentiment analysis, dig deeper into the topics discussed, and identify employees who need more support. In both cases, the applications help employees do their jobs better and increase satisfaction.
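
A sketch of the call center idea, using an off-the-shelf sentiment model from Hugging Face `transformers`; the transcripts and the flagging threshold are invented for illustration:

```python
# Score call-transcript lines and flag agents whose calls skew negative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default model

transcripts = {
    "agent_042": ["I understand, let me fix that for you.",
                  "Sorry for the wait, we are short-staffed today."],
    "agent_107": ["That is not my problem.",
                  "You will have to call back later."],
}

for agent, lines in transcripts.items():
    results = sentiment(lines)
    negative = sum(r["label"] == "NEGATIVE" for r in results)
    if negative / len(results) > 0.5:  # assumed threshold
        print(f"{agent}: {negative}/{len(results)} negative lines - review")
```

In production this would run over full transcripts rather than single lines, but the pattern is the same: classify, aggregate per agent, and surface the agents who need more support.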

Best Practices

“The number 1 kind of tip is that you don’t need to build your own LLM.” 

The last topic we’ll touch on is best practices, both for smaller organizations looking to implement LLMs and for minimizing bias in models.

For smaller organizations with cost concerns, Sahar recommends starting from an existing LLM rather than building one from scratch, which avoids most of the cost of training. She also suggests keeping the scope of your LLM use case very narrow, which prevents the model from wasting resources on work that doesn’t create value.
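
A minimal sketch of both tips, assuming Hugging Face `transformers` and `google/flan-t5-base` as an arbitrary stand-in for “an existing LLM”:

```python
# Use a pretrained open model instead of training one from scratch,
# and keep the task narrowly scoped.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

# Narrow scope: one well-defined task, not open-ended chat.
prompt = (
    "Summarize this support ticket in one sentence: "
    "Customer reports the mobile app crashes on login after the update."
)
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```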

To avoid bias, Sahar highlights two important areas: data preparation and ongoing monitoring. Data prep is essential because if the data is biased, the output will be biased. There are several ways to avoid a biased data set:

  • Build a diverse team that represents a wide range of backgrounds
  • Provide a diverse data set from the start
  • Monitor constantly and commit to retraining when bias is found (a monitoring sketch follows this list)
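
As a sketch of what constant monitoring might look like, the check below compares outcomes across groups and flags large gaps. The column names, data, and tolerance are all assumptions for illustration:

```python
# Illustrative bias check: compare outcome rates across groups.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   1,   1],
})

rates = df.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"approval-rate gap: {gap:.2f}")

if gap > 0.2:  # assumed tolerance; tune to your fairness policy
    print("Potential bias detected - trigger review and retraining.")
```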

Conclusion

To learn even more about LLMs and LLMOps, be sure to join us at ODSC West from October 30th to November 2nd. With a full track devoted to NLP and LLMs, you’ll enjoy talks, sessions, events, and more that squarely focus on this fast-paced field.

Confirmed LLM sessions include:

  • Personalizing LLMs with a Feature Store
  • Evaluation Techniques for Large Language Models
  • Building an Expert Question/Answer Bot with Open Source Tools and LLMs
  • Understanding the Landscape of Large Models
  • Democratizing Fine-tuning of Open-Source Large Models with Joint Systems Optimization
  • Building LLM-powered Knowledge Workers over Your Data with LlamaIndex
  • General and Efficient Self-supervised Learning with data2vec
  • Towards Explainable and Language-Agnostic LLMs
  • Fine-tuning LLMs on Slack Messages
  • Beyond Demos and Prototypes: How to Build Production-Ready Applications Using Open-Source LLMs
  • Adopting Language Models Requires Risk Management — This is How
  • Connecting Large Language Models – Common Pitfalls & Challenges
  • A Background to LLMs and Intro to PaLM 2: A Smaller, Faster and More Capable LLM
  • The English SDK for Apache Spark™
  • Integrating Language Models for Automating Feature Engineering Ideation
  • How to Deliver Contextually Accurate LLMs
  • Retrieval Augmented Generation (RAG) 101: Building an Open-Source “ChatGPT for Your Data” with Llama 2, LangChain, and Pinecone
  • Building Using Llama 2
  • LLM Best Practices: Training, Fine-Tuning, and Cutting Edge Tricks from Research
  • Hands-On AI Risk Management: Utilizing the NIST AI RMF
ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists with major conferences in USA, Europe, and Asia.
