4 Reasons Why Declarative ML Makes Sense for Engineers

Machine learning is starting to go mainstream, graduating out of the research lab and making its way into products. In fact, every engineering team we’ve worked on has had an item on their roadmap that went something like “Improve [cool feature] with machine learning”.

But “doing machine learning” is not your typical engineering task. There are countless blockers that may keep you from getting your ML projects off the ground, including:

  1. The time required to understand and stitch together a fragmented ecosystem of low-level, ML-specific packages.
  2. The need for data science expertise to implement modeling strategies that produce useful results.
  3. The engineering effort involved with building and maintaining infrastructure that can execute large-scale ML workloads.
  4. The constant investment required to stay up-to-date on the latest industry standards in machine learning architectures and training strategies. 

Given these points, it’s no surprise that nearly 90% of ML projects never make it to production.

We fundamentally believe that deep technologies hit an inflection point when they become readily available to the curious but generalist developer. Consider data engineering as an example. Armed with the right level of abstraction and some good documentation, we’ve seen engineers leverage technologies like DBT to do in a few afternoons what previously took teams of specialized experts. Now, we believe it’s ML’s turn.

What is declarative machine learning?

Declarative machine learning systems were created to power the internal machine learning platforms at leading tech companies: Ludwig at Uber, Overton at Apple, and Looper at Meta. The motivation for these technologies was simple: empower software engineers without deep backgrounds in ML to train, serve, and monitor a large number of AI models. Today these platforms host thousands of models and process billions of inferences.

The key idea behind declarative ML is to abstract your entire model pipeline behind a simple YAML configuration, allowing a developer to specify what they want and let the system deliver the how. These systems combine the best of simplicity and flexibility: allowing someone to get started in just a few lines, but then expand as their needs get more sophisticated.

In this blog post, we’re going to cover Ludwig, the leading open-source declarative ML framework, and the four reasons why this approach makes sense for every engineer interested in machine learning. 

1 – “Don’t reinvent the wheel”

Getting started with a declarative framework is easy because it comes with most of the components you’d need out-of-the-box. As a user, all you need to do is specify the input and output features for your model. Even the simplest configuration in Ludwig sets up your entire ML pipeline, from feature preprocessing through encoding, training, decoding, and postprocessing.

So just eight lines of config:

    input_features:
      - name: description
        type: text
      - name: profile_image_url
        type: image
    output_features:
      - name: account_type
        type: category

Generates a fully functional, state-of-the-art ML pipeline. You’ll never have to implement feature normalization, text tokenization, pixel scaling, a transformer, a convolutional neural net, or anything else in the data science toolkit yourself.
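For readers who prefer Python to YAML, the same configuration can be expressed as a plain dictionary and handed to Ludwig’s Python API. (A sketch, not the article’s own code: the `tweets.csv` filename and its columns are hypothetical, and the `LudwigModel` calls assume Ludwig is installed, so they are shown in a comment.)

```python
# The same eight-line YAML config, expressed as a plain Python dict.
config = {
    "input_features": [
        {"name": "description", "type": "text"},
        {"name": "profile_image_url", "type": "image"},
    ],
    "output_features": [
        {"name": "account_type", "type": "category"},
    ],
}

# With Ludwig installed, training the whole pipeline takes two more lines:
#   from ludwig.api import LudwigModel
#   model = LudwigModel(config)
#   model.train(dataset="tweets.csv")  # hypothetical dataset file
```

The dict mirrors the YAML one-to-one, so anything you can declare in the config file you can also build programmatically.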

This allows us to adhere to a sacred software principle: don’t reinvent the wheel; instead, stand on the shoulders of giants.

2 – Control What You Want; Automate the Rest

Simplicity is great for the getting-started experience, but oversimplified technology often ends up as a toy rather than a production-ready application. The most common critique we hear of the last attempt to democratize machine learning, AutoML (automated machine learning), is that it was useful for prototyping but never used for serious production applications.

We believe that’s for two reasons:

  1. Most ML applications tend to be iterative, where you start with a first model that is most of the way there and you hill-climb to a successively better solution by turning the knobs.
  2. Engineers want agency and control over their tools – getting something automated is great, as long as you aren’t a prisoner to those choices.

Ludwig solves these problems with smart defaults that are all customizable through a unified design pattern. Want to lowercase all your text data to standardize it, use a large pretrained language model like BERT instead of a standard neural network, and add some more regularization? Each customization is just one extra line in the config.

    input_features:
      - name: description
        type: text
        preprocessing:
          lowercase: true
        encoder:
          type: bert
          use_pretrained: true
          trainable: true
    output_features:
      - name: account_type
        type: category
    trainer:
      regularization_lambda: 0.1

3 – No more infrastructure headaches

Machine learning infrastructure for model training at scale has evolved rapidly over the years. Distributed GPU training, data loading and sharding, model compilation, mixed precision training, and countless other strategies have been developed to squeeze efficiency and performance out of your models. These optimizations are tricky to implement, essential to get right, but ultimately boilerplate that can be generically applied to many machine learning pipelines. 

Your time is too precious to spend fighting CUDA errors all day. By default, Ludwig incorporates the above optimizations and more into its modeling pipeline and abstracts them away so that you can focus on the machine learning task at hand. Ludwig is built to be natively compatible with best-of-breed frameworks, including Ray, a unified compute framework for scaling Python workloads from some of the original creators of Spark. And just like everything else in Ludwig, any of these optimizations can be configured declaratively:

    backend:
      type: ray
      processor:
        type: dask
      trainer:
        strategy: horovod
        use_gpu: true
        num_workers: 16

(Obligatory complex distributed infrastructure graphic)

Write once, scale effortlessly, never rewrite or repeat.

4 – Build on open ecosystems

Lastly, we’re strong believers that the best developer tools are built in the open, so that developers have full visibility into the code running in their systems, can make any changes they need, and can collaborate as a community to improve them. Ludwig has been open source since 2019, has had more than 130 contributors, and is already helping many companies build and deploy ML models for use cases ranging from content moderation to computer vision to personalization.

The community is constantly improving Ludwig to stay on top of the latest ML trends with contributions on a daily basis. But if you see a component you’d like to use that isn’t available out-of-the-box, Ludwig is extensible and allows you to easily add your own module by implementing an abstract interface. The developer guide provides instructions on how you can add your own custom models, metrics, preprocessing and more.
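The extensibility model described above follows a common pattern: custom components implement an abstract interface and register under a name, so a single line in the YAML config can resolve to your class. The sketch below illustrates that pattern in miniature; it is not Ludwig’s actual internals, and the names (`ENCODER_REGISTRY`, `register_encoder`, `ReverseEncoder`) are invented for illustration.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of the registry pattern, not Ludwig's real internals.
ENCODER_REGISTRY = {}

def register_encoder(name):
    """Class decorator that files an encoder under a config-friendly name."""
    def wrap(cls):
        ENCODER_REGISTRY[name] = cls
        return cls
    return wrap

class Encoder(ABC):
    """The abstract interface every custom encoder must implement."""
    @abstractmethod
    def encode(self, batch):
        ...

@register_encoder("reverse")
class ReverseEncoder(Encoder):
    """A toy encoder that reverses each input string."""
    def encode(self, batch):
        return [s[::-1] for s in batch]

# The framework looks up whatever encoder name appears in the YAML config:
encoder = ENCODER_REGISTRY["reverse"]()
print(encoder.encode(["abc", "ludwig"]))  # -> ['cba', 'giwdul']
```

Because lookup happens by name, a user-defined component slots into the declarative config exactly like a built-in one.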

Get Started with Declarative Machine Learning

A decade from now, machine learning will be ubiquitous, powering applications large and small. In the process, we’ll need to put the building blocks in the hands of every engineer so that they can get started with ease and scale with their experience. To make that a reality, we’re taking the key principles that empowered engineers at leading tech companies and building on them in the open.

If you’re interested in learning more about how or why declarative machine learning just works for engineers, join us for our live webinar next Tuesday, hosted by Open Data Science. You can also join the open-source Ludwig community and download one of many use case tutorials.

We also invite you to request a custom demo of our enterprise-ready platform called Predibase which builds on the declarative approach with a user-friendly UI, model repositories and tracking, managed cloud service, and much more, making it the fastest way to go from data to deployment. Here’s a sneak peek of the platform in action:

Until next time, happy building!

Article by Geoffrey Angus and Devvret Rishi of Predibase

ODSC Community

The Open Data Science community is passionate and diverse, and we always welcome contributions from data science professionals! All of the articles under this profile are from our community, with individual authors mentioned in the text itself.