Editor’s note: Avi Pfeffer is a speaker for ODSC East 2023 this May 9th-11th. Be sure to check out his talk, “Hybrid AI for Complex Applications with Scruff,” there!
In this ODSC East preview, the author describes how the Scruff AI modeling framework enables clear and coherent implementation of multiparadigm AI models.
Many different modeling paradigms are needed to build real-world AI applications. You might need a deep neural network to process image data, combined with a probabilistic model of contextual information, and perhaps a physics-based model to describe how the environment evolves. Combining all these paradigms is usually difficult from an engineering point of view, and it’s quite likely that the resulting combination will be incoherent. For example, imagine a fire modeling application that uses weather conditions in both a regional fire-spread model and a property-level ignition model. If you don’t account for the fact that the same weather applies in both cases, you might underestimate the risk of severe weather causing significant damage to your property.
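To make the incoherence concrete, here is a tiny Python sketch with made-up numbers (not from any real fire model). A model that shares one weather variable between the regional and property submodels assigns a much higher probability to both bad outcomes occurring together than one that accidentally gives each submodel its own independent copy of the weather:

```python
# Illustrative toy numbers only.
P_SEVERE = 0.10                            # P(severe weather)
P_REGIONAL = {True: 0.80, False: 0.10}     # P(regional fire spread | weather)
P_PROPERTY = {True: 0.50, False: 0.05}     # P(property ignition | weather)

weather = [(True, P_SEVERE), (False, 1 - P_SEVERE)]

# Coherent model: both submodels condition on the same weather draw.
joint_shared = sum(pw * P_REGIONAL[w] * P_PROPERTY[w] for w, pw in weather)

# Incoherent model: each submodel implicitly draws its own weather.
p_regional = sum(pw * P_REGIONAL[w] for w, pw in weather)
p_property = sum(pw * P_PROPERTY[w] for w, pw in weather)
joint_independent = p_regional * p_property

print(joint_shared)       # ~ 0.0445
print(joint_independent)  # ~ 0.0162 -- tail risk badly underestimated
```

With these (invented) numbers, ignoring the shared weather underestimates the probability of simultaneous regional spread and property ignition by almost a factor of three.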
In this workshop, I’ll describe Scruff, our modeling framework that enables clear and coherent implementation of multiparadigm AI models. Scruff, an open-source Julia package, is based on the ideas of probabilistic programming. As in probabilistic programming, a model is organized as a sequence of random variables, where each variable is generated from its predecessors by a random process called a stochastic function, or sfunc. Unlike in previous probabilistic programming languages, however, sfuncs do not need to be specified in explicit probabilistic terms. Instead, Scruff defines a set of operators that can be applied to sfuncs, and algorithms are written in terms of these operators, making algorithm implementation independent of sfunc representation. Furthermore, different kinds of sfuncs support different operators, which enables them to serve different roles in a model. For example, a neural network might be used as a recognizer, while an ODE-based physics model might be used as a simulator. The algorithms know how to make use of components playing these different roles.
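To illustrate the operator idea, here is a hypothetical Python sketch; Scruff itself is a Julia package, and its actual API differs. The point is only that an algorithm written against operators (here, `sample`) runs unchanged over sfuncs with very different internals, while an sfunc that cannot answer a density query simply does not provide `logcpdf`:

```python
# Hypothetical analogue of the operator design -- not Scruff's real API.
import math
import random

class Coin:
    """Explicitly probabilistic sfunc: provides both `sample` and `logcpdf`."""
    def __init__(self, p):
        self.p = p
    def sample(self, parent):
        return random.random() < self.p
    def logcpdf(self, parent, value):
        return math.log(self.p if value else 1.0 - self.p)

class Recognizer:
    """Black-box sfunc (think: neural net) -- provides `sample` only."""
    def __init__(self, fn):
        self.fn = fn
    def sample(self, parent):
        return self.fn(parent)

def supports(sfunc, op):
    """Algorithms ask which operators an sfunc provides, rather than
    assuming one fixed representation."""
    return callable(getattr(sfunc, op, None))

def forward_sample(chain, root):
    """Runs over any mix of sfuncs, requiring only the `sample` operator."""
    value = root
    for sf in chain:
        assert supports(sf, "sample")
        value = sf.sample(value)
    return value

random.seed(0)
chain = [Recognizer(lambda x: x > 0.5), Coin(0.9)]
print(forward_sample(chain, 0.7))
```

An exact inference algorithm could check `supports(sf, "logcpdf")` before attempting a density computation and fall back to sampling otherwise, which is the kind of role-aware dispatch the paragraph above describes.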
One of the appealing features of Scruff is its flexible treatment of time in dynamic models. Most existing AI frameworks model time as proceeding in fixed, discrete steps. In contrast, Scruff allows asynchronous modeling, in which variables are instantiated only when you need them. One benefit is that you can postpone running an algorithm until an event calls for it; another is that you can model fast-evolving and slowly evolving variables each at its appropriate rate.
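As a rough illustration of asynchronous instantiation (again a hypothetical Python sketch, not Scruff's API), consider a variable whose transition handles an arbitrary elapsed time, so values are computed only at the moments a query demands them:

```python
# Hypothetical sketch: lazy, asynchronous instantiation of dynamic variables.
import math

class DecayVariable:
    """Deterministic exponential decay toward 0; the elapsed gap dt
    between instantiations can be any positive amount of time."""
    def __init__(self, rate, x0):
        self.rate = rate
        self.instances = {0.0: x0}   # time -> value, created lazily

    def at(self, t):
        if t not in self.instances:
            # Advance from the most recent earlier instantiation.
            t0 = max(s for s in self.instances if s <= t)
            dt = t - t0
            self.instances[t] = self.instances[t0] * math.exp(-self.rate * dt)
        return self.instances[t]

fast = DecayVariable(rate=2.0, x0=1.0)   # would need frequent fixed steps
slow = DecayVariable(rate=0.01, x0=1.0)  # barely changes over this horizon

# An event at t=5.0 triggers a query; nothing was computed before that.
print(fast.at(5.0), slow.at(5.0))
print(len(fast.instances))  # -> 2: only the instantiations we asked for
```

A fixed-step framework would have computed every intermediate step for both variables; here each variable is evaluated exactly when, and as often as, the application requires.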
In the workshop, I’m going to describe a wildfire risk assessment application and show how it works at multiple temporal and spatial scales, integrating data-driven, physics-based, and probabilistic components. I’ll also explain how it avoids the mistake I described earlier by correctly accounting for the shared weather. After presenting this example, I’ll help you get started with Scruff and walk you through some simple, hands-on examples.
About the authors:
Dr. Avi Pfeffer is Chief Scientist at Charles River Analytics. Dr. Pfeffer is a leading researcher on a variety of computational intelligence techniques including probabilistic reasoning, machine learning, and computational game theory. Dr. Pfeffer has developed numerous innovative probabilistic representation and reasoning frameworks, such as probabilistic programming, which enables the development of probabilistic models using the full power of programming languages, and statistical relational learning, which provides the ability to combine probabilistic and relational reasoning. He is the lead developer of Charles River Analytics’ Figaro™ probabilistic programming language. As an Associate Professor at Harvard, he developed IBAL, the first general-purpose probabilistic programming language. While at Harvard, he also produced systems for representing, reasoning about, and learning the beliefs, preferences, and decision-making strategies of people in strategic situations. Prior to joining Harvard, he invented object-oriented Bayesian networks and probabilistic relational models, which form the foundation of the field of statistical relational learning. Dr. Pfeffer serves as Action Editor of the Journal of Machine Learning Research and served as Associate Editor of the Artificial Intelligence Journal and as Program Chair of the Conference on Uncertainty in Artificial Intelligence. He has published many journal and conference articles and is the author of a text on probabilistic programming. Dr. Pfeffer received his Ph.D. in computer science from Stanford University and his B.A. in computer science from the University of California, Berkeley.