Michael I. Jordan of Berkeley on Learning-Aware Mechanism Design

As newer fields emerge within data science and the research is still hard to grasp, sometimes it’s best to talk to the experts and pioneers of the field. Recently, we spoke with Michael I. Jordan, PhD, Distinguished Professor and ACM/AAAI Allen Newell Award Laureate at the University of California, Berkeley, about learning-aware mechanism design. You can listen to the full Lightning Interview here, and read the transcript of an insightful conversation with Michael I. Jordan, PhD, below.

What are your thoughts on a business model for something like ChatGPT?

I think there will eventually be a business model there. But when you think about business models, advertising and subscriptions just can’t be how the world runs; they can’t be our future as the main way to make these systems useful to people. Companies are already doing this kind of market-style thinking.

I spend a day a week at Amazon, and they’ve been doing machine learning going back to the early 90s, both to find patterns and to make logistics decisions. One of the things Amazon offers is something called FBA (Fulfillment by Amazon). If you produce a product somewhere, and you don’t want to do all of the inventory management, stockpiling, and shipping yourself, you let Amazon do that for you. You send a bunch of your items to Amazon and they handle the rest.

Now you have various decisions to make: how much inventory to keep, when to get rid of it, what price to set, and so on. Amazon doesn’t really know how to do that themselves, since they don’t know the product that well. They can ask the producer how much the product is worth, or make a prediction. But the producer isn’t incentivized to tell Amazon the truth: if they know they can get a better deal by misreporting, they’ll hedge a little bit.

This is how all of us are. If an insurance company wants to sell me health insurance, they’re going to set a price and ask questions like how much do you drink, and I’m going to say, well, I don’t drink at all; they’ll ask how much I exercise, and so on.

Economists have thought about this a lot. These are the problems of information asymmetries and incentive structures. How can you set things up so that both sides actually get the best out of a deal under these asymmetry conditions?
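The classic answer to this incentive problem is mechanism design: structure the rules so that honest reporting is each participant’s best strategy. This example is not from the interview; it is a minimal sketch of the textbook incentive-compatible mechanism, a sealed-bid second-price (Vickrey) auction, in which bidding your true value is a dominant strategy. All names and numbers are hypothetical.

```python
def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid amount.
    The highest bidder wins but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

def utility(true_value, bid, other_bids):
    """Payoff to bidder 'me' with the given true value, if they bid `bid`
    while everyone else submits `other_bids`."""
    winner, price = second_price_auction({"me": bid, **other_bids})
    return true_value - price if winner == "me" else 0.0

# Hypothetical competitors' bids.
others = {"a": 60.0, "b": 80.0}

truthful = utility(100.0, 100.0, others)  # report true value -> win at 80, utility 20
shaded = utility(100.0, 70.0, others)     # under-report ("hedge") -> lose, utility 0
```

Because the price the winner pays does not depend on their own bid, shading the bid can only lose a profitable trade and can never lower the price paid, which is exactly the kind of incentive alignment missing from the naive "just ask the producer" approach described above.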

Whereas the current machine-learning style of thinking, the kind behind federated learning and ChatGPT, doesn’t consider these issues. It just says: I can gather data however I want, grab everybody’s data on the web, flow it back centrally to me, build my model, make it available, and hopefully make money.

Well, that’s just such a limited perspective; that’s not how the world wants to run. The world wants to have agents who possess something of value. Maybe their own data is valuable. Maybe they don’t want to give it to Google, and not only because they want to be paid: there are also competitive reasons and privacy reasons.

If you don’t take that stuff into account, you’re going to build stuff that is, in some sense, just a toy, or that makes money for a small number of people, as the search engine did for Google, but that misses out on all the really big opportunities.

More on Michael I. Jordan:

He received his master’s degree in mathematics from Arizona State University, and earned his PhD in cognitive science in 1985 from the University of California, San Diego. His research interests bridge the computational, statistical, cognitive, biological, and social sciences. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM/AAAI Allen Newell Award in 2009. He gave the Inaugural IMS Grace Wahba Lecture in 2022, the IMS Neyman Lecture in 2011, and an IMS Medallion Lecture in 2004.

How to learn more about machine learning

By registering for ODSC East 2023 – now 60% off – you’ll be able to learn everything you need to know about machine learning. If you’re totally new to machine learning and data science, then consider getting an ODSC East Mini-Bootcamp pass. With this pass, you’ll be able to start your machine learning journey today with on-demand sessions on our Ai+ Training platform. We’ll also have a series of introductory sessions on AI literacy, intros to programming, etc. Here are a few other training sessions you can check out during the event:

  • An Introduction to Data Wrangling with SQL: Sheamus McGovern | CEO and ML Engineer | ODSC
  • Advanced Fraud Modeling & Anomaly Detection with Python & R: Aric LaBarr, PhD | Associate Professor of Analytics | Institute for Advanced Analytics at NC State University
  • Machine Learning with XGBoost: Matt Harrison | Python & Data Science Corporate Trainer, Consultant | MetaSnake
  • Introduction to Large-scale Analytics with PySpark: Akash Tandon | Co-Founder, Co-author, Advanced Analytics with PySpark | Looppanel, O’Reilly Media
  • Programming with Data: Python and Pandas: Daniel Gerlanc | Sr. Director – Data Science & ML Engineering | Ampersand
  • Beyond the Basics: Data Visualization in Python: Stefanie Molin | Software Engineer, Data Scientist, Chief Information Security Office, Author of Hands-On Data Analysis with Pandas | Bloomberg LP
  • Introduction to Machine Learning: Julia Lintern | Data Science Instructor | Metis


ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.