The main aim of my talk at ODSC West will be to enable attendees to assimilate the key concepts in the area, and to position graph representation learning in its proper context with related fields, making it easier to navigate, leverage, and contribute to. The talk will closely follow a survey paper I recently published in Current Opinion in Structural Biology.
Specifically, my talk will present a vibrant and exciting area of deep learning research: graph representation learning. Or, put simply, building machine learning models over data that lives on graphs (structures of nodes connected by edges). These models are commonly known as graph neural networks, or GNNs for short. There is very good reason to study data on graphs: in many ways, graphs are the main modality of data we receive from nature. From the molecule (a graph of atoms connected by chemical bonds) all the way to the connectomic structure of the brain (a graph of neurons connected by synapses), graphs are a universal language for describing living organisms, at all levels of organisation. Similarly, most artificial constructs of interest to humans, from the transportation network (a graph of intersections connected by roads) to the social network (a graph of users connected by friendship links), are best reasoned about in terms of graphs.
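To make the idea concrete, here is a minimal sketch of a single message-passing GNN layer in NumPy. This is an illustrative toy, not any specific published architecture: each node averages its neighbours' features (including its own, via a self-loop), then applies a shared linear transform and a nonlinearity. The function name `gnn_layer` and the toy graph are my own choices for illustration.

```python
import numpy as np

def gnn_layer(X, A, W):
    """One message-passing layer (illustrative sketch).

    X : (n, d) node feature matrix
    A : (n, n) binary adjacency matrix (1 where an edge exists)
    W : (d, k) learnable weight matrix, shared across all nodes
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops so a node keeps its own features
    deg = A_hat.sum(axis=1, keepdims=True)  # degree of each node (with self-loop)
    H = (A_hat / deg) @ X                   # mean-aggregate each node's neighbourhood
    return np.maximum(0.0, H @ W)           # shared linear transform + ReLU

# Toy graph: 3 nodes in a path 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))  # 4 input features per node
W = rng.standard_normal((4, 2))  # project to 2 output features
out = gnn_layer(X, A, W)
print(out.shape)  # one 2-dimensional embedding per node
```

Stacking such layers lets information propagate further along the graph: after k layers, each node's embedding depends on its k-hop neighbourhood.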
This potential has been realized in recent years by both scientific and industrial groups, with GNNs now being used to discover novel potent antibiotics, serve estimated travel times in Google Maps, power content recommendations in Pinterest and product recommendations in Amazon, and design the latest generation of machine learning hardware: the TPUv5. Further, GNN-based systems have helped mathematicians uncover the hidden structure of mathematical objects, leading to new top-tier conjectures in the area of representation theory. It would not be an overstatement to say that billions of people come into contact with the predictions of a GNN on a day-to-day basis. As such, studying GNNs is likely a valuable pursuit, even without aiming to directly contribute to their development.
Beyond this, it is likely that the very cognitive processes driving our reasoning and decision-making are, in some sense, graph-structured. That is, paraphrasing a quote from Jay Wright Forrester, “nobody really imagines in their head all the information known to them; rather, they imagine only selected concepts, and relationships between them, and use those to represent the real system”. If we subscribe to this interpretation of cognition, it is quite unlikely that we will be able to build a generally intelligent system without some component relying on graph representation learning. Note that this view does not clash with the fact that many recent capable ML systems are based on the Transformer architecture—as we will uncover in this talk, Transformers are themselves a special case of GNNs.
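The connection between Transformers and GNNs can be sketched directly: self-attention is message passing where every token attends to every other token, i.e. a GNN operating over a fully connected graph. The sketch below, with assumed names (`attention`, `A` for an optional adjacency mask), illustrates this: leaving `A` unset gives dense Transformer-style attention, while passing a binary adjacency restricts aggregation to graph edges, recovering an attentional GNN.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(X, Wq, Wk, Wv, A=None):
    """Single-head self-attention as message passing (illustrative sketch).

    A=None : every node attends to every node -> Transformer on a complete graph.
    A given: non-edges are masked out -> attentional GNN on the graph A.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # pairwise compatibility of nodes
    if A is not None:
        scores = np.where(A > 0, scores, -1e9)  # keep only edges of the graph
    return softmax(scores) @ V                  # attention-weighted neighbour aggregation

# Toy usage: 3 tokens/nodes with 4 features each
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
Wq, Wk, Wv = (rng.standard_normal((4, 2)) for _ in range(3))
dense = attention(X, Wq, Wk, Wv)                 # Transformer view: complete graph
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])                        # path graph with self-loops
sparse = attention(X, Wq, Wk, Wv, A)             # GNN view: only graph edges
```

The only difference between the two calls is which pairs of nodes are allowed to exchange messages.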
About the author/ODSC West speaker:
Petar Veličković is a Staff Research Scientist at Google DeepMind, an Affiliated Lecturer at the University of Cambridge, and an Associate of Clare Hall, Cambridge. Petar holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research concerns geometric deep learning—devising neural network architectures that respect the invariances and symmetries in data (a topic on which he has co-written a proto-book). Petar's research has been used to substantially improve travel-time predictions in Google Maps, and to guide mathematicians' intuition towards new top-tier theorems and conjectures.