Our postdoc Milan Papež and his colleagues have succeeded in their research on incorporating graph inputs into sum-product networks. The method can help in AI applications that require tractable probabilistic inference, such as generating new molecules conditioned on existing ones.
Graphs are powerful tools for representing objects and their interactions in real-world applications such as developing personalized diagnostic strategies, estimating arrival times, or discovering new materials. Despite their usefulness, modeling the probability distribution of graphs is a complex task due to their highly combinatorial nature – the fact that there are many different ways to connect nodes (the objects) with edges (the interactions between them).
Addressing intractability with sum-product networks
In recent years, deep generative models have advanced the ability to tackle this complexity. However, they come with a significant limitation: they are intractable. This means they cannot provide exact answers to certain probabilistic queries and must resort to approximations, especially when determining the marginal probability of specific parts of a graph. Whenever such marginal probabilities are needed quickly and exactly, traditional deep generative models are difficult to use.
Sum-product networks (SPNs) offer a promising alternative. They are structured into layers made up of two types of operations: sums and products. This structure allows SPNs to break complicated distributions down into smaller, manageable parts, making it possible to compute marginal probabilities exactly and efficiently. One issue still remains – SPNs have traditionally been applied to standard data formats like vectors, not graphs.
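To give a flavor of how this works, here is a minimal Python sketch (our own illustrative example, not code from the paper) of a tiny SPN over two binary variables. The leaf distributions, mixture weights, and the trick of marginalizing a variable by replacing its leaf with 1 are standard SPN ingredients; the specific numbers are made up.

```python
# A toy SPN over two binary variables X1, X2: a sum node (mixture) over two
# product nodes, each combining independent Bernoulli leaves.

def bernoulli(p, value):
    """Leaf distribution. value=None means 'marginalize this variable out',
    which in an SPN simply replaces the leaf's output by 1."""
    if value is None:
        return 1.0
    return p if value == 1 else 1.0 - p

def spn(x1, x2):
    # Two product nodes: within each component, X1 and X2 are independent.
    comp_a = bernoulli(0.9, x1) * bernoulli(0.2, x2)
    comp_b = bernoulli(0.3, x1) * bernoulli(0.7, x2)
    # Sum node: a mixture of the two components with weights 0.6 and 0.4.
    return 0.6 * comp_a + 0.4 * comp_b

# Joint probability P(X1=1, X2=0):
print(spn(1, 0))
# Exact marginal P(X1=1), obtained in a single bottom-up pass:
print(spn(1, None))
# Sanity check: explicit summation over X2 gives the same number.
print(spn(1, 0) + spn(1, 1))
```

The point of the example is that the marginal requires no enumeration of configurations: setting the marginalized leaf to 1 and evaluating the network once is enough, which is exactly the kind of query intractable models can only approximate.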
Novel research by Milan Papež and his colleagues from our center extends SPNs to handle graph data as well. In their paper, they introduce a new class of models called GraphSPNs and explore different methods to ensure that GraphSPNs remain order-agnostic (nodes can be put in any order). They found that determining a canonical representation of a graph before inputting it into the model works best. In applications like molecular discovery, GraphSPNs not only hold their own against intractable models but also offer unique advantages, such as the ability to generate new molecules based on a known one – something intractable models struggle to do.
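The following sketch (again our own illustration, assuming the RDKit library rather than the authors' code) shows what a canonical ordering means in practice: the atoms of a molecular graph are relabeled by RDKit's canonical ranking, so two different orderings of the same molecule produce identical model inputs.

```python
from rdkit import Chem

def canonicalize(smiles: str):
    """Return node symbols and adjacency matrix in a canonical atom order."""
    mol = Chem.MolFromSmiles(smiles)
    # Canonical rank of each atom, independent of the input atom order.
    ranks = list(Chem.CanonicalRankAtoms(mol))
    # New ordering: position i holds the original index of the atom with rank i.
    order = sorted(range(mol.GetNumAtoms()), key=lambda i: ranks[i])
    mol = Chem.RenumberAtoms(mol, order)
    atoms = [a.GetSymbol() for a in mol.GetAtoms()]
    adjacency = Chem.GetAdjacencyMatrix(mol)
    return atoms, adjacency

# Two SMILES strings describing the same molecule (ethanol) with atoms listed
# in different orders now yield identical node and adjacency arrays.
a1, adj1 = canonicalize("CCO")
a2, adj2 = canonicalize("OCC")
print(a1 == a2, (adj1 == adj2).all())
```

Because every ordering of the same molecule collapses to one representation, the model never has to learn that differently permuted inputs describe the same graph.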

Read the award-winning paper
Interested in learning more about the novel approach? Check out the article titled GraphSPNs: Sum-Product Networks Benefit From Canonical Orderings, which received a Best Paper Honorable Mention at the 7th Workshop on Tractable Probabilistic Modeling (TPM). The workshop is organized within the Conference on Uncertainty in Artificial Intelligence (UAI), one of the top conferences on knowledge representation, learning, and reasoning in the presence of uncertainty.