Probabilistic Foundations of Statistical Network Analysis discusses foundational aspects of modern statistical network analysis. The following table of contents provides a chapter-by-chapter synopsis of the book. Updates and errata will be posted here.

Chapter 1: Orientation

Abstract

Although 'network data' arises in a wide variety of applications and introduces a number of new challenges to the development of statistical theory and methods, the field of 'statistical network analysis' remains mostly focused on a limited number of inference problems adapted from classical statistics. Topics such as community detection, asymptotic analysis, and minimax estimation for networks generated from stochastic blockmodels, exponential random graph models, and graphon models are readily studied by extending well-established ideas in clustering, large sample theory, and nonparametric regression, but they tend to overlook the big picture and the broader implications that motivate the study of networks in the first place. In this opening chapter, I discuss the limitations of the conventional 'networks-as-graphs' perspective of network analysis that has been adopted throughout much of the current literature. Throughout this book I emphasize network analysis not merely as a new discipline within statistics, on par with high-dimensional statistics and machine learning, but rather as a new statistical paradigm for complex data analysis which ought to be viewed in parallel to classical statistical theory. As modern data grows more complex, not in terms of size but rather in terms of dependence and heterogeneity, the usefulness of classical statistical thinking and methods for modern problems becomes marginalized. Rather than strain the classical theory of sampling and random graphs to treat these problems, I argue in favor of developing new theory and methods which better address the complexity and structure of modern problems. Such a theory does not yet exist. In writing this book, I hope to convince the reader of the need for such a theory, and to provide some ideas for how to make progress in a new direction.

Twitter thread

Slides for Chapter 1

Sample Chapter

Chapter 2: Binary Relational Data

Abstract

Though network science is seen as a modern field, the study of network-type data goes back at least to the early 1900s, with the initial sociometric studies of Moreno and other quantitative social scientists. The classical network models, namely exponential random graph models and stochastic blockmodels, originated in the early theoretical developments of social network analysis. These models are still used today, but their adequacy for modern network data is limited by their inability to account for heterogeneity observed in network data (e.g., sparsity and power law degree distributions) and for the sampling schemes by which network datasets are obtained. This chapter presents these early social network models, beginning with the p1 model and the exponential random graph model, discusses their initial motivations in social network analysis, and explores their limitations for modern network analysis. The discussion of sampling issues in the p1 and exponential random graph models sets the stage for the next chapter on network sampling.
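
For concreteness, here is the shared exponential-family form behind the p1 model and exponential random graph models; this is a standard textbook presentation, with s(g) a vector of network statistics (edges, reciprocated dyads, degrees, triangles, etc.), not notation taken from the book:

```latex
P_\theta(G = g) \;=\; \exp\bigl\{\theta^\top s(g) - \psi(\theta)\bigr\},
\qquad
\psi(\theta) \;=\; \log \sum_{g'} \exp\bigl\{\theta^\top s(g')\bigr\},
```

where the normalizing sum runs over all graphs on the same vertex set. The p1 model is the special case whose statistics count edges, mutual dyads, and vertex-specific out- and in-degrees; for richer statistics, the intractability of the normalizing constant is one well-known source of computational difficulty for exponential random graph models.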

Twitter thread

Slides for Chapter 2

Chapter 3: Network Sampling

Abstract

The large size of modern networks and the varied circumstances under which network data is obtained necessitate a theory for network analysis under sampling. Although a sampling theory for social network analysis had been developed in the social networks literature by Frank and his coauthors, its relevance to modern network analysis is unclear. Modern network data is often sampled, but not according to any well articulated or well understood design mechanism. In such cases, classical techniques, e.g., the Horvitz-Thompson estimator, which assume a careful and relatively straightforward sampling mechanism, such as simple random vertex sampling, may not be applicable. In many applications, it is known that the network has been sampled, but it is not known how, thus introducing uncertainty about the observation mechanism over and above ordinary statistical variation. Furthermore, the variety of ways in which network data arise, e.g., by vertex, edge, path, or snowball sampling, introduces potential biases which should be addressed when modeling the data. The chapter also discusses how other standard statistical considerations, such as sample size, study design, and the basic observational units, feature in network analysis.
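
For reference, the Horvitz-Thompson estimator mentioned above takes the following standard form: if a sample S is drawn so that unit i is included with known probability π_i > 0, then the population total τ = Σ_i y_i is estimated unbiasedly by

```latex
\hat{\tau}_{HT} \;=\; \sum_{i \in S} \frac{y_i}{\pi_i}.
```

Its reliance on known, positive inclusion probabilities is precisely what is unavailable when a network has been sampled by an unknown or poorly understood mechanism.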

Twitter thread

Slides for Chapter 3

Chapter 4: Generative Models

Abstract

As a complement to sampled network data, which are observed as part of a larger population structure, some networks (e.g., the World Wide Web and some social networks) evolve according to a (possibly unknown) generating mechanism. In such cases, the network data may be a complete census of the network as it exists at the time of observation, in which case the objective is to understand the mechanism by which the network is evolving in order to better predict or anticipate future updates to the network. The preferential attachment model (Barabási and Albert, 1999) is a well known generative model for describing the emergence of power law structure in real world networks. In this brief chapter, I discuss some basic properties of the preferential attachment model and other generative models.
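
As a minimal sketch (not code from the book), preferential attachment can be simulated with a degree-proportional urn; the function name and parameter values below are illustrative:

```python
import random

def preferential_attachment(n, m=2, seed=None):
    """Grow a Barabasi-Albert-style graph on n vertices: each new vertex
    attaches to m existing vertices chosen with probability proportional
    to their current degrees."""
    rng = random.Random(seed)
    # Start from a small clique so that every early vertex has positive degree.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Urn trick: each vertex appears once per unit of degree, so a uniform
    # draw from the urn is a degree-proportional draw over vertices.
    urn = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(urn))
        for t in targets:
            edges.append((t, new))
            urn.extend([t, new])
    return edges

edges = preferential_attachment(10_000, m=2, seed=1)
```

The degree-proportional rule is what produces the power law degree distribution: high-degree vertices accumulate new edges at a rate proportional to their degree.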

Twitter thread

Slides for Chapter 4

Chapter 5: Statistical Modeling Paradigm

Abstract

Previous chapters highlight the two main considerations of statistical modeling: (i) describing variability and uncertainty in the observed network and (ii) articulating the context in which observations are to be interpreted during inference. Although the common perspective of statistical network modeling addresses only consideration (i), contextual factors, such as sampling design and other circumstances surrounding data collection, are essential to sound network analysis. The modeling paradigm introduced in this chapter brings components (i) and (ii) together in the concept of model coherence. Along with the coherence condition, the paradigm presented in this chapter provides a general modeling framework which can serve as a foundation for theoretical and methodological developments in complex data analysis.

Chapter 6: Vertex Exchangeable Models

Abstract

Invariance principles play a central role in articulating how the observed data 'represents' the system from which it has been observed. In many applications, an invariance principle is chosen so that observations about a sample can be extended to inferences about the population. But in many circumstances, the theoretical justification for extending inferences based on the assumed invariance principle does not align with the context in which the original inferences have been made. In network modeling, vertex exchangeability is one such prominent invariance principle, on which a great deal of theoretical work involving network sampling and inference has been based. In a vertex exchangeable model, any two graphs that are isomorphic up to relabeling of their vertices are assigned equal probability. Implicit in this model property is the assumption that the observed network comes from a representative sample of vertices from the population. In this chapter I discuss some fundamental aspects of vertex exchangeability, including graphon models, the Aldous-Hoover theorem, the theory of dense graph limits, and how the homogeneity properties implied by vertex exchangeability raise doubts about the use of graphons in modern network analysis.
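
A minimal sketch of the two-stage sampling scheme underlying graphon models (the particular graphon below is invented for illustration):

```python
import random

def sample_graphon(n, W, seed=None):
    """Graphon sampling: draw U_1, ..., U_n i.i.d. Uniform(0,1), then join
    vertices i and j independently with probability W(U_i, U_j)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < W(u[i], u[j])]

# Any measurable symmetric W: [0,1]^2 -> [0,1] works; this one is illustrative.
edges = sample_graphon(200, lambda x, y: x * y, seed=1)
```

Because the connection probability for each pair does not shrink with n, the expected edge density stays constant, so graphs sampled this way are dense (or empty); this is the homogeneity issue behind the doubts raised above.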

Chapter 7: Getting Beyond Graphons

Abstract

Chapter 6 highlights several limitations of vertex exchangeability and graphon models, especially when it comes to modeling sampled networks exhibiting empirical properties such as sparsity and power law degree distributions. Although vertex exchangeability makes certain theoretical and computational aspects of network analysis tractable, the theory and computations made possible by this assumption are of little practical value because they fail to consider the data in a realistic context for modern network analysis. Some initial attempts to move beyond graphons include so-called 'sparse graphon' approaches and the Caron-Fox model based on completely random measures. This chapter discusses both approaches, with additional discussion of the graphex representation and sampling interpretations for the Caron-Fox model.
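
The 'sparse graphon' idea can be sketched as a one-line change to the Chapter 6 sampler: damp every edge probability by a factor ρ(n) → 0. The choice ρ(n) = n^{-1/2} below is illustrative; the Caron-Fox construction itself, based on completely random measures, is more involved and is not sketched here.

```python
import math
import random

def sample_sparse_graphon(n, W, rho, seed=None):
    """Graphon sampling with edge probabilities damped by rho(n) -> 0,
    giving on the order of rho(n) * n^2 expected edges (a sparse sequence)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    r = rho(n)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < r * W(u[i], u[j])]

edges = sample_sparse_graphon(500, lambda x, y: x * y,
                              rho=lambda n: 1 / math.sqrt(n), seed=1)
```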

Chapter 8: Relatively Exchangeable Models

Abstract

Relative exchangeability refines vertex exchangeability (Chapter 6) by defining the invariance of a network in terms of the symmetries of some other (fixed, known) structure on the population of vertices. Stochastic blockmodels (SBMs) are a standard example of relative exchangeability defined with respect to a classification of vertices into distinct groups (or 'blocks'). More generally, relatively exchangeable models can account for heterogeneity caused by an underlying social network structure, covariate information (as in the latent space model), or relations described by more generic combinatorial structures. The theory for relatively exchangeable random graphs mirrors that of Chapter 6, with the Ackerman-Crane-Towsner theorem providing a refinement of the Aldous-Hoover theorem. Further extensions and applications of the theory presented here are topics for future research.
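
An illustrative sketch of the stochastic blockmodel in its simplest form, with known block labels; the two-block connectivity matrix below is invented:

```python
import random

def sample_sbm(blocks, B, seed=None):
    """Stochastic blockmodel: vertex i carries a known label blocks[i]; i and
    j are joined independently with probability B[blocks[i]][blocks[j]]."""
    rng = random.Random(seed)
    n = len(blocks)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < B[blocks[i]][blocks[j]]]

# Two assortative blocks (illustrative values).
blocks = [0] * 50 + [1] * 50
B = [[0.30, 0.02],
     [0.02, 0.30]]
edges = sample_sbm(blocks, B, seed=1)
```

Relabeling vertices within a block leaves this distribution unchanged, which is exactly the relative exchangeability with respect to the block structure described above.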

Chapter 9: Edge Exchangeable Models

Abstract

The Crane-Dempsey edge exchangeable framework posits a new model class for networks constructed by repeated sampling of binary interactions. Edge exchangeability is defined analogously to vertex exchangeability, by assigning equal probability to any two edge-labeled graphs that are isomorphic with respect to relabeling of their edges (instead of their vertices). The major breakthrough of edge exchangeability is its new perspective, which more faithfully captures the way in which many interaction networks are observed (e.g., by sampling edges instead of vertices). The canonical edge exchangeable model, called the Hollywood model, easily accounts for sparse, power law structure in modern network data and provides fertile ground for future methodological developments.
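
A minimal sketch of the Hollywood model for binary interactions, read as a Pitman-Yor-type urn over vertices; treat the update probabilities as my reading of the (alpha, theta) parameterization rather than a definitive implementation:

```python
import random

def hollywood(num_edges, alpha=0.5, theta=1.0, seed=None):
    """Hollywood process sketch: each edge fills two vertex slots in turn.
    With v vertices seen and total degree N, a slot is a brand-new vertex
    with probability (theta + v*alpha) / (N + theta), and existing vertex w
    with probability (deg(w) - alpha) / (N + theta)."""
    rng = random.Random(seed)
    deg = []   # deg[w] = current degree of vertex w
    N = 0      # total degree (vertex slots filled so far)
    edges = []
    for _ in range(num_edges):
        edge = []
        for _ in range(2):  # binary interactions, as in this chapter
            if rng.random() < (theta + len(deg) * alpha) / (N + theta):
                deg.append(0)          # a never-before-seen vertex
                w = len(deg) - 1
            else:
                # choose an existing vertex with weight deg[w] - alpha
                r, w, acc = rng.random() * (N - len(deg) * alpha), 0, 0.0
                while acc + deg[w] - alpha <= r:
                    acc += deg[w] - alpha
                    w += 1
            deg[w] += 1
            N += 1
            edge.append(w)
        edges.append(tuple(edge))
    return edges

edges = hollywood(10_000, alpha=0.5, theta=1.0, seed=1)
```

For alpha in (0, 1), new vertices keep arriving while a few early vertices accumulate high degree, which is the mechanism behind the sparse, power law behavior noted above.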

Chapter 10: Relationally Exchangeable Models

Abstract

Relational exchangeability refines edge exchangeability by considering networks constructed by repeated sampling of a general class of interactions, as is common for networks constructed from database queries (e.g., email communications, scientific coauthorships, and paths in the Internet). Whereas edge exchangeable models are limited to networks constructed from binary interactions, such as caller-receiver pairs in a phone call database, relationally exchangeable models allow for generic interactions, such as multiway interactions among email recipients or coauthors, paths between routers in the Internet, etc. With its inclusion of hyperedge exchangeable and path exchangeable models, relational exchangeability is thus a general variant on the theme of edge exchangeability, leading to a theory analogous to, and a framework broader than, that provided by the edge exchangeable models of Chapter 9.
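
A hedged sketch of the hyperedge case: the same vertex urn as in the Chapter 9 sketch, except that each interaction now draws a random number of participants (the arity distribution below is invented for illustration):

```python
import random

def relational_sample(num_interactions, arity, alpha=0.5, theta=1.0, seed=None):
    """Repeatedly sample interactions of random size from a Pitman-Yor-type
    vertex urn: a sketch of a hyperedge exchangeable construction."""
    rng = random.Random(seed)
    deg, N, interactions = [], 0, []
    for _ in range(num_interactions):
        k = arity(rng)  # e.g., number of coauthors or email recipients
        members = []
        for _ in range(k):
            if rng.random() < (theta + len(deg) * alpha) / (N + theta):
                deg.append(0)          # a never-before-seen vertex
                w = len(deg) - 1
            else:
                r, w, acc = rng.random() * (N - len(deg) * alpha), 0, 0.0
                while acc + deg[w] - alpha <= r:
                    acc += deg[w] - alpha
                    w += 1
            deg[w] += 1
            N += 1
            members.append(w)
        interactions.append(tuple(members))
    return interactions

# Illustrative arity distribution: 2 to 5 participants per interaction.
hyperedges = relational_sample(5_000, lambda rng: rng.randint(2, 5), seed=1)
```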

Link to publishing statistics for the Annals of Statistics cited in this chapter.

Chapter 11: Dynamic Network Models

Abstract

Much of the discussion in previous chapters focuses on data for a single network, such as networks formed by friendships among high school students, social media interactions, professional collaborations, and connectivity in the Internet. Many networks, in addition to representing complex dependencies and interactions, change with respect to time. Thus, on top of the sampling considerations discussed throughout Chapters 5-10 and the exchangeability and other invariance principles of previous chapters, such dynamic networks introduce a temporal dimension which must be incorporated into the model in a coherent way. At the time of publication, statistical methods for analyzing modern dynamic networks are relatively underdeveloped. To give a sense of some basic challenges and considerations facing this developing area, I focus this chapter on one specific temporal invariance principle, called Markovian projectivity, which acts as a dynamic version of the consistency under selection property discussed in earlier chapters. Markovian projectivity is presented in the context of two classes of dynamic networks, called rewiring processes and graph-valued Lévy processes, both of which are ripe for future development in theory, methodology, and applications.
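
To make Markovian projectivity concrete, here is a toy example, purely illustrative and much simpler than the rewiring and graph-valued Lévy processes of the chapter: if every dyad evolves as an independent two-state Markov chain, then restricting the network process to any subset of vertices yields a Markov chain of the same form, so the dynamics project consistently onto subnetworks.

```python
import random

def dyad_markov_dynamics(n, steps, p_on=0.01, p_off=0.05, seed=None):
    """Toy dynamic network: each dyad is an independent two-state Markov
    chain (absent -> present with prob p_on, present -> absent with prob
    p_off). Independence across dyads means the restriction to any vertex
    subset is again a Markov chain with the same transition rule, a simple
    instance of Markovian projectivity (illustrative only)."""
    rng = random.Random(seed)
    state = {(i, j): False for i in range(n) for j in range(i + 1, n)}
    snapshots = []
    for _ in range(steps):
        for d, present in state.items():
            state[d] = (rng.random() >= p_off) if present else (rng.random() < p_on)
        snapshots.append({d for d, present in state.items() if present})
    return snapshots

snapshots = dyad_markov_dynamics(30, steps=100, seed=1)
```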