Stochastic block model
The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis by Holland et al.[1] The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data.
Definition
The stochastic block model takes the following parameters:
- The number of vertices $n$;
- a partition of the vertex set $\{1, \ldots, n\}$ into disjoint subsets $C_1, \ldots, C_r$, called communities;
- a symmetric $r \times r$ matrix $P$ of edge probabilities.
The edge set is then sampled at random as follows: any two vertices $u \in C_i$ and $v \in C_j$ are connected by an edge with probability $P_{ij}$. An example problem is: given a graph with $n$ vertices, where the edges are sampled as described, recover the groups $C_1, \ldots, C_r$.
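As an illustration of the sampling procedure just described, here is a minimal sketch in Python using only NumPy; the function sample_sbm and its signature are chosen for exposition and do not come from any particular library.

```python
import numpy as np

def sample_sbm(community_sizes, P, rng=None):
    """Sample an undirected graph from a stochastic block model.

    community_sizes: sizes of the communities C_1, ..., C_r.
    P: symmetric r x r matrix of edge probabilities.
    Returns the n x n adjacency matrix and each vertex's community label.
    """
    rng = np.random.default_rng(rng)
    labels = np.repeat(np.arange(len(community_sizes)), community_sizes)
    n = labels.size
    # Edge probability P_{ij} for every pair (u, v) with u in C_i and v in C_j.
    prob = P[labels[:, None], labels[None, :]]
    # Sample each unordered pair of distinct vertices independently, then symmetrize.
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    A = (upper | upper.T).astype(int)
    return A, labels

# Example: two communities of 50 vertices each, assortative planted partition.
A, labels = sample_sbm([50, 50], np.array([[0.5, 0.1], [0.1, 0.5]]), rng=0)
```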
Special cases
If the probability matrix is a constant, in the sense that $P_{ij} = p$ for all $i, j$, then the result is the Erdős–Rényi model $G(n, p)$. This case is degenerate—the partition into communities becomes irrelevant—but it illustrates a close relationship to the Erdős–Rényi model.
The planted partition model is the special case in which the values of the probability matrix $P$ are a constant $p$ on the diagonal and another constant $q$ off the diagonal. Thus two vertices within the same community share an edge with probability $p$, while two vertices in different communities share an edge with probability $q$. Sometimes it is this restricted model that is called the stochastic block model. The case where $p > q$ is called an assortative model, while the case $p < q$ is called disassortative.
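For example, written out with the conventions above, the probability matrix of a planted partition model with $r$ communities is
$$P = \begin{pmatrix} p & q & \cdots & q \\ q & p & \cdots & q \\ \vdots & \vdots & \ddots & \vdots \\ q & q & \cdots & p \end{pmatrix}.$$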
Returning to the general stochastic block model, a model is called strongly assortative if $P_{ii} > P_{jk}$ whenever $j \neq k$: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if $P_{ii} > P_{ij}$ whenever $i \neq j$: each diagonal entry is only required to dominate the rest of its own row and column.[2] Disassortative forms of this terminology exist, by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.[2]
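As a concrete illustration (the numerical values here are chosen purely for the example), the matrix
$$P = \begin{pmatrix} 0.3 & 0.1 & 0.1 \\ 0.1 & 0.6 & 0.4 \\ 0.1 & 0.4 & 0.5 \end{pmatrix}$$
is weakly assortative but not strongly assortative: each diagonal entry exceeds every off-diagonal entry in its own row and column, yet the diagonal entry $0.3$ is smaller than the off-diagonal entry $0.4$ shared by the other two communities.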
Typical statistical tasks
Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.
Detection
The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similar Erdős–Rényi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.[3]
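The statistics that achieve the optimal detection threshold are delicate (see the references cited above), but a simple spectral heuristic conveys the idea. The following sketch, in Python with NumPy, is an illustrative assumption rather than an algorithm from the cited papers: it flags likely block structure when the second-largest adjacency eigenvalue escapes the spectral bulk expected of an Erdős–Rényi graph with the same average degree, a test known to degrade on very sparse graphs, where non-backtracking spectra are preferred.

```python
import numpy as np

def spectral_detection_statistic(A):
    """Naive detection heuristic: ratio of the second-largest adjacency
    eigenvalue to the approximate Erdős–Rényi bulk edge 2*sqrt(avg degree).
    Values clearly above 1 suggest latent community structure."""
    eigvals = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending order
    avg_degree = A.sum() / A.shape[0]
    return eigvals[1] / (2 * np.sqrt(avg_degree))
```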
Partial recovery
In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.[4]
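One common formalization (stated here for $r$ communities of comparable size; conventions vary slightly across the references) writes $\sigma(u)$ for the true community of vertex $u$ and $\hat{\sigma}(u)$ for the estimated one, and requires the agreement, maximized over relabelings $\pi$ of the communities, to beat random guessing by a constant margin with high probability:
$$\frac{1}{n} \max_{\pi} \sum_{u=1}^{n} \mathbf{1}\bigl[\hat{\sigma}(u) = \pi(\sigma(u))\bigr] \geq \frac{1}{r} + \varepsilon \quad \text{for some fixed } \varepsilon > 0.$$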
Exact recovery
In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known[5] or unknown.[6]
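In the same notation, exact recovery asks for an estimator that equals the true assignment up to a relabeling of the communities with probability tending to one:
$$\mathbb{P}\bigl[\exists\, \pi : \hat{\sigma} = \pi \circ \sigma\bigr] \to 1 \quad \text{as } n \to \infty.$$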
Statistical lower bounds and threshold behavior
Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds.[7][3][8] Suppose that we allow the size $n$ of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as $n$ increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used.
For partial recovery, the appropriate scaling is to take $P_{ij} = \tilde{P}_{ij}/n$ for a fixed matrix $\tilde{P}$, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrix
$$P = \begin{pmatrix} \tilde{p}/n & \tilde{q}/n \\ \tilde{q}/n & \tilde{p}/n \end{pmatrix},$$
partial recovery is feasible with probability $1 - o(1)$ whenever $(\tilde{p} - \tilde{q})^2 > 2(\tilde{p} + \tilde{q})$, whereas any estimator fails partial recovery with probability $1 - o(1)$ whenever $(\tilde{p} - \tilde{q})^2 < 2(\tilde{p} + \tilde{q})$.
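For example, with $\tilde{p} = 5$ and $\tilde{q} = 1$ the feasibility condition holds, since $(\tilde{p} - \tilde{q})^2 = 16 > 2(\tilde{p} + \tilde{q}) = 12$, whereas with $\tilde{p} = 3$ and $\tilde{q} = 1$ it fails, since $(\tilde{p} - \tilde{q})^2 = 4 < 2(\tilde{p} + \tilde{q}) = 8$.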
For exact recovery, the appropriate scaling is to take $P_{ij} = \tilde{P}_{ij} \ln(n)/n$, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with $r$ equal-sized communities, the threshold lies at $\sqrt{\tilde{p}} - \sqrt{\tilde{q}} = \sqrt{r}$. In fact, the exact recovery threshold is known for the fully general stochastic block model.[5]
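For instance, with two communities ($r = 2$), the choice $\tilde{p} = 10$, $\tilde{q} = 2$ lies above the threshold, since $\sqrt{10} - \sqrt{2} \approx 1.75 > \sqrt{2} \approx 1.41$, while $\tilde{p} = 4$, $\tilde{q} = 1$ lies below it, since $\sqrt{4} - \sqrt{1} = 1 < \sqrt{2}$.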
Algorithms
In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection, which is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case.
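Concretely, for a known probability matrix $P$ and observed adjacency matrix $A$, the maximum-likelihood estimate is the community assignment $\sigma$ maximizing the log-likelihood of the independently sampled edges,
$$\hat{\sigma} = \arg\max_{\sigma} \sum_{u < v} \Bigl[ A_{uv} \log P_{\sigma(u)\sigma(v)} + (1 - A_{uv}) \log\bigl(1 - P_{\sigma(u)\sigma(v)}\bigr) \Bigr],$$
subject to whatever constraints are placed on the community sizes.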
However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices,[9][4][5][10] semidefinite programming,[2][8] forms of belief propagation,[7][11] and distributed community detection,[12] among others.
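As one concrete illustration, a basic spectral clustering pipeline for the two-community case can be sketched as follows in Python with NumPy. This is a simplified version of the spectral approach, not the exact algorithm of any cited paper; the methods that reach the thresholds above typically add steps such as trimming high-degree vertices, regularization, or non-backtracking operators.

```python
import numpy as np

def spectral_partition(A):
    """Estimate a two-community partition from the sign pattern of the
    eigenvector attached to the second-largest adjacency eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    second = eigvecs[:, -2]                # eigenvector of the 2nd-largest eigenvalue
    return (second > 0).astype(int)        # labels in {0, 1}

# Usage with the hypothetical sampler sketched in the Definition section:
# A, labels = sample_sbm([50, 50], np.array([[0.5, 0.1], [0.1, 0.5]]), rng=0)
# estimate = spectral_partition(A)   # compare to `labels` up to relabeling
```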
Variants
Several variants of the model exist. One minor variation allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition.[5] More significant variants include the degree-corrected stochastic block model,[13] the hierarchical stochastic block model,[14] the geometric block model,[15] the censored block model, and the mixed-membership block model.[16]
Topic models
The stochastic block model has been recognised as a topic model on bipartite networks.[17] In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning.
Extensions to signed graphs
Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights, or equivalently by taking the difference of the adjacency matrices of two stochastic block models.[18]
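A minimal sketch of that construction, in Python with NumPy and reusing the hypothetical sample_sbm helper from the Definition section, appears below; the parameter matrices are illustrative. Positive edges are drawn from one stochastic block model and negative edges from another, and the signed adjacency matrix is their difference.

```python
import numpy as np

def sample_signed_sbm(community_sizes, P_plus, P_minus, rng=None):
    """Signed SBM as a difference of two SBM adjacency matrices:
    +1 edges come from the first model, -1 edges from the second
    (pairs sampled by both models cancel to 0)."""
    rng = np.random.default_rng(rng)
    A_plus, labels = sample_sbm(community_sizes, P_plus, rng=rng)
    A_minus, _ = sample_sbm(community_sizes, P_minus, rng=rng)
    return A_plus - A_minus, labels

# Example: friendly edges mostly within communities, hostile edges mostly across.
S, labels = sample_signed_sbm([50, 50],
                              np.array([[0.4, 0.05], [0.05, 0.4]]),
                              np.array([[0.05, 0.3], [0.3, 0.05]]),
                              rng=1)
```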
DARPA/MIT/AWS Graph Challenge: streaming stochastic block partition
GraphChallenge[19] encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition has been one of the challenges since 2017.[20] Spectral clustering has demonstrated outstanding performance compared to the original baseline algorithm and even to an improved[21] version of it, matching their quality of clusters while being multiple orders of magnitude faster.[22][23]
See also
- Blockmodeling
- Girvan–Newman algorithm – Community detection algorithm
- Lancichinetti–Fortunato–Radicchi benchmark – Algorithm for generating benchmark networks with communities
References
- ^ Holland, Paul W; Laskey, Kathryn Blackmond; Leinhardt, Samuel (1983). "Stochastic blockmodels: First steps". Social Networks. 5 (2): 109–137. doi:10.1016/0378-8733(83)90021-7. ISSN 0378-8733. Retrieved 2021-06-16.
- ^ a b c Amini, Arash A.; Levina, Elizaveta (June 2014). "On semidefinite relaxations for the block model". arXiv:1406.5647 [cs.LG].
- ^ a b c Mossel, Elchanan; Neeman, Joe; Sly, Allan (February 2012). "Stochastic Block Models and Reconstruction". arXiv:1202.1499 [math.PR].
- ^ a b c Massoulie, Laurent (November 2013). "Community detection thresholds and the weak Ramanujan property". arXiv:1311.3085 [cs.SI].
- ^ a b c d Abbe, Emmanuel; Sandon, Colin (March 2015). "Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms". arXiv:1503.00609 [math.PR].
- ^ Abbe, Emmanuel; Sandon, Colin (June 2015). "Recovering communities in the general stochastic block model without knowing the parameters". arXiv:1506.03729 [math.PR].
- ^ a b Decelle, Aurelien; Krzakala, Florent; Moore, Cristopher; Zdeborová, Lenka (September 2011). "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications". Physical Review E. 84 (6): 066106. arXiv:1109.3041. Bibcode:2011PhRvE..84f6106D. doi:10.1103/PhysRevE.84.066106. PMID 22304154. S2CID 15788070.
- ^ a b Abbe, Emmanuel; Bandeira, Afonso S.; Hall, Georgina (May 2014). "Exact Recovery in the Stochastic Block Model". arXiv:1405.3267 [cs.SI].
- ^ Krzakala, Florent; Moore, Cristopher; Mossel, Elchanan; Neeman, Joe; Sly, Allan; Zdeborová, Lenka; Zhang, Pan (October 2013). "Spectral redemption in clustering sparse networks". Proceedings of the National Academy of Sciences. 110 (52): 20935–20940. arXiv:1306.5550. Bibcode:2013PNAS..11020935K. doi:10.1073/pnas.1312486110. PMC 3876200. PMID 24277835.
- ^ Lei, Jing; Rinaldo, Alessandro (February 2015). "Consistency of spectral clustering in stochastic block models". The Annals of Statistics. 43 (1): 215–237. arXiv:1312.2050. doi:10.1214/14-AOS1274. ISSN 0090-5364. S2CID 88519551.
- ^ Mossel, Elchanan; Neeman, Joe; Sly, Allan (September 2013). "Belief Propagation, Robust Reconstruction, and Optimal Recovery of Block Models". The Annals of Applied Probability. 26 (4): 2211–2256. arXiv:1309.1380. Bibcode:2013arXiv1309.1380M. doi:10.1214/15-AAP1145. S2CID 184446.
- ^ Fathi, Reza (April 2019). "Efficient Distributed Community Detection in the Stochastic Block Model". arXiv:1904.07494 [cs.DC].
- ^ Karrer, Brian; Newman, Mark E J (2011). "Stochastic blockmodels and community structure in networks". Physical Review E. 83 (1): 016107. arXiv:1008.3926. doi:10.1103/PhysRevE.83.016107. PMID 21405744. S2CID 9068097. Retrieved 2021-06-16.
- ^ Peixoto, Tiago (2014). "Hierarchical block structures and high-resolution model selection in large networks". Physical Review X. 4 (1): 011047. arXiv:1310.4377. doi:10.1103/PhysRevX.4.011047. S2CID 5841379. Retrieved 2021-06-16.
- ^ Galhotra, Sainyam; Mazumdar, Arya; Pal, Soumyabrata; Saha, Barna (February 2018). "The Geometric Block Model". AAAI. arXiv:1709.05510.
- ^ Airoldi, Edoardo; Blei, David; Fienberg, Stephen; Xing, Eric (May 2007). "Mixed membership stochastic blockmodels". Journal of Machine Learning Research. 9: 1981–2014. arXiv:0705.4485. Bibcode:2007arXiv0705.4485A. PMC 3119541. PMID 21701698.
- ^ Martin Gerlach; Tiago Peixoto; Eduardo Altmann (2018). "A network approach to topic models". Science Advances. 4 (7): eaaq1360. arXiv:1708.01677. Bibcode:2018SciA....4.1360G. doi:10.1126/sciadv.aaq1360. PMC 6051742. PMID 30035215.
- ^ Alyson Fox; Geoffrey Sanders; Andrew Knyazev (2018). "Investigation of Spectral Clustering for Signed Graph Matrix Representations". 2018 IEEE High Performance extreme Computing Conference (HPEC). doi:10.1109/HPEC.2018.8547575.
- ^ DARPA/MIT/AWS Graph Challenge
- ^ DARPA/MIT/AWS Graph Challenge Champions
- ^ A. J. Uppal; J. Choi; T. B. Rolinger; H. Howie Huang (2021). "Faster Stochastic Block Partition Using Aggressive Initial Merging, Compressed Representation, and Parallelism Control". 2021 IEEE High Performance Extreme Computing Conference (HPEC). doi:10.1109/HPEC49654.2021.9622836.
- ^ David Zhuzhunashvili; Andrew Knyazev (2017). "Preconditioned spectral clustering for stochastic block partition streaming graph challenge". 2017 IEEE High Performance Extreme Computing Conference (HPEC). doi:10.1109/HPEC.2017.8091045.
- ^ Lisa Durbeck; Peter Athanas (2020). "Incremental Streaming Graph Partitioning". 2020 IEEE High Performance Extreme Computing Conference (HPEC). doi:10.1109/HPEC43674.2020.9286181.