Submodular set function

In mathematics, a submodular set function (also known as a submodular function) is a set function that, informally, has the property that the incremental value gained by adding a single element to an input set decreases as the input set grows. Submodular functions have a natural diminishing returns property which makes them suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences) and electrical networks. Recently, submodular functions have also found immense utility in several real-world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains.[1][2][3][4]

Definition

If $\Omega$ is a finite set, a submodular function is a set function $f\colon 2^{\Omega} \to \mathbb{R}$, where $2^{\Omega}$ denotes the power set of $\Omega$, which satisfies one of the following equivalent conditions.[5]

  1. For every $X, Y \subseteq \Omega$ with $X \subseteq Y$ and every $x \in \Omega \setminus Y$ we have that $f(X \cup \{x\}) - f(X) \geq f(Y \cup \{x\}) - f(Y)$.
  2. For every $S, T \subseteq \Omega$ we have that $f(S) + f(T) \geq f(S \cup T) + f(S \cap T)$.
  3. For every $X \subseteq \Omega$ and $x_1, x_2 \in \Omega \setminus X$ such that $x_1 \neq x_2$ we have that $f(X \cup \{x_1\}) + f(X \cup \{x_2\}) \geq f(X \cup \{x_1, x_2\}) + f(X)$.

A nonnegative submodular function is also a subadditive function, but a subadditive function need not be submodular. If $\Omega$ is not assumed finite, then the above conditions are not equivalent. In particular a function $f$ defined by $f(S) = 1$ if $S$ is finite and $f(S) = 0$ if $S$ is infinite satisfies the first condition above, but the second condition fails when $S$ and $T$ are infinite sets with finite intersection.
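
For a small ground set, submodularity can be checked directly from condition 2 by enumerating all pairs of subsets. The following is a minimal Python sketch of such a check for a simple coverage-style function; the names (is_submodular, coverage) are illustrative and not taken from any particular library.

    from itertools import chain, combinations

    def all_subsets(ground):
        """Yield every subset of the ground set as a frozenset."""
        items = list(ground)
        return (frozenset(c) for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

    def is_submodular(f, ground):
        """Brute-force test of f(S) + f(T) >= f(S | T) + f(S & T) for all S, T."""
        subsets = list(all_subsets(ground))
        return all(f(S) + f(T) >= f(S | T) + f(S & T) - 1e-9
                   for S in subsets for T in subsets)

    # Example: f(S) = number of ground elements covered by the chosen sets.
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
    coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
    print(is_submodular(coverage, sets.keys()))  # True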

Types of submodular functions

Monotone

A submodular function $f$ is monotone if for every $T \subseteq S$ we have that $f(T) \leq f(S)$. Examples of monotone submodular functions include:

Linear (Modular) functions
Any function of the form $f(S) = \sum_{i \in S} w_i$ is called a linear function. Additionally if $\forall i, w_i \geq 0$ then $f$ is monotone.
Budget-additive functions
Any function of the form $f(S) = \min\left\{B, \sum_{i \in S} w_i\right\}$ for each $w_i \geq 0$ and $B \geq 0$ is called budget additive.[citation needed]
Coverage functions
Let $\Omega = \{E_1, E_2, \ldots, E_n\}$ be a collection of subsets of some ground set $\Omega'$. The function $f(S) = \left|\bigcup_{E_i \in S} E_i\right|$ for $S \subseteq \Omega$ is called a coverage function. This can be generalized by adding non-negative weights to the elements (a weighted example appears after this list).
Entropy
Let $\Omega = \{X_1, X_2, \ldots, X_n\}$ be a set of random variables. Then for any $S \subseteq \Omega$ we have that $H(S)$ is a submodular function, where $H(S)$ is the entropy of the set of random variables $S$, a fact known as Shannon's inequality.[6] Further inequalities for the entropy function are known to hold, see entropic vector.
Matroid rank functions
Let $\Omega = \{e_1, e_2, \ldots, e_n\}$ be the ground set on which a matroid is defined. Then the rank function of the matroid is a submodular function.[7]
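
The coverage functions above also make the diminishing-returns property (condition 1) concrete: the marginal gain from adding one more set never increases as the base collection grows. A minimal sketch, with illustrative weights and set names, assuming a weighted coverage function:

    # Ground elements carry non-negative weights; f(S) is the total weight covered by S.
    weights = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 0.5}
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}

    def weighted_coverage(S):
        covered = set().union(*(sets[i] for i in S)) if S else set()
        return sum(weights[e] for e in covered)

    small, large = {1}, {1, 2}                      # small is a subset of large
    gain_small = weighted_coverage(small | {3}) - weighted_coverage(small)
    gain_large = weighted_coverage(large | {3}) - weighted_coverage(large)
    print(gain_small, gain_large, gain_small >= gain_large)   # 3.5 0.5 True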

Non-monotone

A submodular function which is not monotone is called non-monotone.

Symmetric

A non-monotone submodular function $f$ is called symmetric if for every $S \subseteq \Omega$ we have that $f(S) = f(\Omega \setminus S)$. Examples of symmetric non-monotone submodular functions include:

Graph cuts
Let $\Omega = \{v_1, v_2, \ldots, v_n\}$ be the vertices of a graph. For any set of vertices $S \subseteq \Omega$ let $f(S)$ denote the number of edges $e = (u, v)$ such that $u \in S$ and $v \in \Omega \setminus S$. This can be generalized by adding non-negative weights to the edges (see the sketch after this list).
Mutual information
Let $\Omega = \{X_1, X_2, \ldots, X_n\}$ be a set of random variables. Then for any $S \subseteq \Omega$ we have that $f(S) = I(S; \Omega \setminus S)$ is a submodular function, where $I(S; \Omega \setminus S)$ is the mutual information.
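
A minimal sketch of the cut function on a small undirected graph (the graph and the names are illustrative), showing both symmetry and non-monotonicity:

    edges = [("u", "v"), ("v", "w"), ("w", "x"), ("u", "x")]   # a 4-cycle
    vertices = {"u", "v", "w", "x"}

    def cut(S):
        """Number of edges with exactly one endpoint in S."""
        return sum(1 for a, b in edges if (a in S) != (b in S))

    print(cut({"u"}), cut(vertices - {"u"}))       # 2 2  (symmetric)
    print(cut(set()), cut({"u"}), cut(vertices))   # 0 2 0  (not monotone)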

Asymmetric

A non-monotone submodular function which is not symmetric is called asymmetric.

Directed cuts
Let $\Omega = \{v_1, v_2, \ldots, v_n\}$ be the vertices of a directed graph. For any set of vertices $S \subseteq \Omega$ let $f(S)$ denote the number of edges $e = (u, v)$ such that $u \in S$ and $v \in \Omega \setminus S$. This can be generalized by adding non-negative weights to the directed edges.

Continuous extensions

Lovász extension

This extension is named after mathematician László Lovász. Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the Lovász extension is defined as $f^L(\mathbf{x}) = \mathbb{E}\bigl(f(\{i : x_i \geq \lambda\})\bigr)$ where the expectation is over $\lambda$ chosen from the uniform distribution on the interval $[0, 1]$. The Lovász extension is a convex function if and only if $f$ is a submodular function.
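
Because the thresholded set $\{i : x_i \geq \lambda\}$ only changes at the coordinates of $\mathbf{x}$, the expectation defining the Lovász extension reduces to a finite weighted sum and can be computed exactly. A minimal sketch, assuming the set function is given as a Python callable (all names are illustrative):

    def lovasz_extension(f, x):
        """x: dict element -> value in [0, 1]; returns E_lambda[f({i : x_i >= lambda})]."""
        order = sorted(x, key=x.get, reverse=True)          # elements by decreasing x_i
        thresholds = [1.0] + [x[e] for e in order] + [0.0]  # endpoints of the lambda-intervals
        value, active = 0.0, set()
        for k in range(len(order) + 1):
            width = thresholds[k] - thresholds[k + 1]       # length of the k-th interval
            value += width * f(active)                      # thresholded set is constant on it
            if k < len(order):
                active = active | {order[k]}
        return value

    sets = {1: {"a", "b"}, 2: {"b", "c"}}
    f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
    print(lovasz_extension(f, {1: 0.5, 2: 0.25}))   # 0.25*f({1}) + 0.25*f({1,2}) = 1.25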

Multilinear extension

Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the multilinear extension is defined as $F(\mathbf{x}) = \sum_{S \subseteq \Omega} f(S) \prod_{i \in S} x_i \prod_{i \notin S} (1 - x_i)$. Intuitively, $F(\mathbf{x})$ is the expected value of $f$ on a random set that contains each element $i$ independently with probability $x_i$.
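
A minimal sketch that evaluates the multilinear extension exactly by summing over all subsets; this is only feasible for a small ground set (in practice $F$ is usually estimated by sampling), and the names are illustrative:

    from itertools import combinations

    def multilinear_extension(f, x):
        """x: dict element -> probability in [0, 1]; returns F(x)."""
        items = list(x)
        total = 0.0
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                S = set(subset)
                prob = 1.0
                for i in items:                   # prod of x_i over S times prod of (1 - x_i) outside S
                    prob *= x[i] if i in S else 1.0 - x[i]
                total += prob * f(S)
        return total

    f = lambda S: min(len(S), 2)                  # a simple monotone submodular function
    print(multilinear_extension(f, {"a": 0.5, "b": 0.5, "c": 0.5}))   # 1.375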

Convex closure

Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the convex closure is defined as $f^-(\mathbf{x}) = \min\left(\sum_S \alpha_S f(S) : \sum_S \alpha_S 1_S = \mathbf{x}, \sum_S \alpha_S = 1, \alpha_S \geq 0\right)$. The convex closure of any set function is convex over $[0, 1]^n$. It can be shown that $f^L(\mathbf{x}) = f^-(\mathbf{x})$ for submodular functions.

Concave closure

Consider any vector $\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$ such that each $0 \leq x_i \leq 1$. Then the concave closure is defined as $f^+(\mathbf{x}) = \max\left(\sum_S \alpha_S f(S) : \sum_S \alpha_S 1_S = \mathbf{x}, \sum_S \alpha_S = 1, \alpha_S \geq 0\right)$.
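
For a tiny ground set the concave closure can be computed directly as a linear program with one variable $\alpha_S$ per subset; the convex closure is the same program with the maximization replaced by a minimization. A minimal sketch, assuming SciPy's linear-programming routine is available (the names are illustrative):

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    def concave_closure(f, x):
        """x: dict element -> value in [0, 1]; returns f^+(x) by solving the defining LP."""
        items = list(x)
        subsets = [set(c) for r in range(len(items) + 1)
                   for c in combinations(items, r)]
        c = np.array([-f(S) for S in subsets])        # linprog minimizes, so negate f
        # Constraints: sum_S alpha_S * 1_S = x  and  sum_S alpha_S = 1, with alpha_S >= 0.
        A_eq = np.vstack([[1.0 if e in S else 0.0 for S in subsets] for e in items]
                         + [[1.0] * len(subsets)])
        b_eq = np.array([x[e] for e in items] + [1.0])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        return -res.fun

    f = lambda S: min(len(S), 2)
    print(concave_closure(f, {"a": 0.5, "b": 0.5, "c": 0.5}))   # 1.5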

Properties

  1. The class of submodular functions is closed under non-negative linear combinations. Consider any submodular functions $f_1, f_2, \ldots, f_k$ and non-negative numbers $\alpha_1, \alpha_2, \ldots, \alpha_k$. Then the function $g$ defined by $g(S) = \sum_{i=1}^{k} \alpha_i f_i(S)$ is submodular.
  2. For any submodular function $f$, the function defined by $g(S) = f(\Omega \setminus S)$ is submodular.
  3. The function $g(S) = \min(f(S), c)$, where $c$ is a real number, is submodular whenever $f$ is monotone submodular. More generally, $g(S) = h(f(S))$ is submodular, for any non-decreasing concave function $h$.
  4. Consider a random process where a set $T$ is chosen with each element in $\Omega$ being included in $T$ independently with probability $p$. Then the following inequality is true: $\mathbb{E}[f(T)] \geq p f(\Omega) + (1 - p) f(\varnothing)$, where $\varnothing$ is the empty set. More generally consider the following random process where a set $S$ is constructed as follows. For each of $1 \leq i \leq l$, $A_i \subseteq \Omega$, construct $S_i$ by including each element in $A_i$ independently into $S_i$ with probability $p_i$. Furthermore let $S = S_1 \cup S_2 \cup \cdots \cup S_l$. Then the following inequality is true: $\mathbb{E}[f(S)] \geq \sum_{R \subseteq [l]} \prod_{i \in R} p_i \prod_{i \notin R} (1 - p_i) \, f\left(\bigcup_{i \in R} A_i\right)$.[citation needed]

Optimization problems

Submodular functions have properties which are very similar to those of convex and concave functions. For this reason, many optimization problems can be described as the problem of maximizing or minimizing a submodular function subject to constraints.

Submodular set function minimization

The simplest minimization problem is to find a set which minimizes a submodular function; this is the unconstrained problem. This problem is computable in (strongly)[8][9] polynomial time.[10][11] Computing the minimum cut in a graph is a special case of this general minimization problem. However, adding even a simple constraint such as a cardinality lower bound makes the minimization problem NP-hard, with polynomial factor lower bounds on the approximation factor.[12][13]

Submodular set function maximization

Unlike the case of minimization, maximizing a submodular function is NP-hard even in the unconstrained setting: for instance, max cut is a special case even when the function is required only to be non-negative, and the unconstrained problem can be shown to be inapproximable if the function is allowed to take negative values. Theory and enumeration algorithms for finding local and global maxima (minima) of submodular (supermodular) functions are given by Goldengorin (European Journal of Operational Research 198(1):102–112, doi:10.1016/j.ejor.2008.08.022). There has been extensive work on constrained submodular function maximization when the functions are non-negative. Typically, the approximation algorithms for these problems are based on either greedy algorithms or local search algorithms. The problem of maximizing a non-negative symmetric submodular function admits a 1/2 approximation algorithm;[14] computing the maximum cut of a graph is a special case of this problem. The more general problem of maximizing a non-negative submodular function also admits a 1/2 approximation algorithm.[15] The problem of maximizing a monotone submodular function subject to a cardinality constraint admits a $1 - 1/e$ approximation algorithm (see the greedy sketch below).[16][page needed][17] The maximum coverage problem is a special case of this problem. The more general problem of maximizing a monotone submodular function subject to a matroid constraint also admits a $1 - 1/e$ approximation algorithm.[18][19][20] Many of these algorithms can be unified within a semi-differential based framework of algorithms.[13]
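
A minimal sketch of the greedy algorithm for the cardinality-constrained monotone case (the algorithm of Nemhauser, Wolsey and Fisher cited above, which attains the $1 - 1/e$ guarantee); the example instance and names are illustrative:

    def greedy_max(f, ground, k):
        """Repeatedly add the element with the largest marginal gain, k times."""
        selected = set()
        for _ in range(k):
            best_gain, best_elem = None, None
            for e in ground - selected:
                gain = f(selected | {e}) - f(selected)
                if best_gain is None or gain > best_gain:
                    best_gain, best_elem = gain, e
            selected.add(best_elem)
        return selected

    # Example: maximum coverage with k = 2, a special case mentioned above.
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a", "e"}}
    coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
    print(greedy_max(coverage, set(sets), 2))   # {1, 3}, covering all 5 elements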

Related optimization problems

Apart from submodular minimization and maximization, another natural problem is Difference of Submodular Optimization.[21][22] Unfortunately, this problem is not only NP-hard, but also inapproximable.[22] A related optimization problem is to minimize or maximize a submodular function, subject to a submodular level set constraint (also called submodular optimization subject to submodular cover or submodular knapsack constraints). This problem admits bounded approximation guarantees.[23] Another optimization problem involves partitioning data based on a submodular function, so as to maximize the average welfare. This problem is called the submodular welfare problem.[24]

Applications

Submodular functions naturally occur in several real-world applications, in economics, game theory, machine learning and computer vision. Owing to the diminishing returns property, submodular functions naturally model costs of items, since there is often a larger discount as one buys more items. Submodular functions model notions of complexity, similarity and cooperation when they appear in minimization problems. In maximization problems, on the other hand, they model notions of diversity, information and coverage. For more information on applications of submodularity, particularly in machine learning, see [4][25][26].

Citations

  1. ^ H. Lin and J. Bilmes, A Class of Submodular Functions for Document Summarization, ACL-2011.
  2. ^ S. Tschiatschek, R. Iyer, H. Wei and J. Bilmes, Learning Mixtures of Submodular Functions for Image Collection Summarization, NIPS-2014.
  3. ^ A. Krause and C. Guestrin, Near-optimal nonmyopic value of information in graphical models, UAI-2005.
  4. ^ A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008
  5. ^ (Schrijver 2003, §44, p. 766)
  6. ^ "Information Processing and Learning" (PDF). cmu.
  7. ^ Fujishige (2005) p.22
  8. ^ Iwata, S.; Fleischer, L.; Fujishige, S. (2001). "A combinatorial strongly polynomial algorithm for minimizing submodular functions". J. ACM. 48 (4): 761–777. doi:10.1145/502090.502096. S2CID 888513.
  9. ^ Schrijver, A. (2000). "A combinatorial algorithm minimizing submodular functions in strongly polynomial time". J. Combin. Theory Ser. B. 80 (2): 346–355. doi:10.1006/jctb.2000.1989.
  10. ^ Grötschel, M.; Lovasz, L.; Schrijver, A. (1981). "The ellipsoid method and its consequences in combinatorial optimization". Combinatorica. 1 (2): 169–197. doi:10.1007/BF02579273. hdl:10068/182482. S2CID 43787103.
  11. ^ Cunningham, W. H. (1985). "On submodular function minimization". Combinatorica. 5 (3): 185–192. doi:10.1007/BF02579361. S2CID 33192360.
  12. ^ Z. Svitkina and L. Fleischer, Submodular approximation: Sampling-based algorithms and lower bounds, SIAM Journal on Computing (2011).
  13. ^ R. Iyer, S. Jegelka and J. Bilmes, Fast Semidifferential based submodular function optimization, Proc. ICML (2013).
  14. ^ U. Feige, V. Mirrokni and J. Vondrák, Maximizing non-monotone submodular functions, Proc. of 48th FOCS (2007), pp. 461–471.
  15. ^ N. Buchbinder, M. Feldman, J. Naor and R. Schwartz, A tight linear time (1/2)-approximation for unconstrained submodular maximization, Proc. of 53rd FOCS (2012), pp. 649-658.
  16. ^ G. L. Nemhauser, L. A. Wolsey and M. L. Fisher, An analysis of approximations for maximizing submodular set functions I, Mathematical Programming 14 (1978), 265–294.
  17. ^ Williamson, David P. "Bridging Continuous and Discrete Optimization: Lecture 23" (PDF).
  18. ^ G. Calinescu, C. Chekuri, M. Pál and J. Vondrák, Maximizing a submodular set function subject to a matroid constraint, SIAM J. Comp. 40:6 (2011), 1740-1766.
  19. ^ M. Feldman, J. Naor and R. Schwartz, A unified continuous greedy algorithm for submodular maximization, Proc. of 52nd FOCS (2011).
  20. ^ Y. Filmus, J. Ward, A tight combinatorial algorithm for submodular maximization subject to a matroid constraint, Proc. of 53rd FOCS (2012), pp. 659-668.
  21. ^ M. Narasimhan and J. Bilmes, A submodular-supermodular procedure with applications to discriminative structure learning, In Proc. UAI (2005).
  22. ^ R. Iyer and J. Bilmes, Algorithms for Approximate Minimization of the Difference between Submodular Functions, In Proc. UAI (2012).
  23. ^ R. Iyer and J. Bilmes, Submodular Optimization Subject to Submodular Cover and Submodular Knapsack Constraints, In Advances of NIPS (2013).
  24. ^ J. Vondrák, Optimal approximation for the submodular welfare problem in the value oracle model, Proc. of STOC (2008), pp. 461–471.
  25. ^ http://submodularity.org/.
  26. ^ J. Bilmes, Submodularity in Machine Learning Applications, Tutorial at AAAI-2015.

References

  • Schrijver, Alexander (2003), Combinatorial Optimization, Springer, ISBN 3-540-44389-4
  • Lee, Jon (2004), A First Course in Combinatorial Optimization, Cambridge University Press, ISBN 0-521-01012-8
  • Fujishige, Satoru (2005), Submodular Functions and Optimization, Elsevier, ISBN 0-444-52086-4
  • Narayanan, H. (1997), Submodular Functions and Electrical Networks, ISBN 0-444-82523-1
  • Oxley, James G. (1992), Matroid theory, Oxford Science Publications, Oxford: Oxford University Press, ISBN 0-19-853563-5, Zbl 0784.05002
