Relaxation time scale of the Monte Carlo dynamics


Worst-case complexity

Time complexity

From an algorithmic point of view, one would like to find an algorithm that solves a CSP (i.e. solves all its possible instances) in a reasonable amount of time. Intuitively, an algorithm requires more time to provide an answer for larger instances. There are several ways to define the size of an instance. A natural definition is to take N, the number of variables, but one could also choose N + M, or |E|, the total number of edges in the graphical representation of the CSP. Most of the time one works with instances for which M = αN, with α a constant independent of N, therefore all these possible definitions are polynomially related. In computational complexity theory [1] it is therefore equivalent to work with any of them, and we will fix the size of the problem to be N.
One defines the time complexity function T(N) of an algorithm as the largest amount of time needed for this algorithm to solve an instance of given size N. Note that this definition depends on the computer used to implement the algorithm. We will not enter into the details, and just state that one uses a ‘realistic’ computer model. Realistic means that such a model can run at most one (or a small finite number) of operations per unit time, where operations are for instance comparisons, sums and multiplications. Some examples of realistic models are the one-tape (or multi-tape) Turing machines, and the random-access machines (RAMs). Algorithms can be split into classes according to the behavior of their time complexity function.
1. A polynomial time algorithm has a time complexity function T(N) = O(p(N)) for some polynomial function p.
2. An exponential time algorithm has a time complexity function that is bounded from below by 2^{bN} for some constant b > 0.
This distinction is useful because exponential time algorithms have a time complexity that explodes when the size increases, therefore they become inefficient for large instances. One says that a problem is intractable when it is so hard that no polynomial algorithm can solve it. Note that the time complexity function is a worst-case measure, in the sense that it corresponds to the time needed for the algorithm to solve the hardest instance of a given size. It could happen that the majority of the other instances require much less time, so that the algorithm is ‘efficient’ on a large proportion of the instances it encounters. As an example, the k-SAT problem is solved by an exponential time algorithm called DPLL, which we will describe below. As we shall see in the following, experimental observations and rigorous results show however that for instances such that M = αN, with α small enough, this algorithm finds solutions very quickly for the vast majority of instances, even when N gets large.
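To fix ideas, here is a minimal Python sketch (ours, not part of the original text) of an exponential time algorithm for SAT: the brute-force enumeration of all 2^N assignments. Clauses are encoded as lists of signed integers, a hypothetical convention (literal +i means xi = 1, literal -i means xi = 0) that is reused in the sketches below.

from itertools import product

def brute_force_sat(clauses, n_vars):
    # Exhaustive search over all 2^N assignments: the worst-case time
    # complexity T(N) is therefore at least exponential in N.
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return assignment   # a satisfying assignment was found
    return None                 # every assignment violates some clause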

Complexity classes

The theory of computational complexity classifies the difficulty of decision problems according to the existence of an algorithm that solves the problem in polynomial time. This classification uses the notion of polynomial reduction. One says that a problem π1 can be reduced to another problem π2 when there is a function that maps any instance of π1 into an instance of π2 while preserving the answer. To be more precise, let Dπ be the set of instances of problem π, and let Yπ ⊆ Dπ be the subset of instances for which the answer to the decision problem is ‘yes’. Then a reduction is a function f : Dπ1 → Dπ2 such that for all I1 ∈ Dπ1, one has I1 ∈ Yπ1 if and only if f(I1) ∈ Yπ2. A polynomial time reduction is a reduction f that is computable by an algorithm running in time polynomial in the size of the instance of π1. One requires additionally that the size of f(I1) is polynomial in the size of I1. The relation ‘π1 can be reduced polynomially to π2’ is denoted π1 ∝ π2. This relation is transitive:
Lemma 1 (Transitivity) If π1 ∝ π2 and π2 ∝ π3, then π1 ∝ π3.
One defines the two following classes of problems:
1. The polynomial class P is the set of problems that can be solved by a polynomial time algorithm.
2. The non-deterministic polynomial class NP is the set of problems that are verifiable in polynomial time. This means that when one is provided with a certificate allowing to check a ‘yes’ answer to the problem, the verification itself can be done in polynomial time.
The term non-deterministic comes from an alternative definition of the NP class, as the set of problems that can be solved by a polynomial time non-deterministic algorithm. A non-deterministic algorithm is composed of two stages. In the first stage, a certificate is proposed, and in the second stage, the certificate is verified. A non-deterministic algorithm is said to operate in polynomial time if the second stage is done in polynomial time, disregarding the first stage that might have required more time. Roughly speaking, a non-deterministic polynomial algorithm can pursue alternate paths or branches in parallel, and says ‘yes’ if any of these branches says ‘yes’.
Note that if one has a polynomial reduction from π1 to π2, then π1 cannot be harder than π2. Suppose indeed that one has a polynomial algorithm that solves π2. One can construct a polynomial algorithm to solve π1 as follows: given an instance I1 ∈ Dπ1, this algorithm applies the transformation f to get an instance I2 = f(I1) of π2, and applies the polynomial algorithm to solve the instance I2. Therefore:
Lemma 2 If π1 ∝ π2, then π2 ∈ P implies π1 ∈ P (and equivalently π1 ∉ P implies π2 ∉ P).
Note that a problem π in P is automatically in NP: if there is a polynomial time algorithm that solves π, this algorithm can be converted into a checking algorithm that ignores the certificate that is provided, and simply returns its answer. Therefore P ⊆ NP.
A natural question is then to ask whether P = NP, or whether there exist problems in NP that are not in P. This is an open question, but it is widely conjectured that P ≠ NP. This conjecture is supported by the existence of another class gathering the hardest problems in NP. It is called the NP-complete class (NP-c). A problem π is in NP-c if it satisfies the two following conditions:
1. π ∈ NP.
2. Any problem π′ in NP can be polynomially reduced to it: π′ ∝ π.
If one problem in NP-c could be solved in polynomial time, then all the problems in NP could be solved in polynomial time thanks to the polynomial reductions, and one would have P = NP. On the other hand, if one problem in NP is intractable, then all the problems in NP-c are intractable. Some problems in NP-c, such as the satisfiability problem, have been studied for decades without any proof that they belong to P, which is why one believes that P ≠ NP. Figure 1.2 illustrates the conjectured structure of the sets NP, P and NP-c.
From the definition of the NP-c class, proving that a problem π ∈ NP belongs to NP-c requires proving that any problem π′ ∈ NP can be polynomially reduced to π. This seems difficult to achieve, but once one has proved that there is at least one problem in NP-c, the proof can be simplified thanks to the following lemma:
Lemma 3 Let π1 ∈ NP-c. If π2 ∈ NP and π1 ∝ π2, then π2 ∈ NP-c.
Indeed, since π1 ∈ NP-c, for any problem π′ ∈ NP one has π′ ∝ π1, and by transitivity π′ ∝ π2. The first problem that was proved to be NP-complete is the satisfiability problem, by Stephen Cook in 1971 [41]:
Theorem 1 (Cook, 1971) The satisfiability problem is NP-complete.
We will not give the proof, but solely remark that the SAT problem has a very universal structure, which allows one to rewrite any decision problem in NP as a SAT problem. Once we know that the SAT problem is in NP-c, one can apply Lemma 3 to prove that another problem π is NP-c. The method follows these steps:
1. Show that π ∈ NP.
2. Select a known problem π′ ∈ NP-c.
3. Construct a reduction f from π′ to π.
4. Prove that f is a polynomial reduction.
Using this method, hundreds of problems have been shown to belong to NP-c.
Table 1.1 gathers the problems introduced above.
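As an illustration of step 3, here is a sketch (our own example, not taken from the text) of a polynomial reduction from graph q-coloring, a classic NP-complete problem, to SAT, using the clause encoding introduced above. The Boolean variable var(v, c) means ‘vertex v receives color c’.

def coloring_to_sat(edges, n_vertices, q=3):
    # Maps a q-coloring instance to a CNF formula that is satisfiable
    # if and only if the graph is q-colorable.
    def var(v, c):
        return v * q + c + 1
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, c) for c in range(q)])        # at least one color
        for c1 in range(q):
            for c2 in range(c1 + 1, q):
                clauses.append([-var(v, c1), -var(v, c2)])   # at most one color
    for (u, v) in edges:
        for c in range(q):
            clauses.append([-var(u, c), -var(v, c)])         # endpoints differ
    return clauses

The number of clauses is O(n q^2 + |E| q), polynomial in the size of the instance, so this is indeed a polynomial reduction.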

An algorithm for the SAT problem

The Unit Clause Propagation procedure

We have seen that the 2-SAT problem is in P. We present here a polynomial algorithm that solves it. The algorithm is a sequential assignment procedure, which means that at each time step a variable is assigned a given value. The algorithm ends either when all the variables are assigned, in which case the assignment obtained is SAT, or when it has proven that the formula is UNSAT.
Each time a variable is assigned, one can simplify the initial CNF formula according to the following reduction procedure. Suppose that one has assigned the variable i to xi = 1 (the case xi = 0 is symmetric). Each clause containing the literal xi is satisfied by this assignment, therefore it can be removed from the formula. In each clause containing the opposite literal x̄i, one removes the literal x̄i, since it cannot satisfy the clause; the length of these clauses is then reduced by 1. One denotes F|{xi = 1} the simplified formula obtained with this procedure. This reduction procedure might produce a 0-clause, namely if the formula contains a unit clause (a 1-clause) of the form c = (x̄i), which is violated by the choice xi = 1. One calls this a contradiction; it means that the simplified formula F|{xi = 1} is UNSAT, therefore to construct a SAT assignment for F the choice xi = 0 is necessary.
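As an illustration, here is a minimal Python sketch (ours) of this reduction procedure, with the signed-integer encoding of literals used above; the function name simplify is our choice.

def simplify(clauses, i, value):
    # Returns F|{x_i = value}: clauses satisfied by the assignment are
    # dropped, the opposite literal is removed from the remaining clauses.
    lit = i if value else -i
    reduced = []
    for clause in clauses:
        if lit in clause:
            continue                                      # clause satisfied: drop it
        reduced.append([l for l in clause if l != -lit])  # may create a 0-clause
    return reduced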
If at some step the formula obtained contains a unit clause, one is forced to satisfy it, and the simplified formula thus obtained might contain new unit clauses that one has to satisfy in turn. This sequence of forced steps is called Unit Clause Propagation (UCP), and is performed by the recursive UCP procedure (see Algorithm 1). This procedure takes as input a CNF formula F (possibly containing unit clauses), a partial assignment of variables A, and a set of variables V. In the first call we set F to be the initial CNF formula, A = ∅, and V = {1, …, N}.
Algorithm 1 UCP(F, A, V)
if there is a 0-clause in F then
    return UNSAT
end if
while there is a unit clause in F do
    Pick a unit clause in F, say c = (li), and assign xi so as to satisfy it
    Add this assignment to A, remove i from V, and simplify F accordingly
    if there is a 0-clause in F then
        return UNSAT
    end if
end while
return A
The UCP procedure (Algorithm 1) returns all the assignments that were forced by the presence of unit clauses. There are three possible outputs:
1. The output is an assignment of all the variables in V: then F is SAT and the assignment produced is a solution of F.
2. The output is a partial assignment. This partial assignment can be extended to a complete SAT assignment if and only if the input formula F is SAT. The simplified formula obtained does not contain unit clauses.
3. The output is UNSAT: then the input formula F is UNSAT.
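A minimal Python sketch of the UCP procedure, assuming the simplify function sketched earlier and representing the partial assignment A as a dictionary, could look as follows.

def ucp(clauses, assignment):
    # Propagates unit clauses until none is left. Returns None on a
    # contradiction (UNSAT), otherwise the simplified formula together
    # with the extended partial assignment.
    while True:
        if any(len(c) == 0 for c in clauses):
            return None                          # 0-clause: contradiction
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses, assignment           # no unit clause left
        lit = unit[0]
        assignment[abs(lit)] = (lit > 0)         # forced step
        clauses = simplify(clauses, abs(lit), lit > 0)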
The algorithm to solve 2-SAT uses the UCP procedure. It works as follows: given a 2-SAT formula F over a set of variables V, choose a variable i ∈ V and fix xi = 1. Then call the UCP procedure UCP(F|{xi = 1}, {xi = 1}, V \ i). If the output has assigned all the variables, declare F satisfiable, and return the SAT assignment found. If it is only a partial assignment, keep it in memory, and let F′, V′ be the simplified formula and the set of not-yet-assigned variables. Choose a new variable i′ in V′, and restart the UCP procedure with UCP(F′|{xi′ = 1}, {xi′ = 1}, V′ \ i′).
If the output is UNSAT, it means that the initial choice has to be changed. Thus set xi = 0 and restart the UCP procedure with UCP(F|{xi = 0}, {xi = 0}, V \ i). If once again the procedure outputs UNSAT, then declare that the initial formula F is UNSAT. If not, one keeps the partial assignment found and repeats the procedure. The step in which one chooses arbitrarily the value of a variable in a 2-clause is called a free step. By contrast, the assignment of a variable in a unit clause is called a forced step. The step in which one goes back to the arbitrary choice made at the free step (xi = 1) to change it into xi = 0 is called a backtracking step.
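Putting the pieces together, a sketch of the 2-SAT algorithm described above (relying on the simplify and ucp sketches, and valid only thanks to the one-level backtracking property of 2-SAT) could read:

def solve_2sat(clauses, n_vars):
    # One free step per round; at most one backtracking step per free step.
    assignment = {}
    unassigned = list(range(1, n_vars + 1))
    while unassigned:
        i = unassigned[0]
        result = ucp(simplify(clauses, i, True), {**assignment, i: True})
        if result is None:                       # backtracking step
            result = ucp(simplify(clauses, i, False), {**assignment, i: False})
            if result is None:
                return None                      # the formula is UNSAT
        clauses, assignment = result
        unassigned = [v for v in unassigned if v not in assignment]
    return assignment                            # a SAT assignment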
One can measure the complexity of this algorithm by the number of variable-fixing operations. For a 2-SAT formula, S. Cook showed that this algorithm has a polynomial complexity [41]. Note however that the algorithm presented above does not provide a method for solving the optimization problem MAX-2-SAT. In fact this problem has been shown to be NP-complete by M. R. Garey, D. S. Johnson and L. Stockmeyer in 1976 [42].

The DPLL algorithm for the SAT problem

The above algorithm can in fact be slightly modified to work on CNF formulas containing clauses of arbitrary length. In 2-SAT formulas, since any free choice produces unit clauses, one only needs to backtrack on the last variable assigned during a free step. All the other variables fixed at previous free steps can stay fixed, since they belong to a partial assignment compatible with at least one solution (if one exists). When there are clauses of length greater than 2 instead, one might have to do several free steps before having the possibility to apply the UCP procedure. If these choices lead to a contradiction, the backtracking then has to explore all the possible choices for these free variables before concluding that the formula is UNSAT. The number of backtracking steps might explode, leading to an exponential time complexity. This algorithm is called the Davis-Putnam-Logemann-Loveland (DPLL) algorithm, after its authors, and was introduced in 1962 [43]. It is described in Algorithm 2 (see [3] p. 726).
Algorithm 2 DPLL (INPUT: a SAT formula F, OUTPUT: is F satisfiable?)
If F is empty then return SAT
If F contains an empty clause then return UNSAT
Select an unset variable xi
If DPLL(F|{xi = 1}) = SAT then return SAT
If DPLL(F|{xi = 0}) = SAT then return SAT
Return UNSAT
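A recursive Python sketch of Algorithm 2, reusing the simplify function introduced earlier (unit clause propagation could be added before the branching step to recover the full DPLL behaviour), is the following.

def dpll(clauses):
    # Returns True if the formula is satisfiable, False otherwise.
    if not clauses:
        return True                                # empty formula: SAT
    if any(len(c) == 0 for c in clauses):
        return False                               # empty clause: UNSAT
    i = abs(clauses[0][0])                         # select an unset variable
    return (dpll(simplify(clauses, i, True))       # branch x_i = 1
            or dpll(simplify(clauses, i, False)))  # then x_i = 0 if needed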
The iterations of DPLL can be represented by a decision tree, as shown in figure 1.3 (the example is inspired by [44]).
The algorithm starts at the root of the tree, with the initial formula F containing all the variables to be assigned. A first variable is chosen and assigned a value (x1 = 1 in the example). The left node of the first generation represents the simplified formula obtained with this choice. Since no unit clause is produced, one needs to make another arbitrary choice: x2 = 1 in the example. If there is a unit clause, the algorithm fixes the value of a variable appearing in one of them. When the algorithm finds a contradiction, it backtracks to the last variable assigned during a free step, and restarts the search from this node.
From this representation, one can measure the running time of DPLL on the formula F by counting the number of nodes of the tree. It is intuitively clear why DPLL has an exponential complexity on SAT formulas with clauses of length greater than 2. If most of the variables have been fixed during free steps, and all the free choices need to be backtracked, the number of steps is roughly the number of nodes of a binary tree of depth N, which is O(2^N). On a 2-SAT formula instead, each free choice is followed by a cascade of forced steps (possibly O(N) of them). Even if all the variables have been fixed during a free step, and all the free steps have been backtracked, the complexity is still polynomially bounded by N^2.
In Algorithm 2, we have not specified how to choose the variable during a free step. One can use heuristics that aim at improving the performance of DPLL. For instance, one can decide to pick a variable that will generate many unit clauses, or a variable that will satisfy the largest number of clauses. Since there is no universal criterion to decide which heuristic to use, one has to experiment with them on the formulas one wants to solve.
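As an example, a simple free-step heuristic (one possible choice among many, not prescribed by the text) is to pick the variable that appears in the largest number of clauses, so that the free step satisfies or shortens as many clauses as possible.

from collections import Counter

def most_frequent_variable(clauses):
    # Counts occurrences of each variable (ignoring signs) and returns
    # the most frequent one.
    counts = Counter(abs(lit) for clause in clauses for lit in clause)
    return counts.most_common(1)[0][0]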


Performances of the algorithms and random CSP ensembles

Complete and incomplete algorithms

An algorithm that solves a CSP, i.e. that provides an answer to all the possible instances of the CSP, is called a complete algorithm. DPLL is an example of a complete algorithm for the SAT problem. By contrast, an incomplete algorithm is not guaranteed to provide an answer to every possible instance. In a decision problem, proving that an instance has a solution is often easier than proving that there is no solution. In the SAT problem for instance, a certificate for a ‘yes’ answer (to the question ‘Is there a solution?’) is provided by a solution to the problem, that is an assignment of the N variables, therefore having size N. A certificate for the answer ‘no’ instead might have an exponential length, if for instance one tries to exhibit all the possible assignments, showing that they all violate the formula. Displaying such a certificate by itself has an exponential time complexity. An incomplete algorithm only tries to find a solution, and answers ‘I don't know’ whenever it is unable to provide one or to prove that the problem has no solution.
We will present in the following chapters several incomplete algorithms for CSPs. Although they do not solve all the possible instances of the problem, it is observed experimentally that they are more efficient than the complete algorithms on typical instances. To characterize their performances, one needs to specify what is meant by typical instances. This can be done by introducing random ensembles of instances. One then studies the performances of algorithms on instances drawn from such an ensemble.

Random CSP ensembles

We have seen that an instance of a CSP can be represented by a hypergraph. Therefore one can use random graph ensembles to define random CSP ensembles. The Erdős–Rényi (ER) ensemble GN(k, M) is an example of a random hypergraph ensemble. An instance of GN(k, M) is drawn by choosing independently for each clause a k-tuple of variables uniformly at random among the (N choose k) possible ones. Note that while the hyperedges a ∈ C have a fixed degree k, the degree of the vertices i ∈ V is not fixed.
We will also use another random hypergraph ensemble, called the k-uniform (l + 1)-regular random hypergraph ensemble (the choice l + 1 is made here for convenience when using the cavity method, see chapter 7). In this ensemble the hyperedges have a fixed degree k and the vertices have a fixed degree l + 1. All the hypergraphs with this property are equiprobable in this ensemble.
To define a random ensemble for the SAT problem, we also need to specify how to draw the signs Jia. The random k-SAT ensemble is one of the most studied random ensembles for the k-SAT problem. An instance is obtained by drawing a hypergraph from GN(k, M), then choosing independently, for each variable in each k-tuple, a sign Jia = ±1 with probability 1/2 each.
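A sketch of a generator of random k-SAT instances in this ensemble, with the signed-integer clause encoding used in the previous sketches, could be:

import random

def random_ksat_instance(N, M, k):
    # Hypergraph from G_N(k, M): one uniformly random k-subset of variables
    # per clause; each literal then receives an independent uniform sign.
    clauses = []
    for _ in range(M):
        variables = random.sample(range(1, N + 1), k)
        clauses.append([v if random.random() < 0.5 else -v for v in variables])
    return clauses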
It is useful to introduce the density of constraints α = M/N, and to work with the ensemble GN(k, αN). For the k-uniform (l + 1)-regular random hypergraph ensemble, since N and M must satisfy the relation N(l + 1) = Mk, the density of constraints is related to the degrees by l + 1 = αk. Intuitively, when the density of constraints increases, the number of solutions should shrink, since it is harder to satisfy an overconstrained instance. At high density, we will see that most of the instances are in fact UNSAT.
We define the thermodynamic (large size) limit as N, M → ∞ with a fixed ratio α = M/N. In this limit, several properties of the random k-SAT ensemble concentrate around their mean value. This is called the self-averaging phenomenon. In particular, in this limit many random CSPs (including the random k-SAT ensemble) exhibit threshold phenomena: the probability of some property jumps abruptly from 1 to 0 when varying the control parameter α. We say that a property of the instances is typical when its probability goes to 1 in the thermodynamic limit.
The most prominent threshold is the satisfiability threshold αsat(k). Below αsat(k), typical instances are SAT, while above αsat(k), typical instances are UNSAT. In the satisfiable phase α < αsat(k), many other phase transitions concerning the properties of the set of solutions of a random instance are predicted by the so-called cavity method, which we shall describe in the next chapters.
Among them, the clustering transition describes a drastic change in the structure of the set of solutions in the space of configurations. Below the clustering threshold, the set of solutions is rather well connected: any solution can be reached from any other one through nearby intermediate solutions. Above the clustering threshold, the solution set splits into a large number of distinct groups of solutions, called clusters, which are internally well connected but well separated from one another.

Performances of DPLL on the random k-SAT ensemble

Our aim is to study and compare the performances of algorithms on a random ensemble of instances. We start with the experimental study of the DPLL algorithm on the random k-SAT ensemble.
For a fixed value of k, one can plot the fraction of UNSAT instances found by DPLL as a function of α for several sizes N. The result for k = 3 is shown in figure 1.4, taken from the study of S. Kirkpatrick and B. Selman in 1994 [45]. One can see that it is an increasing function of α that goes from 0 to 1. As N increases, the transition becomes sharper around a threshold value. This is numerical evidence of the satisfiability threshold.
One is also interested in the running time of DPLL on random instances of k-SAT. In figure 1.5, taken from the study of D. Mitchell, B. Selman and H. Levesque in 1992 [46], the median running time is plotted against α. One can see that the running time has a peak around the satisfiability threshold, and the height of this peak increases rapidly with N. At smaller and larger values of α instead, the running time is much smaller, and grows with N at a smaller rate. These observations indicate that the instances drawn from the random k-SAT ensemble are harder when α is close to αsat(k). In the region of smaller α, the instances are underconstrained and therefore easy to solve. In the region of larger α, the instances are so overconstrained that DPLL quickly finds a contradiction showing that the instance is UNSAT.
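A small-scale numerical experiment in the spirit of figure 1.4 can be sketched with the generator and the DPLL sketch above (feasible only for modest N, given the exponential worst-case running time of DPLL):

def unsat_fraction(N, alpha, k=3, samples=100):
    # Fraction of random instances of density alpha declared UNSAT by DPLL.
    M = int(alpha * N)
    n_unsat = sum(1 for _ in range(samples)
                  if not dpll(random_ksat_instance(N, M, k)))
    return n_unsat / samples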
In [47], P. Beame, R. Karp, T. Pitassi and M. Saks show that the size of a certificate for an unsatisfiable instance of random 3-SAT is with high probability (w.h.p.) bounded from above by 2^{cN/α}, with c some constant, where ‘with high probability’ means with probability going to one in the large size limit N, M → ∞ at fixed ratio α = M/N. This result confirms the decreasing complexity observed in the unsatisfiable phase when α increases above the satisfiability threshold αsat. In [48], A. Frieze and S. Suen show that in the satisfiable phase a modified version of DPLL without backtracking (with the unit clause branching rule) can find solutions efficiently for small enough densities: up to α ≤ 3.003 for k = 3, which is strictly smaller than the satisfiability threshold prediction αsat(k = 3) = 4.267 [7]. This confirms the experimental observation that DPLL works in polynomial time for small enough densities. In [49], S. Cocco and R. Monasson provide a theoretical estimation of the algorithmic threshold for DPLL on random 3-SAT, introducing the random (2 + p)-SAT ensemble to study the evolution of the formula under the sequential assignments, with p the fraction of 3-clauses in the formula. They predict that above the density α ≃ 3.003, the running time of DPLL with the unit clause branching rule becomes typically exponential.

The algorithmic barrier

The numerical study of the performances of DPLL on the random k-SAT ensemble indicates that in the region of α close to the satisfiability threshold the typical instances are hard to solve. In the next chapter we will introduce several incomplete algorithms that are designed to search for solutions, and we will compare their performances on the random ensembles. In particular, one is interested in determining the interval of α where it is possible to find an algorithm that is successful on typical instances. As we have seen, this interval has to be contained in the satisfiable region α < αsat(k), but one could ask whether there exists a range in which typical instances have solutions, but no algorithm is able to find one. More precisely, one is interested in the putative algorithmic barrier αalg(k), above which no algorithm is able to find a solution in polynomial time for typical instances, assuming P ≠ NP. Is it possible to show that αalg(k) coincides with αsat(k), by exhibiting an algorithm efficient in the entire satisfiable region? Or are there limitations that force the strict inequality αalg(k) < αsat(k)? The structure of the set of solutions of typical instances undergoes a series of phase transitions in the satisfiable phase, which are predicted by the cavity method. It is therefore interesting to know whether some of these phase transitions affect the performances of algorithms. For instance, algorithms such as Monte Carlo Markov Chains for the resolution of CSPs are affected by the clustering transition.

Table of contents:

Introduction 
1 Definitions 
1.1 Constraint Satisfaction Problems
1.1.1 Definitions
1.1.2 The satisfiability problem
1.1.3 A graphical representation
1.1.4 More examples of CSP
1.1.5 Other combinatorial optimization problems
1.2 Worst-case complexity
1.2.1 Time complexity
1.2.2 Complexity classes
1.3 An algorithm for the SAT problem
1.3.1 The Unit Clause Propagation procedure
1.3.2 The DPLL algorithm for the SAT problem
1.4 Performances of the algorithms and random CSP ensembles
1.4.1 Complete and incomplete algorithms
1.4.2 Random CSP ensembles
1.4.3 Performances of DPLL on the random k-SAT ensemble
1.4.4 The algorithmic barrier
1.4.5 Some properties of the random hypergraph ensembles
1.5 Statistical physics and constraint satisfaction problems
1.5.1 Boltzmann probability measure
1.5.2 Statistical physics models
1.5.3 Graphical models
2 Phase transitions in random CSPs
2.1 The satisfiability transition
2.1.1 Upper bounds
2.1.2 Lower bounds
2.2 Quenched and annealed averages
2.3 Overview of the phase transitions in random CSPs
2.3.1 The clustering transition
2.3.2 Cluster decomposition
2.3.3 Complexity and the condensation transition
2.3.4 Computing the complexity function
2.3.5 Computing αc and αsat from the complexity
2.3.6 Rigidity and freezing transitions
2.3.7 Some values of the thresholds
3 Local Search Algorithms 
3.1 Simulated Annealing
3.1.1 Monte Carlo method
3.1.2 Metropolis algorithm
3.1.3 Heat bath algorithm
3.1.4 Relaxation time scale of the Monte Carlo dynamics
3.1.5 Cooling scheme
3.2 Focused algorithms
3.2.1 RandomWalkSAT algorithm
3.2.2 WalkSAT algorithm
3.2.3 Focused Metropolis algorithm
4 Message Passing Algorithms 
4.1 Belief propagation
4.1.1 BP messages
4.1.2 BP iterations
4.1.3 Marginals
4.1.4 Bethe free entropy and Entropy
4.1.5 Hard-fields
4.1.6 Warning Propagation
4.1.7 Survey Propagation
4.2 Algorithms
4.2.1 Sampling procedure
4.2.2 BP-guided decimation
4.2.3 SP-guided decimation
5 Performance of the algorithms on random Constraint Satisfaction Problems 
5.1 Small connectivity k
5.1.1 Two numerical experiments on local search algorithms at small k
5.1.2 Overview of the algorithmic performances for small values of k
5.1.3 Frozen variables and whitening dynamics
5.1.4 Analytical study of the BP-guided decimation
5.2 Large connectivity k
5.2.1 Asymptotic expansion of the thresholds
5.2.2 Algorithmic performances in the large k limit
6 Biased Measures for random Constraint Satisfaction Problems 
6.1 Definitions
6.1.1 Biased measure over the set of solutions
6.1.2 Intra-clause bias
6.1.3 Bias with interactions at distance 1
6.1.4 Bias with interactions at larger distance
6.2 Biased measures studied in the literature
6.2.1 Bias counting the frozen variables
6.2.2 Local entropy
6.2.3 Biased interactions in the hard sphere packing problem
7 Cavity Method 
7.1 Definition of the model
7.1.1 Graphical model
7.1.2 BP equations and Bethe free-energy
7.2 Replica symmetric cavity method
7.2.1 RS cavity equations
7.2.2 Intra-clause bias
7.2.3 Bias with interaction at distance 1
7.3 The dynamic transition
7.3.1 The broadcast process
7.3.2 The reconstruction problem and its recursive distributional equations
7.4 Hard fields and the naive reconstruction
7.4.1 Intra-clause bias
7.4.2 Bias with interactions at distance 1
7.5 The distribution of soft-fields
7.5.1 Uniform measure
7.5.2 Bias with interaction at distance 1
7.6 1RSB formalism
7.6.1 1RSB cavity equations
7.6.2 Simplifications for X = 1
7.6.3 Complexity (X = 1) and condensation threshold
7.7 Kesten-Stigum bound
7.7.1 Intra-clause bias
7.7.2 Bias with interactions at distance 1
8 Finite k results 
8.1 Numerical resolution for finite k
8.2 On the numerical determination of the dynamic transition
8.2.1 Scalar bifurcations
8.2.2 Discontinuous functional bifurcations
8.2.3 The stability parameter in the functional case
8.3 Results of the cavity method
8.3.1 The existence of a RS phase for α > αd,u
8.3.2 More detailed zero temperature phase diagrams
8.4 Results of Simulated Annealing
8.4.1 Estimating the algorithmic threshold for Simulated Annealing
8.4.2 Performances of Simulated Annealing with optimal RS parameters
8.4.3 Performances of Simulated Annealing with the biased measure
8.5 Comparison with the biased measure with interactions at distance 1
9 The asymptotics of the clustering transition for the uniform measure 
9.1 The large k limit for a finite distance n
9.1.1 Evolution of the hard fields
9.1.2 The reduced order parameter
9.1.3 Evolution of the soft fields distribution
9.2 The limit of large distance n
9.2.1 The regime
9.2.2 The difficulties for < 1
9.2.3 Reweighted probability distributions
9.2.4 Numerical resolution
9.2.5 The fixed point equation and the determination of αd
9.2.6 An analytic lower bound on αd
10 The asymptotics of the clustering transition for biased measures
10.1 A first upper-bound for the intra-clause bias
10.2 The asymptotics of the clustering threshold
10.2.1 Setting
10.2.2 A specialization of some formulas
10.3 The large k limit for a finite distance n
10.3.1 Evolution of the hard fields
10.3.2 Evolution of the soft fields distribution
10.4 The limit of large distance n
10.4.2 A reweighting scheme
10.4.3 A Gaussian approximation for the quasi-hard fields
10.4.4 Algorithmic implementation
10.5 Results
Conclusion 
Bibliography
