Algorithms’ Description
L1-Norm minimization
From equation (1), we have an underdetermined linear system, which admits infinitely many solutions. However, not all of these solutions are Sudoku solutions. In fact, for a Sudoku with a unique solution there is exactly one solution consisting only of 0's and 1's. We therefore have to single out this particular solution from among the many, which leads us to the optimization literature, where the optimal solution is found by minimizing a suitable cost function.
The recently emerged field of compressive sensing builds on the notion of sparse signal processing. A signal is sparse if it has few non-zero elements. The idea in compressive sensing is to reconstruct a signal from far fewer measurements than traditional methods require. Since the Sudoku solution can be shown to be the sparsest solution of equation (1), we can apply ideas from the compressive-sensing literature, where one natural cost function for finding the sparsest solution of (1) is the l1-norm.
As described in [5], it can be shown that the unique Sudoku solution is the sparsest solution of equation (1).
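As an illustration (not the thesis code), the l1-minimization step can be sketched via the standard linear-programming reformulation. The toy random matrix below is a stand-in for the much larger, structured 0/1 constraint matrix of equation (1); the sizes and seed are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Toy underdetermined system standing in for equation (1);
# the real Sudoku constraint matrix is larger and highly structured.
rng = np.random.default_rng(0)
m, n = 15, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 11, 25]] = 1.0            # a sparse 0/1 "solution"
b = A @ x_true

# min ||x||_1  s.t.  Ax = b, via the split x = u - v with u, v >= 0,
# which turns the l1 objective into a linear one.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
```

Since x_true is itself feasible, the LP optimum is guaranteed to satisfy ||x_hat||_1 <= ||x_true||_1; for Sudoku the same program is solved with the constraint matrix of equation (1).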
Reweighted l1-norm minimization
Because of the limited accuracy of l1-norm minimization, research has sought methods that match or outperform it. Numerical results in this thesis, as well as related work such as [5], show that in most situations reweighted l1-minimization outperforms the plain l1-norm method. Reweighted l1 is essentially a series of weighted l1-norm minimizations: assign initial weights, solve the weighted l1 problem with those weights, and then use the solution to form new weights for the next iteration. The weights push small components toward zero by penalizing them more heavily in the weighted l1-norm. The complete algorithm, as given in [5], is shown below.
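The iteration just described can be sketched as follows, reusing the LP reformulation of the l1 step. The epsilon smoothing term and the iteration count are illustrative assumptions here, not the exact values of [5].

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=4, eps=1e-3):
    """Iteratively reweighted l1-minimization: solve a weighted LP,
    then set each weight to 1/(|x_i| + eps) for the next pass."""
    m, n = A.shape
    w = np.ones(n)                           # iteration 1: uniform weights
    x = np.zeros(n)
    for _ in range(iters):
        res = linprog(np.concatenate([w, w]),          # weighted l1 objective
                      A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=[(0, None)] * (2 * n))
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + eps)          # penalize small components harder
    return x

# toy demo on a random underdetermined system
rng = np.random.default_rng(1)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[2, 9, 20]] = 1.0
x = reweighted_l1(A, A @ x_true)
```

Each pass keeps the solution feasible while the growing weights on small components drive them toward zero, which is why the scheme tends to beat a single unweighted l1 solve.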
SPICE and LIKES
As we have seen, for Sudoku the solution is the sparsest solution of equation (1). This opens the door to further methods from the sparse parameter estimation literature. One method we have presented is l1-norm minimization and its iterative version. Here we introduce two more sparse parameter estimation methods and their results on Sudoku: Sparse Iterative Covariance-based Estimation (SPICE) and Likelihood-based Estimation of Sparse parameters (LIKES). For details of these methods we refer to [2]; here we modify the problem presented in [2] to fit the Sudoku formulation of equation (1).
SPICE
Most sparse parameter estimation methods depend on the selection of one or more user parameters, which is usually a difficult task. SPICE, recently proposed in [2], does not have this drawback: it is based on a statistically sound covariance-fitting criterion. It is shown in [2] that SPICE is more accurate than l1-norm minimization. In this report we implement SPICE and present its results on the Sudoku problem.
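To convey the flavor of the method, the sketch below implements one common form of the single-snapshot SPICE-style power iteration. It is a simplified assumption-laden sketch of the approach in [2], not the exact algorithm: the initialization, the diagonal loading, and the omission of explicit noise columns are all choices made here for illustration.

```python
import numpy as np

def spice(A, y, iters=30):
    """Schematic single-snapshot SPICE-style iteration: alternate between
    forming the model covariance R = A diag(p) A^H and rescaling the
    powers p by the covariance-fitting update (noise terms omitted)."""
    m, n = A.shape
    col_norms = np.sum(np.abs(A) ** 2, axis=0)
    w = col_norms / np.linalg.norm(y) ** 2          # column weights
    p = np.abs(A.conj().T @ y) ** 2 / col_norms**2  # periodogram-style init
    for _ in range(iters):
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(m)   # loaded for stability
        z = np.abs(A.conj().T @ np.linalg.solve(R, y))  # |a_k^H R^{-1} y|
        rho = np.sum(np.sqrt(w) * p * z)
        p = p * z / (np.sqrt(w) * rho)                # power update
    return p

# toy demo
rng = np.random.default_rng(2)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[4, 17]] = 1.0
p = spice(A, A @ x_true)
```

Note that no user parameter enters the iteration itself, which is the practical advantage claimed for SPICE over l1-based methods.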
LIKES
We used the same initialization for LIKES as in SPICE, given in equation (4); the weights are then updated adaptively as in equation (5).
Thanks to the maximum-likelihood character of LIKES, and the fact that it uses adaptive weights, it is shown in [2] that LIKES estimates are more accurate than those of SPICE. We have applied this algorithm to Sudoku, and our results likewise show the improved accuracy of LIKES.
Semi Definite Relaxation
In recent years, Semidefinite Relaxation (SDR) has attracted interest in signal processing as a powerful, computationally efficient optimization technique for very hard optimization problems, in particular non-convex quadratically constrained quadratic programs [3]. In this section we reformulate the Sudoku problem so that it becomes amenable to SDR. Let us therefore write our problem of interest in a different form.
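To fix ideas, the standard lifting step behind such a reformulation can be sketched as follows (generic symbols are used here, with $x$ denoting the 0/1 indicator vector and $Ax=b$ the linear system of equation (1), since this is only a schematic of the usual SDR recipe):

```latex
% Binary constraints as quadratic equalities:
%   x_i \in \{0,1\} \iff x_i^2 - x_i = 0.
% Lift X = x x^{T}; the rank-one condition on X is the hard part,
% so relax it to a semidefinite one via the Schur complement:
\begin{aligned}
\text{find } & x, X \\
\text{s.t. } & A x = b, \qquad \operatorname{diag}(X) = x, \\
             & \begin{pmatrix} 1 & x^{T} \\ x & X \end{pmatrix} \succeq 0 .
\end{aligned}
```

Dropping only the rank-one requirement on $X$ is what makes the relaxed problem a convex semidefinite program.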
Iterative Adaptive Approach based on Weighted Least Squares
Since we are using sparse signal processing techniques to solve Sudoku, we also consider one more technique from that literature. One challenge with sparse signal processing techniques is that they are computationally heavy and slow, and the techniques discussed earlier have varying computational complexity on the Sudoku problem. We therefore tried another method which we expected to be faster. The method comes from the array processing literature and is described in [4]. Below we briefly explain its mathematical model and then apply it to Sudoku.
Array processing senses the locations and waveforms of sources by combining the signals received at a set of spatially separated sensors (e.g., an antenna array). The goal is to enhance the signal by combining the received signals in such a way that noise and other unwanted interference are suppressed. If the environment is stationary only for a limited time, very few snapshots may be available, which makes it challenging to localize the sources; sources that are close to each other, spatially or in Doppler shift, are also difficult to discriminate. In [4] it is shown that the array processing scenario can be modeled as a sparse signal representation, since the number of actual sources is much smaller than the number of potential source points considered. Moreover, sparsity-based techniques can work with as few as one snapshot, so [4] solves the localization problem with a sparsity-based technique named the Iterative Adaptive Approach for Amplitude and Phase EStimation (IAA-APES).
Sinkhorn Balancing
A slightly different algorithm from those discussed so far is presented in [7]. The problem formulation for the Sinkhorn balancing algorithm differs from the earlier ones: it computes the solution from a probabilistic representation of the Sudoku puzzle, and it is reported to solve all but the most difficult puzzles. We have included this method to compare its results with those of our presented methods. For details of the algorithm we refer to [7]; here we only present its results on our Sudoku puzzles. We are thankful to Todd Moon for providing us the MATLAB code for Sinkhorn balancing.
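While the full probabilistic solver of [7] is beyond the scope of this report, the core Sinkhorn iteration on which it builds, alternately normalizing the rows and columns of a positive matrix until it is (approximately) doubly stochastic, is easy to sketch:

```python
import numpy as np

def sinkhorn(M, iters=200):
    """Alternately normalize rows and columns; for a strictly positive
    matrix this converges to a doubly stochastic matrix."""
    M = np.asarray(M, dtype=float).copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)   # make each row sum to 1
        M /= M.sum(axis=0, keepdims=True)   # make each column sum to 1
    return M

# toy demo: balance a random positive 4x4 matrix
B = sinkhorn(np.random.default_rng(4).uniform(0.1, 1.0, (4, 4)))
```

In the Sudoku setting of [7], such normalizations enforce that the candidate-digit probabilities for each row, column, and box behave like proper probability distributions.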
Table of contents:
1 Introduction
1.1 Aim of the Project
1.2 Methodology
1.3 Report Organization
1.4 Related Work
2 Sudoku as a Linear System
2.1 Problem Formulation
3 Algorithms’ Description
3.1 L1-Norm minimization
3.2 Reweighted l1-norm minimization
3.2.1 Algorithm
3.3 SPICE and LIKES
3.3.1 SPICE
3.3.1.1 Problem formulation
3.3.1.2 Algorithm
3.3.2 LIKES
3.3.2.1 Algorithm
3.4 Semi Definite Relaxation
3.5 Iterative Adaptive Approach based on Weighted Least Squares
3.5.1 Data Model
3.5.2 IAA-APES
3.5.2.1 Algorithm
3.6 Sinkhorn Balancing
4 Puzzles
5 Results and Discussions
5.1 Puzzle Set 1
5.2 Puzzle Set 2
5.3 Puzzle Set 3
6 Conclusion and Future Work
7 References
Appendix