Modified SSOR Modelling for Linear Complementarity Problems
M.T. Yahyapour1, S.A. Edalatpanah1
1Department of Mathematics, Ramsar Branch, Islamic Azad University, Ramsar, Iran
Abstract
In this paper, we present an efficient numerical method for solving the linear complementarity problems based on preconditioning strategy. Furthermore, the convergence properties of the proposed method have been analyzed and compared with symmetric successive over-relaxation (SSOR) method. Finally, some numerical experiments are illustrated to show the efficiency of the proposed method.
Keywords: linear complementarity problem, preconditioning, iterative methods, SSOR method, comparison theorems, H-matrix
Turkish Journal of Analysis and Number Theory, 2014, 2 (2), pp. 47-52.
DOI: 10.12691/tjant-2-2-4
Received March 07, 2014; Revised April 10, 2014; Accepted April 17, 2014
Copyright © 2013 Science and Education Publishing. All Rights Reserved.
1. Introduction
For a given real vector q in R^n and a given matrix M in R^{n×n}, the linear complementarity problem, abbreviated as LCP(M, q), consists in finding vectors z in R^n such that

z ≥ 0,  Mz + q ≥ 0,  z^T(Mz + q) = 0,  (1.1)
where z^T denotes the transpose of the vector z. Many problems in operations research, management science, scientific computing and engineering lead to the solution of an LCP of the form (1.1). For more details, see [1, 2, 3] and the references therein.
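As a concrete illustration of (1.1), the following sketch checks the three conditions for a small, hand-picked LCP (the matrix M and vector q below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Illustrative 2x2 data: M is symmetric positive definite, so this LCP
# has a unique solution; here the unconstrained solve M z = -q happens
# to be nonnegative and is therefore the LCP solution.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, -1.0])

z = np.linalg.solve(M, -q)   # candidate solution z = (1/3, 1/3)
w = M @ z + q                # complementary vector w = Mz + q

assert np.all(z >= -1e-12)   # z >= 0
assert np.all(w >= -1e-12)   # Mz + q >= 0
assert abs(z @ w) < 1e-12    # z^T (Mz + q) = 0
```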
The early motivation for studying LCP(M, q) was that the KKT optimality conditions for linear and quadratic programs constitute an LCP of the form (1.1). The study of the LCP came into prominence in the 1960s, and over several decades many methods for solving LCP(M, q) have been introduced. Most of these methods originate from methods for systems of linear equations and may be classified into two principal kinds, direct and iterative methods (see [1, 2, 3]). Recently, various authors have suggested different models in the framework of iterative methods for the above-mentioned problem. For example, Yuan and Song [4], based on the models in [5], proposed a class of modified accelerated over-relaxation (MAOR) methods to solve LCP(M, q). Furthermore, when the system matrix M is an H-matrix, they gave some sufficient conditions for convergence of the MAOR and modified successive over-relaxation (MSOR) methods. Under certain conditions, Li and Dai [6], Saberi Najafi and Edalatpanah [7], and Dehghan and Hajarian [8] studied generalized accelerated over-relaxation (GAOR) and symmetric successive over-relaxation (SSOR) methods for solving LCP(M, q), respectively. For the case that M is non-Hermitian, Saberi Najafi and Edalatpanah [9, 10] proposed some new iterative methods for solving this class of LCP (for other iterative methods for LCP(M, q), see [11-17] and the references therein).
In this paper, we propose a modification of the SSOR method for LCP(M, q). To this end, the SSOR method is coupled with a preconditioning strategy. We also show that our method for solving the LCP is superior to the basic SSOR method. Numerical experiments show that the new method is feasible and efficient for solving large sparse linear complementarity problems.
2. Prerequisite
We begin with some basic notation and preliminary results which we refer to later.
Definition 2.1.
(a) The matrix A = (a_ij) is nonnegative (positive) if a_ij ≥ 0 (a_ij > 0) for all i, j. In this case we write A ≥ 0 (A > 0). Similarly, for n-dimensional vectors x, by identifying them with n×1 matrices, we can also define x ≥ 0 (x > 0).
(b) The matrix A = (a_ij) is called a Z-matrix if a_ij ≤ 0 for any i ≠ j.
(c) The square matrix A = (a_ij) is called an M-matrix if A = sI − B with B ≥ 0 and s ≥ ρ(B), where ρ(B) denotes the spectral radius of B.
(d) A Z-matrix A is an M-matrix if A is nonsingular and A^{-1} ≥ 0.
(e) For any matrix A = (a_ij), the comparison matrix <A> = (α_ij) is defined by α_ii = |a_ii| and α_ij = −|a_ij| for i ≠ j.
(f) The matrix A = (a_ij) is an H-matrix if and only if <A> is an M-matrix.
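Items (d)-(f) can be checked numerically. In the sketch below (helper names are illustrative), the M-matrix test uses the inverse-nonnegativity characterization of item (d):

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal (item (e))."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """Z-matrix with nonnegative inverse (item (d)); assumes A is nonsingular."""
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):          # not a Z-matrix
        return False
    return np.all(np.linalg.inv(A) >= -tol)

# A strictly diagonally dominant matrix with positive diagonal is an H-matrix:
A = np.array([[ 4.0, -1.0,  1.0],
              [ 2.0,  5.0, -1.0],
              [-1.0,  1.0,  3.0]])
assert is_M_matrix(comparison_matrix(A))   # so A is an H-matrix (item (f))
```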
Definition 2.2 [5]. For x in R^n, the vector x+ is defined by (x+)j = max{0, xj}, j = 1, 2, ..., n. Then, for any x, y in R^n, the following facts hold:
1. (x + y)+ ≤ x+ + y+,
2. x+ - y+ ≤ (x - y)+,
3. |x| = x+ + (-x)+,
4. x ≤ y implies x+ ≤ y+.
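The four facts above are easy to verify numerically; the following sketch checks them on random vectors (pure illustration):

```python
import numpy as np

def plus(x):
    """Componentwise projection (x+)_j = max{0, x_j} from Definition 2.2."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert np.all(plus(x + y) <= plus(x) + plus(y) + 1e-12)   # fact 1
    assert np.all(plus(x) - plus(y) <= plus(x - y) + 1e-12)   # fact 2
    assert np.allclose(np.abs(x), plus(x) + plus(-x))         # fact 3
    lo = np.minimum(x, y)                                     # lo <= y, so:
    assert np.all(plus(lo) <= plus(y) + 1e-12)                # fact 4
```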
Definition 2.3 [18, 19]. Let A be a real matrix. The splitting A = M − N is called:
(a) convergent if ρ(M^{-1}N) < 1;
(b) regular if M^{-1} ≥ 0 and N ≥ 0;
(c) weak regular if M^{-1} ≥ 0 and M^{-1}N ≥ 0;
(d) an M-splitting if M is an M-matrix and N ≥ 0.
Clearly, an M-splitting is regular and a regular splitting is weak regular.
Lemma 2.1 [18]. Let A be a Z-matrix. Then A is an M-matrix if and only if there is a positive vector x such that Ax > 0.
Lemma 2.2 [18]. Let A = M − N be an M-splitting of A. Then ρ(M^{-1}N) < 1 if and only if A is an M-matrix.
Lemma 2.3 [20]. Let A and B be Z-matrices and let A be an M-matrix. If A ≤ B, then B is an M-matrix too.
Lemma 2.4 [19]. If A ≥ 0, then
(1) A has a nonnegative real eigenvalue equal to its spectral radius ρ(A),
(2) to ρ(A) > 0, there corresponds a nonnegative eigenvector x ≥ 0,
(3) ρ(A) does not decrease when any entry of A is increased.
Lemma 2.5 [18]. Let T ≥ 0, let x > 0 be a vector and β > 0 a scalar.
(i) If Tx ≤ βx, then ρ(T) ≤ β. Moreover, if Tx < βx, then ρ(T) < β.
(ii) If Tx ≥ βx, then ρ(T) ≥ β. Moreover, if Tx > βx, then ρ(T) > β.
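Lemma 2.5(i) in action: for a nonnegative T, exhibiting a single positive vector x with Tx < βx certifies ρ(T) < β without computing eigenvalues. The matrix below is an arbitrary illustration:

```python
import numpy as np

T = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # T >= 0
x = np.ones(2)                      # a positive test vector
beta = 0.8

# Tx = (0.5, 0.5) < (0.8, 0.8) componentwise, so Lemma 2.5(i) applies:
assert np.all(T @ x < beta * x)
rho = max(abs(np.linalg.eigvals(T)))   # eigenvalues are 0.5 and 0.1
assert rho < beta                      # rho(T) < beta, as the lemma predicts
```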
Lemma 2.6 [5]. LCP(M, q) can be equivalently transformed into the fixed-point system of equations

z = (z − θE(Mz + q))+,  (2.1)

where θ is some positive constant and E is a diagonal matrix with positive diagonal elements.
Lemma 2.7 [5]. Let M be an H-matrix with positive diagonal elements. Then LCP(M, q) has a unique solution z*.
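The fixed-point form in Lemma 2.6 immediately suggests the basic projected iteration z ← (z − θE(Mz + q))+. The sketch below uses the ad-hoc choices θ = 1 and E = D^{-1} on a small positive definite example (all data illustrative):

```python
import numpy as np

M = np.array([[ 4.0, -1.0],
              [-1.0,  4.0]])
q = np.array([1.0, -2.0])

theta = 1.0
E = np.diag(1.0 / np.diag(M))       # one simple choice: E = D^{-1}

# Projected fixed-point iteration from (2.1); here the underlying map is a
# contraction (rho(I - D^{-1}M) = 0.25), so the iteration converges.
z = np.zeros(2)
for _ in range(200):
    z = np.maximum(z - theta * (E @ (M @ z + q)), 0.0)

w = M @ z + q
assert np.all(z >= -1e-8) and np.all(w >= -1e-8) and abs(z @ w) < 1e-8
```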
Let the matrix M be split as

M = D − L − U,  (2.2)

where D is the diagonal and −L and −U are the strictly lower and strictly upper triangular parts of M, respectively. Then, by choosing E = D^{-1} and θ = w, in view of Lemma 2.6 we have

z = (z − wD^{-1}(Mz + q))+.  (2.3)
So, in order to solve LCP(M, q), the SSOR iterative method is defined in [8] as follows: for k = 0, 1, ...,

z^{k+1/2} = ((1 − w)z^k + wD^{-1}(Lz^{k+1/2} + Uz^k − q))+,  (2.4)
z^{k+1} = ((1 − w)z^{k+1/2} + wD^{-1}(Uz^{k+1} + Lz^{k+1/2} − q))+,

where 0 < w < 2. They also proposed a second model, (2.5), with the two half-sweeps taken in the reverse order; because of its similarity we omit that case.
The operator f is defined such that f(z) = z̃, where z̃ is the fixed point of the system

z̄ = ((1 − w)z + wD^{-1}(Lz̄ + Uz − q))+,  (2.6)
z̃ = ((1 − w)z̄ + wD^{-1}(Uz̃ + Lz̄ − q))+.

Let

T_w = (D − w|U|)^{-1}(|1 − w|D + w|L|)(D − w|L|)^{-1}(|1 − w|D + w|U|).  (2.7)

Then the next lemma gives the convergence theorem proposed in [8] for the SSOR method.
Lemma 2.8 [8]. Let M = D − L − U be an H-matrix with positive diagonal elements and 0 < w < 2. Then, for any initial vector z^0, the iterative sequence {z^k} generated by the SSOR method converges to the unique solution z* of the LCP(M, q), and |z^{k+1} − z*| ≤ T_w |z^k − z*| with ρ(T_w) < 1.
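A projected SSOR sweep for LCP(M, q) in the spirit of the method analyzed in [8] can be sketched as follows: a forward Gauss-Seidel-type half sweep followed by a backward one, with each componentwise update projected onto z ≥ 0. This is a generic sketch of projected symmetric relaxation, not necessarily the paper's exact update formula:

```python
import numpy as np

def ssor_lcp(M, q, w=1.2, iters=300):
    """Projected SSOR sketch for LCP(M, q): a forward half sweep followed by
    a backward half sweep; each component update is projected onto z >= 0."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):                      # forward half sweep
            r = M[i] @ z + q[i]
            z[i] = max(0.0, z[i] - w * r / M[i, i])
        for i in reversed(range(n)):            # backward half sweep
            r = M[i] @ z + q[i]
            z[i] = max(0.0, z[i] - w * r / M[i, i])
    return z

# Illustrative tridiagonal H-matrix with positive diagonal elements:
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
q = np.array([1.0, -2.0, 1.0])

z = ssor_lcp(M, q)
w_vec = M @ z + q
assert np.all(z >= 0) and np.all(w_vec >= -1e-8) and abs(z @ w_vec) < 1e-8
```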
3. Preconditioning SSOR for LCP
The development of efficient and reliable preconditioned iterative methods is key to the successful application of scientific computation to many large-scale problems. Since the convergence rate of iterative methods depends on the spectral properties of the coefficient matrix, preconditioning schemes attempt to transform the original system into another one that has the same solution but more desirable properties for iterative solution. In this section, the SSOR method for the LCP is coupled with preconditioning. In the literature, various authors have suggested different models of (I+S)-type preconditioners for linear systems with A = I − L − U, where I is the identity matrix and L and U are the strictly lower and strictly upper triangular parts of A, respectively (see [21-28] and the references therein). These preconditioners are reasonably effective and cheap to construct. For example, Milaszewicz [21] presented a preconditioner of (I+S)-type that eliminates the elements of the first column below the diagonal of A. Usui et al. [22] considered the alternative preconditioner P = I + U, which eliminates the elements above the diagonal.
In the present section, we propose preconditioning methods for solving the linear complementarity problem. Generally, we rewrite M as (I+S)M. So, let M in (2.2) be nonsingular. Then the preconditioned form of M is

M̃ = (I + S)M = D̃ − L̃ − Ũ,  (3.1)

where D̃, −L̃ and −Ũ are the diagonal, strictly lower and strictly upper triangular parts of M̃, respectively. And the preconditioned problem is LCP(M̃, q̃) with

q̃ = (I + S)q.  (3.2)
We consider Usui et al.'s preconditioner [22] as (I + S). Therefore we have

P0 = I + S,  (3.3)

where S = (s_ij) with s_ij = −m_ij/m_jj for i < j and s_ij = 0 otherwise.
Note. We could also consider other (I+S)-type preconditioners, but here we use Usui et al.'s preconditioner, since the convergence rate with this preconditioner is slightly better than with the others; see [24].
Now, we propose new preconditioners for M. Our preconditioners for the LCP are:

(3.4)

(3.5)

And for i = 0, 1, 2, we have the splittings M_i = P_i M = D_i − L_i − U_i.
Thus the preconditioned SSOR method for the LCP is obtained by applying (2.4) to the preconditioned problem, that is, with M, q, D, L, U replaced by M_i, P_i q, D_i, L_i, U_i:

(3.6)
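To illustrate the intended effect of an (I+S)-type preconditioner on iteration speed, the sketch below uses a Usui-style upper-elimination S with entries s_ij = −m_ij/m_jj for i < j (an assumption for illustration; the paper's P1 and P2 are not reproduced here) and compares Jacobi iteration-matrix spectral radii before and after preconditioning:

```python
import numpy as np

def usui_type_S(M):
    """Illustrative (I+S)-type preconditioner: s_ij = -m_ij/m_jj for i < j."""
    n = M.shape[0]
    S = np.zeros_like(M)
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = -M[i, j] / M[j, j]
    return S

def jacobi_radius(A):
    """rho(D^{-1}(L+U)): a rough proxy for the speed of relaxation methods."""
    D = np.diag(np.diag(A))
    return max(abs(np.linalg.eigvals(np.linalg.inv(D) @ (D - A))))

M = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
Mt = (np.eye(3) + usui_type_S(M)) @ M    # preconditioned matrix

assert jacobi_radius(Mt) < jacobi_radius(M)   # smaller radius: faster iteration
```

The spectral radius of the point-Jacobi matrix is used here only as a rough, easily computed proxy for the speed of the (projected) relaxation methods.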
Lemma 3.1. Let M be an H-matrix. Then the preconditioned matrix M̃ = (I + S)M is also an H-matrix.
Proof. Let M be an H-matrix. Then <M> is an M-matrix and, by Lemma 2.1, there is a positive vector x such that <M>x > 0. Since <M̃> = <(I + S)M> ≥ <I + S><M>, and for the preconditioners considered here <I + S><M>x > 0, it follows that <M̃>x > 0. Therefore, by Lemma 2.1, <M̃> is an M-matrix and the proof is completed.
Theorem 3.2. Let M = D − L − U with positive diagonal elements be an H-matrix, and let M̃ be the preconditioned form of M with preconditioners (3.3)-(3.5). Then, for 0 < w < 2, the spectral radius of the preconditioned SSOR iteration matrix does not exceed that of the SSOR iteration matrix, and both are less than one.
Proof. By Lemma 3.1, M̃ is an H-matrix. Hence <M̃> is an M-matrix and, by Lemma 2.2, the spectral radius of the associated iteration matrix is less than one. By Lemma 2.3, the splitting matrices of <M̃> are M-matrices; by the same argument, Q is an M-matrix too. Then, by Lemma 2.4 (the Perron-Frobenius theorem), there exists a positive vector x realizing the spectral radius of the SSOR iteration matrix. Furthermore, comparing the actions of the two iteration matrices on x for each P_i (i = 0, 1, 2), and using the fact that both splitting matrices are M-matrices, Lemma 2.5 yields the required comparison of spectral radii. Therefore, by Lemma 2.8, the proof is completed.
Now, we show that for the LCP, the convergence rate of the preconditioned SSOR methods is faster than that of the SSOR method.
Theorem 3.3. Let M = D − L − U with positive diagonal elements be an H-matrix, 0 < w < 2, and let M̃ be the preconditioned form of M with preconditioners (3.3)-(3.5). Then the convergence rate of the preconditioned SSOR methods is faster than that of the SSOR method.
Proof. Let the iterative sequence {z_i}, i = 0, 1, ..., be generated by (3.6). From the assumption that M is an H-matrix it follows, by Lemma 3.1, that M̃ is an H-matrix; therefore, by Lemma 2.7, the vector sequence {z_i} is uniquely defined and the LCP(M, q) has a unique solution z*. Similar to (2.6), we define the operator f̃ such that f̃(z) = z̃, where z̃ is the fixed point of the following system:
(3.7)
Let
(3.8)
By subtracting (3.7) and (3.8), we get:
Therefore, by the above relations, we have
(3.9)
Thus, from the definition of the preconditioned SSOR method and (3.9), we can write
Hence, the iterative sequence {z_k}, k = 0, 1, ..., converges to z* whenever the spectral radius of the corresponding iteration matrix is less than one. Since, by Theorem 3.2, the spectral radius of the preconditioned iteration matrix does not exceed that of the SSOR iteration matrix, we conclude that for solving the LCP the preconditioned SSOR iterative methods are superior to the SSOR method from the point of view of convergence speed, and the proof is completed.
4. Numerical Results
In this section, we give examples to illustrate the results obtained in the previous sections. The examples were computed with MATLAB 7 on a Pentium 4 personal computer.
Example 4.1. Consider the LCP(M, q) with the following system, where ⊗ denotes the Kronecker product and G and F are the tridiagonal matrices given below.
Evidently, M is an H-matrix with positive diagonal elements, so the LCP(M, q) has a unique solution. We then solved the resulting n³ × n³ H-matrix system by the iterative methods and their preconditioned forms.
In this experiment, we choose Usui et al.'s model and our models (P1, P2) as preconditioners. The initial approximation z^0 and the stopping criterion are chosen as follows:
In Table 1, for several values of w, we report the CPU time (CPU) and number of iterations (Iter) for the corresponding SSOR and preconditioned SSOR methods (when N = 1000). Here, the preconditioned SSOR method with Usui et al.'s preconditioner is denoted by PREC(USUI), while PREC(P1) and PREC(P2) correspond to our preconditioners P_i, i = 1, 2.
From the table, we can see that the preconditioned iterative methods are superior to the basic iterative methods, and that our preconditioners outperform Usui et al.'s preconditioner.
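The tridiagonal blocks G and F of Example 4.1 are not reproduced here, so the sketch below builds a stand-in Kronecker-product test matrix of the same flavor (a 3D finite-difference-type matrix; the entries 6.5 and −1 are assumptions, not the paper's data) and verifies that it is an H-matrix with positive diagonal elements:

```python
import numpy as np

def tridiag(n, lo, d, up):
    """n x n tridiagonal matrix with constant diagonals lo, d, up."""
    return (np.diag([float(d)] * n)
            + np.diag([float(lo)] * (n - 1), -1)
            + np.diag([float(up)] * (n - 1), 1))

n = 3                               # small n; the paper uses n^3 x n^3 systems
I = np.eye(n)
G = tridiag(n, -1.0, 6.5, -1.0)     # hypothetical stand-in for G
F = tridiag(n, -1.0, 0.0, -1.0)     # hypothetical stand-in for F (zero diagonal)

# An n^3 x n^3 matrix built from Kronecker products, analogous in structure
# to the system of Example 4.1:
M = (np.kron(np.kron(I, I), G)
     + np.kron(np.kron(I, F), I)
     + np.kron(np.kron(F, I), I))

# M is a Z-matrix with positive diagonal; strict diagonal dominance
# (6.5 > the at most six off-diagonal -1 entries per row) makes it an
# M-matrix, hence in particular an H-matrix with positive diagonal.
assert np.all(np.diag(M) > 0)
assert np.all(M - np.diag(np.diag(M)) <= 0)   # Z-matrix
assert np.all(np.linalg.inv(M) >= -1e-10)     # inverse >= 0: M-matrix
```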
Example 4.2. Consider the LCP(M, q), where
It is easy to see that M is an H-matrix with positive diagonal elements. The spectral radius of the corresponding iteration matrix for different parameters w and the above preconditioners (P0, P1, P2), for N = 500, is shown in semi-logarithmic scale in Figure 1. From these figures, we can see that the preconditioned SSOR methods are superior to the classic SSOR method.
Furthermore, in Table 2, we report the CPU time of the iterative methods for different values of n (when w = 1).
5. Conclusion
In this paper, we have proposed preconditioned SSOR methods for the linear complementarity problem and analyzed their convergence under certain conditions. We have also studied how the iterative method for the LCP is affected when the system is preconditioned by our models.
Acknowledgements
The authors would like to thank Ramsar Branch of Islamic Azad University for the financial support of this research, which is based on a research project contract.
References
[1] Murty, K.G. (1988). Linear Complementarity, Linear and Nonlinear Programming. Heldermann Verlag: Berlin.
[2] Cottle, R.W., Pang, J.S. & Stone, R.E. (1992). The Linear Complementarity Problem. Academic Press: New York.
[3] Bazaraa, M.S., Sherali, H.D. & Shetty, C.M. (2006). Nonlinear Programming: Theory and Algorithms. Third edition. Hoboken, NJ: Wiley-Interscience.
[4] Yuan, D., Song, Y.Z. (2003). Modified AOR methods for linear complementarity problem. Appl. Math. Comput. 140, 53-67.
[5] Bai, Z.Z., Evans, D.J. (1997). Matrix multisplitting relaxation methods for linear complementarity problems. Int. J. Comput. Math. 63, 309-326.
[6] Li, Y., Dai, P. (2007). Generalized AOR methods for linear complementarity problem. Appl. Math. Comput. 188, 7-18.
[7] Saberi Najafi, H., Edalatpanah, S.A. (2013). On the convergence regions of generalized AOR methods for linear complementarity problems. J. Optim. Theory Appl. 156, 859-866.
[8] Dehghan, M., Hajarian, M. (2009). Convergence of SSOR methods for linear complementarity problems. Oper. Res. Lett. 37, 219-223.
[9] Saberi Najafi, H., Edalatpanah, S.A. (2012). A kind of symmetrical iterative methods to solve special class of LCP(M, q). International Journal of Applied Mathematics and Applications 4 (2), 183-189.
[10] Saberi Najafi, H., Edalatpanah, S.A. (2013). SOR-like methods for non-Hermitian positive definite linear complementarity problems. Advanced Modeling and Optimization 15 (3), 697-704.
[11] Cvetkovic, Lj., Rapajic, S. (2007). How to improve MAOR method convergence area for linear complementarity problems. Appl. Math. Comput. 162, 577-584.
[12] Dong, J.-L., Jiang, M.-Q. (2009). A modified modulus method for symmetric positive-definite linear complementarity problems. Numer. Linear Algebra Appl. 16, 129-143.
[13] Bai, Z.-Z. (2010). Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917-933.
[14] Zhang, L.-L. (2011). Two-step modulus based matrix splitting iteration method for linear complementarity problems. Numer. Algor. 57, 83-99.
[15] Cvetkovic, Lj., Kostic, V. (2013). A note on the convergence of the MSMAOR method for linear complementarity problems. Numer. Linear Algebra Appl.
[16] Saberi Najafi, H., Edalatpanah, S.A. (2013). Modification of iterative methods for solving linear complementarity problems. Engrg. Comput. 30, 910-923.
[17] Saberi Najafi, H., Edalatpanah, S.A. (2013). Iterative methods with analytical preconditioning technique to linear complementarity problems. RAIRO-Oper. Res. 47, 59-71.
[18] Berman, A., Plemmons, R.J. (1979). Nonnegative Matrices in the Mathematical Sciences. Academic Press: New York.
[19] Varga, R.S. (2000). Matrix Iterative Analysis. Second edition. Berlin: Springer.
[20] Frommer, A., Szyld, D.B. (1992). H-splittings and two-stage iterative methods. Numer. Math. 63, 345-356.
[21] Milaszewicz, J.P. (1987). Improving Jacobi and Gauss-Seidel iterations. Linear Algebra Appl. 93, 161-170.
[22] Usui, M., Niki, H. & Kohno, T. (1994). Adaptive Gauss-Seidel method for linear systems. Intern. J. Computer Math. 51, 119-125.
[23] Hirano, H., Niki, H. (2011). Application of a preconditioning iterative method to the computation of fluid flow. Numer. Funct. Anal. Optimiz. 22, 405-417.
[24] Li, J.C., Li, W. (2008). The optimal preconditioner of strictly diagonally dominant Z-matrix. Acta Mathematicae Applicatae Sinica, English Series 24, 305-312.
[25] Saberi Najafi, H., Edalatpanah, S.A. (2013). Comparison analysis for improving preconditioned SOR-type iterative method. Numerical Analysis and Applications 6, 62-70.
[26] Saberi Najafi, H., Edalatpanah, S.A. (2013). A collection of new preconditioners for solving linear systems. Sci. Res. Essays 8 (31), 1522-1531.
[27] Saberi Najafi, H., Edalatpanah, S.A., Gravvanis, G.A. (2014). An efficient method for computing the inverse of arrowhead matrices. Applied Mathematics Letters 33, 1-5.
[28] Saberi Najafi, H., Edalatpanah, S.A., Refahi Sheikhani, A.H. (2014). Convergence analysis of modified iterative methods to solve linear systems. Mediterranean Journal of Mathematics.