Research Article
Open Access, Peer-reviewed

Sensitivity Analysis in Linear Fractional Programming with Optimality Condition

Ladji Kané, Moussa Konaté, Moumouni Diallo, Lassina Diabaté
American Journal of Modeling and Optimization. 2020, 8(1), 7-14. DOI: 10.12691/ajmo-8-1-2
Received August 13, 2020; Revised September 16, 2020; Accepted September 25, 2020

Abstract

In this paper, an overview of theoretical and methodological issues in simplex-method-based sensitivity analysis is proposed. The paper focuses on developing shortcut methods for performing Linear Fractional Programming (LFP) sensitivity analysis manually, in particular for changes in the parameters of the LFP model. Shortcut methods for conducting sensitivity analysis are suggested, and simple examples are given to illustrate the proposed method.

1. Introduction

Many managerial decisions hinge on the issue of how to make the most of a company’s resources of raw material, manpower, time, and facilities. LFP is a technique that aims at optimizing performance with respect to combinations of resources, and it can offer managers the capability of building scenarios through its extensive “what if” analysis and sensitivity analysis facilities. Most practical LFP problems, however, would require a very long time to solve manually.

When we are dealing with sensitivity analysis, we are initially looking into changes that might happen to the parameters of the LFP model. These possible changes lead us to investigate changes in the Right-Hand-Side of the model constraints and in the coefficients of the objective function [1, 2]. Once the optimal solution to an LFP problem has been obtained using the simplex algorithm, it may be desirable to study whether the current optimal solution stays optimal when one or more of the problem parameters change. It is crucial to figure out how sensitive the optimal solution is to changes in the model parameters [1, 2]. Sensitivity analysis (post-optimality analysis), therefore, looks at “what if” scenarios. What happens to the cash position, for example, if sales fall by 5%? What happens if a primary supplier increases raw material prices by 12%?

When we deal with practical problems, sensitivity analysis is often even more important than the optimal solution itself. Such an analysis transforms the LFP solution into a valuable tool for studying the effect of changing conditions in management, business, and industry. Including a sensitivity analysis report with the organization’s business plan shows that some of the potential risks have been thought through, and that is halfway to avoiding them. Sensitivity analysis can therefore help in making proper decisions. For example, we may want to consider the effect of increasing the labor force, decreasing overhead charges, or reducing capacities because of over-optimistic forecasts, and what effect these actions would have in counteracting competitors.

2. Literature Review

The literature on sensitivity analysis is enormous and diverse. In the late 1980s and early 1990s, several researchers in operations research worked on the topic of LP sensitivity analysis, and some significant advances were produced in LP sensitivity analysis and related problems. The research in this field was extensively carried out by many operational research specialists: [5, 9, 13, 16, 17, 18] worked on sensitivity analysis of individual parameters but excluded simultaneous changes in the LP parameters.

[3] studied sensitivity analysis for the parameters of structured problems. Khan et al. (2011) studied product profits by using LP techniques and sensitivity analysis.

However, the existing literature concerning our research scope is limited. Most of the previous work on sensitivity analysis focused on lengthy methodologies and procedures that consume a significant amount of time to arrive at the sensitivity analysis results. [21] introduced two kinds of sensitivity analysis: the first defines the properties of the sensitivity region, while the second is positive sensitivity analysis.

More recent works on linear fractional programming theory and methods can be found in [1, 2]. The method suggested in this paper depends mainly on updating the simplex table in an iterative manner; the optimality condition for a given basic feasible solution of the LFP is then defined.

3. Research Objectives

Calculating the ranges for optimality and feasibility in sensitivity analysis is covered in most operations research books and most quantitative textbooks. The methods used vary from one book to another, although all of them achieve the same results. These methods may take a very long time and considerable effort when the sensitivity issues are solved manually. Many of them involve lengthy mathematical approaches that require students to be well equipped with advanced techniques such as matrices and vectors, while others need a long routine to get the results. It is much more practical to use lighter and sounder methods that obtain the sensitivity analysis results with ease and in less time. Thus, this paper explains how shortcut methods can be used to derive the sensitivity analysis results.

4. Materials and Methods

This part is devoted to the study of the simplex method. This method is the main tool for solving linear programming problems. It consists of following a certain number of stages before obtaining the solution of a given problem. It is an iterative algebraic method that finds the exact solution of a linear programming problem in a finite number of steps.

4.1. Mathematical Formulation of LFP

A general problem of linear fractional programming can be formulated as follows: find the values of the variables \(x_1, x_2, \ldots, x_n\) satisfying inequalities or linear equations (constraints) of the form

\[ a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n \;\{\le, =, \ge\}\; b_i, \qquad i = 1, \ldots, m, \]

where each constraint may have a different sign of inequality. In addition, the variables must be non-negative, that is to say \(x_j \ge 0\) for \(j = 1, \ldots, n\) (constraints of non-negativity), and must Maximize or Minimize a linear fractional form (objective function) such as

\[ Z(x) = \frac{P(x)}{D(x)} = \frac{c_1 x_1 + c_2 x_2 + \cdots + c_n x_n + c_0}{d_1 x_1 + d_2 x_2 + \cdots + d_n x_n + d_0}, \]

where the \(c_j\), \(d_j\), \(c_0\), and \(d_0\) are known reals and \(D(x) > 0\) on the feasible region.

The matrix form makes it possible to represent a problem of linear fractional programming in a more concise form.

Objective function to Maximize or Minimize: \( Z(x) = \dfrac{c^{T}x + c_0}{d^{T}x + d_0} \)

Subject to \( Ax \;\{\le, =, \ge\}\; b \) and \( x \ge 0, \)

where we have the following matrices: \(A = (a_{ij}) \in \mathbb{R}^{m \times n}\), \(b \in \mathbb{R}^{m}\), \(c, d \in \mathbb{R}^{n}\), and \(x = (x_1, \ldots, x_n)^{T}\), the vector of decision variables.
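For readers who want to verify an LFP instance numerically, the matrix form above can also be solved with off-the-shelf LP software via the classical Charnes-Cooper transformation; this is a verification aid, not the simplex-table procedure developed in the next subsection. The sketch below is a minimal illustration in Python: the data A, b, c, d, c0, d0 are hypothetical, and it assumes maximization with constraints \(Ax \le b\), \(x \ge 0\), and a positive denominator on the feasible region.

    import numpy as np
    from scipy.optimize import linprog

    def solve_lfp_charnes_cooper(A, b, c, d, c0, d0):
        """Maximize (c.x + c0) / (d.x + d0) s.t. A x <= b, x >= 0,
        assuming d.x + d0 > 0 on the feasible region.
        Charnes-Cooper: y = t*x with t = 1/(d.x + d0) turns the LFP into an LP."""
        A, b, c, d = map(np.asarray, (A, b, c, d))
        m, n = A.shape
        # LP variables: (y_1..y_n, t).  Maximize c.y + c0*t  ->  minimize -(c.y + c0*t).
        obj = np.concatenate([-c, [-c0]])
        A_ub = np.hstack([A, -b.reshape(m, 1)])          # A y - b t <= 0
        b_ub = np.zeros(m)
        A_eq = np.concatenate([d, [d0]]).reshape(1, n + 1)  # d.y + d0*t = 1
        b_eq = np.array([1.0])
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1), method="highs")
        if not res.success:
            raise ValueError("LP solver failed: " + res.message)
        y, t = res.x[:n], res.x[n]
        x = y / t                      # recover the original variables (t > 0 here)
        value = (c @ x + c0) / (d @ x + d0)
        return x, value

    # Hypothetical two-variable instance (not taken from the paper).
    A = [[1.0, 1.0], [2.0, 1.0]]
    b = [4.0, 6.0]
    c, c0 = [3.0, 2.0], 1.0
    d, d0 = [1.0, 1.0], 2.0
    x_opt, z_opt = solve_lfp_charnes_cooper(A, b, c, d, c0, d0)
    print("x* =", x_opt, " Z(x*) =", z_opt)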

4.2. Simplex Table and Iteration Procedure

When applying the simplex method by hand, it is best to work with a table that contains all the necessary data. Each iteration corresponds to a new table whose entries are described in the next subsection.


4.2.1. Simplex Table:

Consider the linear fractional program (LFP):

Maximize (or Minimize) \( Z(x) = \dfrac{c^{T}x + c_0}{d^{T}x + d_0} \)

subject to \( Ax \le b \) and \( x \ge 0. \)

We are going to transform the inequalities encountered into equalities. This transformation is done simply by introducing non-negative variables (which satisfy the constraints of non-negativity) called slack variables. If the \(i\)-th constraint is of the type \( a_{i1}x_1 + \cdots + a_{in}x_n \le b_i \), we introduce a slack variable \( x_{n+i} \ge 0 \) (the slack variable for the \(i\)-th constraint) and write the canonical form

\[ a_{i1}x_1 + \cdots + a_{in}x_n + x_{n+i} = b_i. \]

So we have

Maximize (or Minimize) \( Z(x) = \dfrac{c^{T}x + c_0}{d^{T}x + d_0} \)

Subject to \( Ax + I x_{s} = b \) and \( x \ge 0,\ x_{s} \ge 0, \) where \(x_{s}\) is the vector of slack variables.

In the \(s\)-th iteration, i.e., in the \(s\)-th table (called the simplex table), and for the current basis \(B\), we have: the column of basic variables is \(x_B\) and the set of nonbasic variables is \(x_N\); the matrix of the coefficients of the basic variables in the numerator \(P(x)\) is \(c_B\); the matrix of the coefficients of the basic variables in the denominator \(D(x)\) is \(d_B\); the solution matrix is \( \bar{b} = B^{-1}b \); and the matrices of each column of the table are \( \bar{a}_j = B^{-1}a_j \).

The opportunity and marginal costs of each activity \(j\) are \( \Delta'_j = z'_j - c_j \) with \( z'_j = c_B^{T}B^{-1}a_j \), and \( \Delta''_j = z''_j - d_j \) with \( z''_j = d_B^{T}B^{-1}a_j \). The values of the functions \(P\) and \(D\) are \( P(x_B) = c_B^{T}\bar{b} + c_0 \) and \( D(x_B) = d_B^{T}\bar{b} + d_0 \). Moreover, \( Z(x_B) = P(x_B)/D(x_B) \) and the combined reduced cost of activity \(j\) is \( \Delta_j = D(x_B)\,\Delta'_j - P(x_B)\,\Delta''_j. \)
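To make the notation concrete, the short sketch below computes, for an assumed basis, the quantities listed in this subsection: the basic solution \(B^{-1}b\), the updated columns \(B^{-1}a_j\), the two families of reduced costs \(\Delta'_j\) and \(\Delta''_j\), the values \(P\) and \(D\), and the combined costs \(\Delta_j\). Function and variable names are assumptions chosen to mirror the notation used here.

    import numpy as np

    def lfp_table_quantities(A, b, c, c0, d, d0, basis):
        """Quantities of the simplex table for 'basis' (a list of m column indices)."""
        A, b, c, d = (np.asarray(v, dtype=float) for v in (A, b, c, d))
        Binv = np.linalg.inv(A[:, basis])
        b_bar = Binv @ b                 # basic solution  B^{-1} b
        A_bar = Binv @ A                 # updated columns B^{-1} a_j
        z_p = c[basis] @ A_bar           # z'_j  = c_B^T B^{-1} a_j
        z_q = d[basis] @ A_bar           # z''_j = d_B^T B^{-1} a_j
        dP = z_p - c                     # Delta'_j
        dQ = z_q - d                     # Delta''_j
        P = c[basis] @ b_bar + c0        # numerator value  P(x_B)
        D = d[basis] @ b_bar + d0        # denominator value D(x_B)
        delta = D * dP - P * dQ          # combined reduced costs Delta_j
        return b_bar, A_bar, dP, dQ, P, D, delta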


4.2.2. Iteration Procedure: Simplex Algorithm

Consider the following problem (LFP) in canonical form:

Maximize \( Z(x) = \dfrac{c^{T}x + c_0}{d^{T}x + d_0} \)

subject to \( Ax = b \) and \( x \ge 0, \) with \( b \ge 0 \) and \( D(x) > 0 \) on the feasible region.

Simplex Algorithm (Maximization Form)

STEP (0) The problem is initially in canonical form, all \( \bar{b}_i \ge 0 \), and we construct the initial table of the simplex.

STEP (1) If \( \Delta_j \ge 0 \) for every nonbasic index \(j\), then stop; we are optimal. If we continue, then there exists some \( \Delta_j < 0. \)

STEP (2) Choose the column \(k\) to pivot in (i.e., the variable \(x_k\) to introduce into the basis) by \( \Delta_k = \min_j \{\Delta_j : \Delta_j < 0\}. \)

If \( \bar{a}_{ik} \le 0 \) for every row \(i\), then stop; the primal problem is unbounded.

If we continue, then \( \bar{a}_{ik} > 0 \) for some \(i.\)

STEP (3) Choose the row \(r\) to pivot in (i.e., the variable to drop from the basis) by the ratio test: \( \dfrac{\bar{b}_r}{\bar{a}_{rk}} = \min_i \left\{ \dfrac{\bar{b}_i}{\bar{a}_{ik}} : \bar{a}_{ik} > 0 \right\}. \)

STEP (4) Replace the basic variable in row \(r\) with variable \(x_k\) and re-establish the canonical form (i.e., pivot on the coefficient \( \bar{a}_{rk} \)).

STEP (5) Set \( s \leftarrow s + 1. \)

STEP (6) Go to step (1).

These steps are the essential computations of the simplex method.

Optimal solution:

If the current table is optimal, then the current basis is \(B\) and the corresponding solution is \( x_B = B^{-1}b \ge 0. \) Moreover, the current nonbasic variables are \(x_N\) and the corresponding solution is \( x_N = 0. \) Hence the optimal solution to the problem can be written as \( x^{*} = (x_B, x_N) = (B^{-1}b, 0), \) with the associated value of the objective function \( Z(x^{*}) = \dfrac{c_B^{T}B^{-1}b + c_0}{d_B^{T}B^{-1}b + d_0}. \)
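The sketch below mirrors the steps above for a problem already in canonical (equality) form: it computes the two families of reduced costs, forms the combined criterion \( \Delta_j = D(x)\Delta'_j - P(x)\Delta''_j \), and pivots until all \( \Delta_j \) are non-negative. It is a didactic illustration, not the authors' implementation; the names and the dense-pivot style are assumptions, and no anti-cycling rule is included.

    import numpy as np

    def lfp_simplex(A, b, c, c0, d, d0, basis):
        """Maximize (c.x + c0) / (d.x + d0)  subject to  A x = b, x >= 0,
        starting from the basic feasible solution defined by 'basis'
        (a list of m column indices).  Returns (x, Z, basis)."""
        A, b, c, d = (np.asarray(v, dtype=float) for v in (A, b, c, d))
        m, n = A.shape
        while True:
            Binv = np.linalg.inv(A[:, basis])
            xB = Binv @ b                          # current basic solution B^{-1} b
            P = c[basis] @ xB + c0                 # numerator value  P(x)
            D = d[basis] @ xB + d0                 # denominator value D(x), assumed > 0
            Abar = Binv @ A                        # updated columns  B^{-1} a_j
            dP = c[basis] @ Abar - c               # Delta'_j  = z'_j  - c_j
            dQ = d[basis] @ Abar - d               # Delta''_j = z''_j - d_j
            delta = D * dP - P * dQ                # combined reduced costs Delta_j
            if np.all(delta >= -1e-9):             # STEP (1): optimality test
                x = np.zeros(n)
                x[basis] = xB
                return x, P / D, basis
            k = int(np.argmin(delta))              # STEP (2): entering column
            col = Abar[:, k]
            if np.all(col <= 1e-9):
                raise ValueError("the problem is unbounded")
            ratios = np.full(m, np.inf)            # STEP (3): ratio test
            pos = col > 1e-9
            ratios[pos] = xB[pos] / col[pos]
            r = int(np.argmin(ratios))             # leaving row
            basis[r] = k                           # STEP (4): pivot; STEPS (5)-(6): repeat

Starting from the all-slack basis produced by the transformation of Section 4.2.1, the routine reproduces the table-by-table iterations described above; degenerate instances may cycle, since no anti-cycling rule is included.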

Example: Simplex algorithm

Consider the following linear fractional program:

Subject to

To convert these inequality constraints to equalities, we add slack variables to the left-hand side of each inequality. The constraints become

Because slack variables represent unused resources (such as time on a machine or labor-hours available), they yield no profit, but we must add them to the objective function with zero profit coefficients.

Thus, the objective function becomes

After the addition of the slack variables, the initial table can be written as:

The initial table contains the problem formulation, which is in canonical form, with the slack variables as basic variables and the decision variables as nonbasic variables at value zero. This table is not optimal because at least one reduced cost is negative. Hence the table is transformed and we obtain the next table.

The second table again contains the problem in canonical form, with an updated set of basic variables and nonbasic variables at value zero. It is still not optimal because a negative reduced cost remains. Hence the table is transformed once more and we obtain the next table.

The final table is optimal because every reduced cost is non-negative. Thus the current basis and the corresponding basic solution can be read directly from the table, and the nonbasic variables take the value zero. Hence the optimal solution to the problem, together with the associated value of the objective function, is obtained from this table.

5. Results and Discussion

To apply sensitivity analysis to LFP problems, the optimal solution of the simplex method must be available. The essence of sensitivity analysis is to examine how marginal changes in the parameters of the problem might affect the derived optimal solution. The most commonly taught topics of sensitivity analysis at academic institutes comprise the following items:

1) Changes in the objective function coefficients (numerator or denominator)

2) Changes in the Right-Hand-Side values of the constraints

3) Changes in the Left-Hand-Side values of the constraints

4) Adding a new decision variable

As mentioned earlier, it is expected that the methods for performing sensitivity analysis taught in educational institutes should be easy to apply and short in procedure. In this paper, the authors have developed and implemented simple methods for calculating sensitivity analysis, which they have used in teaching Operations Research courses throughout their long careers in education. Demonstrations of these methods are presented below.

5.1. Changes in the Right-Hand-Side Values of the Constraints

If we replace the right-hand-side vector \(b\) by a new vector \(b'\), the optimal solution will remain optimal if and only if the updated basic solution remains feasible, \( B^{-1}b' \ge 0, \) and the optimality criterion continues to hold at this new basic solution.

Example: changes in the Right-Hand-Side values of the constraints to new values. Substituting the new right-hand side into the optimal table, the optimal solution will remain optimal if and only if the condition above is satisfied for these new values.
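A sketch of this check is given below: it verifies that the updated basic solution \(B^{-1}b'\) stays non-negative and that the LFP reduced costs, re-evaluated at that solution, stay non-negative (unlike plain LP, they depend on the new values of \(P\) and \(D\)). The function name and data layout are assumptions.

    import numpy as np

    def rhs_change_keeps_basis_optimal(A, b_new, c, c0, d, d0, basis, tol=1e-9):
        """Check whether the current optimal basis survives replacing b by b_new."""
        A, b_new, c, d = (np.asarray(v, dtype=float) for v in (A, b_new, c, d))
        Binv = np.linalg.inv(A[:, basis])
        xB = Binv @ b_new
        if np.any(xB < -tol):                       # feasibility: B^{-1} b' >= 0
            return False
        P = c[basis] @ xB + c0                      # new numerator value
        D = d[basis] @ xB + d0                      # new denominator value
        Abar = Binv @ A
        delta = D * (c[basis] @ Abar - c) - P * (d[basis] @ Abar - d)
        return bool(np.all(delta >= -tol))          # optimality at the new solution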

5.2. Changes in the Objective Function Coefficients (Numerator or Denominator)

Case 1: Changes in one of the objective function coefficients.

The optimal solution will remain optimal if and only if the corresponding condition holds

for all the nonbasic indices and for the row of the modified variable in the optimal table.

Example: changes in one of the objective function coefficients.

In the optimal table the relevant entries are read off, and

the optimal solution will remain optimal if and only if the resulting condition is satisfied.

Case 2: Changes in one of the objective function coefficients.

The optimal solution will remain optimal if and only if the corresponding condition holds

for all the nonbasic indices and for the row of the modified variable in the optimal table.

Example: changes in one of the objective function coefficients.

In the optimal table the relevant entries are read off, and

the optimal solution will remain optimal if and only if the resulting condition is satisfied.

Case 3: Changes in one of the objective function coefficients.

The optimal solution will remain optimal if and only if the corresponding condition holds.

Example: changes in one of the objective function coefficients.

In the optimal table the relevant entries are read off, and

the optimal solution will remain optimal if and only if the resulting condition is satisfied.

Case 4: Changes in one of the objective function coefficients.

The optimal solution will remain optimal if and only if the corresponding condition holds.

Example: changes in one of the objective function coefficients.

In the optimal table the relevant entries are read off, and

the optimal solution will remain optimal if and only if the resulting condition is satisfied.
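Since the closed-form ranges of the four cases are not reproduced above, the sketch below performs the generic test that underlies all of them: after modifying numerator and/or denominator coefficients, it recomputes the combined reduced costs at the unchanged basic solution and verifies that none has become negative. All names and data layouts are assumptions.

    import numpy as np

    def coeff_change_keeps_basis_optimal(A, b, c_new, c0, d_new, d0, basis, tol=1e-9):
        """Recheck optimality of the current basis after the objective coefficients
        (numerator c and/or denominator d) have been modified.
        The basic solution B^{-1} b itself is unaffected by this change."""
        A, b, c_new, d_new = (np.asarray(v, dtype=float) for v in (A, b, c_new, d_new))
        Binv = np.linalg.inv(A[:, basis])
        xB = Binv @ b
        P = c_new[basis] @ xB + c0                  # new numerator value
        D = d_new[basis] @ xB + d0                  # new denominator value
        Abar = Binv @ A
        delta = D * (c_new[basis] @ Abar - c_new) - P * (d_new[basis] @ Abar - d_new)
        return bool(np.all(delta >= -tol))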

5.3. Changes in the Left-Hand-Side Values of the Constraints

Let the new column matrix contain the modified constraint coefficients of the corresponding variable. The optimal solution will remain optimal if and only if the optimality condition, evaluated with this new column, is still satisfied.

Example: changes in the Left-Hand-Side values of the constraints for one variable. In the optimal table, the updated column and the corresponding quantities are computed, and the optimal solution will remain optimal if and only if the resulting condition is satisfied.
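As a hedged illustration of this test: when the constraint column of a nonbasic variable is replaced by a new column, the basic solution does not change, so it suffices to recompute the updated column \(B^{-1}a'_j\) and the combined reduced cost of that single variable. The sketch assumes the modified variable is nonbasic (a change to a basic column would require re-solving), and the names are assumptions.

    import numpy as np

    def column_change_keeps_basis_optimal(A, b, c, c0, d, d0, basis, j, a_new, tol=1e-9):
        """Recheck optimality after the constraint column of the NONBASIC variable x_j
        is replaced by a_new (the basic solution is unchanged)."""
        A, b, c, d, a_new = (np.asarray(v, dtype=float) for v in (A, b, c, d, a_new))
        Binv = np.linalg.inv(A[:, basis])
        xB = Binv @ b
        P = c[basis] @ xB + c0
        D = d[basis] @ xB + d0
        zp = c[basis] @ (Binv @ a_new)              # z'_j  with the new column
        zq = d[basis] @ (Binv @ a_new)              # z''_j with the new column
        delta_j = D * (zp - c[j]) - P * (zq - d[j]) # combined reduced cost of x_j
        return bool(delta_j >= -tol)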

5.4. Adding a New Decision Variable

Let the new column matrix \(a_{n+1}\) contain the constraint coefficients of the new variable \(x_{n+1}\), with objective coefficients \(c_{n+1}\) in the numerator and \(d_{n+1}\) in the denominator.

For the numerator \(P\) we have \( z'_{n+1} = c_B^{T}B^{-1}a_{n+1} \) and \( \Delta'_{n+1} = z'_{n+1} - c_{n+1} \), and for the denominator \(D\) we have \( z''_{n+1} = d_B^{T}B^{-1}a_{n+1} \) and \( \Delta''_{n+1} = z''_{n+1} - d_{n+1} \). The optimal solution will remain optimal if and only if

\[ \Delta_{n+1} = D(x^{*})\,\Delta'_{n+1} - P(x^{*})\,\Delta''_{n+1} \ge 0; \]

else, if \( \Delta_{n+1} < 0, \)

the optimal solution will not remain optimal.
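This pricing-out test is easy to automate: compute the updated column \(B^{-1}a_{n+1}\) and the combined reduced cost of the candidate variable; a non-negative value means the current optimum is untouched (the new activity is not profitable), while a negative value means the simplex iterations must be resumed with the new column, as in Example 2 below. The sketch is a generic illustration with assumed names.

    import numpy as np

    def price_out_new_variable(A, b, c, c0, d, d0, basis, a_new, c_new, d_new, tol=1e-9):
        """Return (delta, still_optimal) for a candidate variable x_{n+1} with
        constraint column a_new and objective coefficients c_new (numerator)
        and d_new (denominator)."""
        A, b, c, d, a_new = (np.asarray(v, dtype=float) for v in (A, b, c, d, a_new))
        Binv = np.linalg.inv(A[:, basis])
        xB = Binv @ b
        P = c[basis] @ xB + c0
        D = d[basis] @ xB + d0
        zp = c[basis] @ (Binv @ a_new)              # z'_{n+1}
        zq = d[basis] @ (Binv @ a_new)              # z''_{n+1}
        delta = D * (zp - c_new) - P * (zq - d_new) # combined reduced cost
        return delta, bool(delta >= -tol)           # >= 0: not profitable, optimum kept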

Example 1: Adding a new decision variable with a given column of constraint coefficients and given objective coefficients. The optimal solution will remain optimal if and only if the reduced cost of the new column is non-negative.

By inserting the new decision variable, the problem becomes:

Subject to

By inserting the corresponding slack variables into the constraints, we will have:

Subject to

The optimal solution will remain optimal if and only if the reduced cost of the new column is non-negative.

For the numerator we compute the updated value and the associated cost, and for the denominator we compute the updated value and the associated cost; therefore the combined reduced cost of the new column is non-negative.

So this activity (that is, the addition of the new decision variable) is not profitable.

Example 2: Adding another new decision variable with a given column of constraint coefficients and given objective coefficients. The optimal solution will remain optimal if and only if the reduced cost of the new column is non-negative.

By inserting the new decision variable, the problem becomes:

Subject to

By inserting the corresponding slack variables into the constraints, we will have:

Subject to the enlarged set of constraints.

Computing the reduced cost of the new column in the optimal table gives a negative value, so the condition above is violated.

In the optimal table we therefore insert the new column and rewrite the table accordingly; the previous optimum is destabilized. Let us apply the simplex method again to find the new optimal basis, that is, the new optimal solution. After two iterations, we see that the table is optimal. The new optimal solution and the associated value of the objective function can then be read directly from this final table.

6. Conclusions

Shortcut methods were presented in this paper for performing sensitivity analysis of linear fractional programming models. Four different topics of sensitivity analysis, i.e., changes in the model parameters, were taken into account: changes in the objective function coefficients, changes in the Right-Hand-Side values of the constraints, changes in the Left-Hand-Side values of the constraints, and adding a new decision variable.

The analysis has suggested a few shortcut methods for performing sensitivity analysis that can be used in the operations research and quantitative methods textbooks taught in educational institutes. They are very straightforward and demand less time to apply compared with the methods currently used in leading books around the world.

This research is a significant contribution in the sense that it will assist management and business students at different universities in making correct decisions by using very short and easy-to-calculate methods for the sensitivity analysis of linear fractional programming problems.

The authors are planning to carry out further research analyzing a few well-known OR software packages to support the investigation of such issues.

References

[1]  E.B. Bajalinov, Linear Fractional Programming: Theory, Methods, Applications and Software, Kluwer Academic Publishers, 2003.
[2]  E.B. Bajalinov, A. Tangian, Adjusting objective function to a given optimal solution in linear and linear fractional programming, in: A. Tangian, J. Gruber (Eds.), Constructing and Applying Objective Functions, Lecture Notes in Economics and Mathematical Systems, vol. 510, Springer, 2001.
[3]  Arsham, H. (1992). Post optimality analysis of the transportation problem. The Journal of the Operational Research Society, 43(2), 121-139.
[4]  Anderson, D. R., Sweeney, D. J., Williams, T. A., & Wisniewski, M. (2009). An introduction to management science: quantitative approaches to decision making. South-Western CENGAGE Learning UK.
[5]  Bazaraa, M., & Jarvis, J. (1990). Linear Programming and Network Flows. New York: Wiley.
[6]  Baird, B. F. (1990). Managerial Decisions Under Uncertainty. An Introduction to the Analysis of Decision Making. New York, USA: Wiley.
[7]  Bradley, S., Hax, A., & Magnanti, T. (1977). Applied Mathematical Programming. Reading, MA: Addison-Wesley.
[8]  Bianchi, C., & Calzolari, G. (1981). A simulation approach to some dynamic properties of econometric models. In: Mathematical Programming and its Economic Applications, 607-21.
[9]  Clemson, B., Tang, Y., Pyne, J., & Unal, R. (1995). Efficient methods for sensitivity analysis. System Dynamics Review, 11(1), 31-49.
[10]  Dantzig, G. B. (1963). Linear Programming and Extensions. A report prepared for United States Air Force Project RAND. Retrieved from https://www.rand.org/content/dam/rand/pubs/reports/2007/R366part1.pdf.
[11]  Dantzig, G. B. (1978). Are dual variables prices? If not, how to make them more so. Technical report, Systems Optimization Laboratory, Department of Operations Research, Stanford University, USA.
[12]  Eschenbach, T. G. & McKeague, L. S. (1989). Exposition on using graphs for sensitivity analysis. The Engineering Economist, 34(4), 315-333.
[13]  Gal, T. (1979). Postoptimal Analysis, Parametric Programming, and Related Topics. New York, USA: McGraw-Hill.
[14]  Gal, T., & Greenberg, H. J. (1997). Advances in sensitivity analysis and parametric programming. Boston: Kluwer.
[15]  Gass, S. (1985). Linear Programming: Methods and Applications, 5th ed. New York: McGraw-Hill
[16]  Hamby, D. M. (1994). A review of techniques for parameter sensitivity analysis of environmental models. Environmental Monitoring and Assessment Journal, 32, 135-154.
[17]  Khan, U. I., Bajuri, H. N., & Jadoon, A. I. (2008). Optimal production planning for ICI Pakistan using linear programming and sensitivity analysis. International Journal of Business and Social Science, 2(23), 206-212.
[18]  Luenberger, D. (1984). Linear and Nonlinear Programming, 2d ed. Reading, Mass: Addison-Wesley.
[19]  Murty, K. (1983). Linear Programming. New York: Wiley.
[20]  Murty, K. (1995). Operations Research: Deterministic Optimization Models 1st Edition. Prentice Hall.
[21]  Taha, H. (2010). Operations Research: An Introduction, 9th Edition. USA.
[22]  Yang, B. H. (1990). A study on sensitivity analysis for a non-extreme optimal solution in linear programming (Ph.D. Thesis). Seoul National University, Republic of Korea.

Published with license by Science and Education Publishing, Copyright © 2020 Ladji Kané, Moussa Konaté, Moumouni Diallo and Lassina Diabaté

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/
