MULTIVARIABLE OPTIMIZATION WITH CONSTRAINTS


Department of Mathematics and Statistics




ABSTRACT:  

    In nonlinear programming, there are five methods of solving multivariable optimization problems with constraints.

    In this project, the usefulness of some of these methods (the Kuhn-Tucker conditions and the Lagrange multipliers) in quadratic programming is demonstrated.

    We also show how the other methods are used in solving constrained optimization problems, and all of this is supported with examples to aid understanding.

TABLE OF CONTENTS

Title Page                                        i

Approval page                                    ii

Dedication                                        iii

Acknowledgement                                iv

Abstract                                        v

Table of Contents                                vi

CHAPTER ONE

1.0    Introduction                                1

1.1    Basic definitions                                3

1.2    Layout of work                                6

CHAPTER TWO   

2.0    Introduction                                9

2.1    Lagrange Multiplier Method                        9

2.2    Kuhn-Tucker Conditions                        19

2.3    Sufficiency of the Kuhn-Tucker Conditions            24

2.4    Kuhn-Tucker Theorems                        30

2.5    Definitions – Maximum and minimum of a function        34

2.6    Summary                                    38

CHAPTER THREE

3.0    Introduction                                39

3.1    Newton-Raphson Method                        39

3.2    Penalty Function                            53

3.3    Method of Feasible Directions                57

3.4    Summary                                67

CHAPTER FOUR

4.0    Introduction                            68   

4.1    Definition – Quadratic Programming            69

4.2    General Quadratic Problems                    70

4.3    Methods                                75   

4.4    Ways of Obtaining the Optimal Solution from the Kuhn-Tucker Conditions        76

4.4.1    The Two-Phase Method                    76   

4.4.2    The Elimination Method                    77   

4.5    Summary                                117   

CHAPTER FIVE

Conclusion                                    118

References                                    120

CHAPTER ONE

1.0    INTRODUCTION

There are two types of optimization problems:

    Type 1

    Minimize or maximize        Z = f(x),    x ∈ Rⁿ            (1)

    Type 2

    Minimize or maximize        Z = f(x)                (2)

        subject to        gi(x) ~ bi,    i = 1, 2, ..., m    (3)

        where x ∈ Rⁿ

    and for each i, ~ can be either ≤, ≥ or =.

    Type 1 is called the unconstrained optimization problem.  It has an objective function without constraints.  The methods used in solving such problems are the direct search methods and the gradient method (the steepest ascent method).

    In this project, we shall be concerned with optimization problems with constraints.

    Type 2 is called the constrained optimization problem.  It has an objective function and constraints.  The constraints can be either equality (=) or inequality (≤, ≥) constraints.

    Moreover, in optimization problems with inequality constraints, the non-negativity conditions x ≥ 0 are part of the constraints.

    Also, at least one of the functions f(x) and gi(x) is nonlinear, and all the functions are continuously differentiable.
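
To make the Type 2 formulation concrete, here is a minimal Python sketch (an illustrative addition, not part of the original project; the objective and constraint are invented for the example) that solves a small constrained problem with SciPy, including the non-negativity conditions x ≥ 0:

    from scipy.optimize import minimize

    # Invented Type 2 example:
    # minimize Z = (x1 - 1)^2 + (x2 - 2)^2
    # subject to x1 + x2 <= 2 and x1, x2 >= 0.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    # SciPy expects inequality constraints in the form c(x) >= 0,
    # so g(x) <= b is rewritten as b - g(x) >= 0.
    cons = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]
    bounds = [(0, None), (0, None)]  # the non-negativity conditions x >= 0

    res = minimize(f, x0=[0.0, 0.0], bounds=bounds, constraints=cons)
    print(res.x)  # approximately [0.5, 1.5]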

    There are five methods of solving constrained multivariable optimization problems.  These are:

1.    The Lagrange multiplier method

2.    The Kuhn-Tucker conditions

3.    The gradient methods:

    a.    The Newton-Raphson method

    b.    The penalty function

4.    The method of feasible directions

The Lagrange multiplier method is used in solving optimization problems with equality constraints, while the Kuhn-Tucker conditions are used in solving optimization problems with inequality constraints.  Both play a major role in a type of constrained multivariable optimization called “quadratic programming”.
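
As a brief illustration of the Lagrange multiplier method (a sketch added for this write-up; the problem of minimizing x² + y² subject to x + y = 1 is invented, not taken from the project), the stationarity conditions can be formed and solved symbolically with SymPy:

    import sympy as sp

    x, y, lam = sp.symbols("x y lam", real=True)

    # Lagrangian for: minimize x^2 + y^2 subject to x + y = 1
    L = x**2 + y**2 - lam * (x + y - 1)

    # Stationarity: set all first partial derivatives of L to zero
    stationarity = [sp.diff(L, v) for v in (x, y, lam)]
    print(sp.solve(stationarity, (x, y, lam), dict=True))
    # [{x: 1/2, y: 1/2, lam: 1}]

The multiplier lam = 1 measures the rate at which the optimal value changes as the right-hand side of the constraint changes.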

The gradient methods are the Newton-Raphson method and the penalty function.  They are used in solving optimization problems with equality constraints, while the method of feasible directions is used in solving problems with inequality constraints.
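
The idea behind the penalty function can be sketched in a few lines of Python (an illustrative quadratic-penalty implementation on an invented example, not the project's own derivation): the equality constraint g(x) = 0 is folded into the objective as a term μ·g(x)², and μ is increased so that the unconstrained minima approach the constrained optimum:

    import numpy as np

    def quadratic_penalty(f_grad, g, g_grad, x0,
                          mus=(1.0, 10.0, 100.0, 1000.0), iters=2000):
        """Minimize f subject to g(x) = 0 via F(x) = f(x) + mu * g(x)**2."""
        x = np.asarray(x0, dtype=float)
        for mu in mus:
            step = 1.0 / (2.0 + 4.0 * mu)  # conservative step size for this example
            for _ in range(iters):
                grad = f_grad(x) + 2.0 * mu * g(x) * g_grad(x)
                x = x - step * grad
        return x

    # Invented example: minimize x1^2 + x2^2 subject to x1 + x2 = 1
    # (the exact constrained optimum is x = (0.5, 0.5))
    f_grad = lambda x: 2.0 * x
    g = lambda x: x[0] + x[1] - 1.0
    g_grad = lambda x: np.array([1.0, 1.0])
    print(quadratic_penalty(f_grad, g, g_grad, x0=[0.0, 0.0]))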

1.1    BASIC DEFINITIONS

1.    NEGATIVE DEFINITE:

The quadratic form xᵀRx is negative definite if (-1)^(i+1) Ri < 0, i = 1(1)m, where Ri is the i-th leading principal minor of R (see the example under positive definite below).

Using (-1)^(i+1) Ri < 0:

When i = 1:    (-1)² R1 < 0    →    R1 < 0

When i = 2:    (-1)³ R2 < 0    →    -R2 < 0    →    R2 > 0

When i = 3:    (-1)⁴ R3 < 0    →    R3 < 0

Hence R1 < 0, R2 > 0, R3 < 0, R4 > 0, ...

2.    NEGATIVE SEMI-DEFINITE

The quadratic form xᵀRx is negative semi-definite if (-1)^(i+1) Ri ≤ 0, i = 1(1)m, and at least one Ri = 0.

3.    POSITIVE DEFINITE

The quadratic form xᵀRx is positive definite if Ri > 0, i = 1(1)m.

        Example:

    R  =    r11    r12    r13    . . .    r1m
            r21    r22    r23    . . .    r2m
            r31    r32    r33    . . .    r3m
            .      .      .               .
            rm1    rm2    rm3    . . .    rmm

    where the leading principal minors are

        R1  =  |r11|  =  r11  >  0

        R2  =  |r11    r12|  >  0,    and so on.
               |r21    r22|

4.    POSITIVE SEMI DEFINITE

The quadratic form xᵀRx is positive semi-definite if Ri ≥ 0, i = 1(1)m, and at least one Ri = 0.
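
These definiteness tests can be checked numerically.  The sketch below (an illustrative Python/NumPy addition, not part of the project) computes the leading principal minors Ri and applies the strict tests from definitions 1 and 3; for the semi-definite cases the leading minors alone are not conclusive, so the sketch defers those:

    import numpy as np

    def leading_principal_minors(R):
        """Return [R1, ..., Rm]: determinants of the top-left i x i blocks of R."""
        R = np.asarray(R, dtype=float)
        return [np.linalg.det(R[:i, :i]) for i in range(1, R.shape[0] + 1)]

    def classify(R, tol=1e-9):
        """Classify the quadratic form x^T R x via its leading principal minors."""
        minors = leading_principal_minors(R)
        if all(m > tol for m in minors):
            return "positive definite"      # Ri > 0 for all i
        if all((-1) ** (i + 1) * m < -tol
               for i, m in enumerate(minors, start=1)):
            return "negative definite"      # signs alternate: R1 < 0, R2 > 0, ...
        return "semi-definite or indefinite (check the eigenvalues of R)"

    print(classify([[2, 0], [0, 3]]))    # positive definite
    print(classify([[-2, 0], [0, -3]]))  # negative definite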

5.    CONVEX

A function f is convex if its matrix R is positive definite or positive semi-definite.  For example, the quadratic form f(x) = xᵀRx with R positive definite is convex.

6.    CONCAVE

A function f is said to be concave if its negative is convex.  For example, if f(x) = xᵀRx is convex, then -f(x) is concave.

NOTE:

If the objective function is convex, its matrix R is positive definite or positive semi-definite; if it is concave, R is negative definite or negative semi-definite.  When the matrix is positive definite or positive semi-definite, the objective function should be minimized; when it is negative definite or negative semi-definite, it should be maximized.

1.2    LAYOUT OF WORK

    There are five chapters in this project.

Chapter two is dedicated to two methods of solving constrained optimization problems: the Lagrange multiplier method and the Kuhn-Tucker conditions.  It shows how the Kuhn-Tucker conditions are derived from the Lagrange multiplier method for an optimization problem with inequality constraints.  As part of this chapter, the global maximum, local maximum and global minimum of an optimization problem are also derived.

Chapter three presents the gradient methods and the method of feasible directions.  The gradient methods are the Newton-Raphson method and the penalty function.

The gradient methods are used in solving optimization problems with equality constraints while the method of feasible directions is used in solving optimization problems with inequality constraints.

Chapter four is specifically on a type of multivariable optimization with constraints called “quadratic programming”.  This chapter comprises quadratic forms and general quadratic problems, and it shows the importance of the Lagrange multiplier method and the Kuhn-Tucker conditions.  It explains how we can arrive at an optimal solution through two different methods after the Kuhn-Tucker conditions have been formed: the two-phase method and the elimination method.

Chapter five is the concluding part of this project.

Each chapter starts with an introduction that facilitates the understanding of the section and also contains useful examples.

In conclusion, this project explains the different methods of solving constrained optimization problems and shows how some of these methods are applied in quadratic programming.
