Lecture Notes - MTH 208


ADVANCED MATHEMATICS VIII

Contents
Theme 1: Solutions of Systems of Linear Equations
1.1 General Linear Systems
1.1.2 Reduction Process
1.1.3 Gaussian Elimination
1.2 Description of the Set of Solutions
1.2.1 Infinitely Many Solutions
1.2.2 Back Solving
Theme 2: Matrices
2.1 Matrix Representation of a Linear System
2.2 Use of the Augmented Matrix in Solving Systems of Linear Equations
2.3 Echelon Form
2.3.1 Reduced Echelon Form
2.4 Determinants
2.4.1 Determinant of an (n × n) Matrix
2.5 Elementary Operations and Determinants
2.6 Solution of Systems of Linear Equations by Cramer's Rule
2.7 Application of Determinants
2.8 Inverses of a Matrix
Theme 3: Numerical Methods
3.1 Determination of the Zeros of a Function by Iteration
3.2 Method of Chords
Theme 4: Optimization: Linear Programming
4.1 Geometric View
4.2 The Simplex Method
4.3 Duality of Linear Programming
4.3.1 Solving Dual Programs
4.4 Solving Minimum Programmes
Theme 5: Optimization - Lagrange Method
Theme 6: Calculus of Finite Differences

Theme 1: Solutions of Systems of linear equations.


In science, engineering and the social sciences, one of the most important and frequently occurring
mathematical problems is finding a simultaneous solution to a set of linear equations in several unknowns.
The equation x1 + 2x2 - x3 = 1 is an example of a linear equation, and x1 = 2, x2 = 1 and x3 = 3
is one solution of the equation.
In general, a linear equation in n unknowns has the form
a1x1 + a2x2 + ... + anxn = b     (1)
where the coefficients a1, a2, ..., an and the constant b are known and x1, x2, ..., xn denote the unknowns.
A solution to (1) is any sequence s1, s2… sn of numbers such that the substitution
x1 = s1, x2 = s2, …, xn = sn satisfies the equation.
Equation (1) is called linear because each term has degree one in the variables x1 , x2 ,..., xn .
Determine which of the following equations are linear.
(1) x1 + 2x1x2 + 3x3 = 4   (2) x1^(1/2) + 3x2 = 4   (3) 3x1^(-1) + sin x2 = 0   (4) 3x1 + x2 - x3 = 1

1.1 General Linear Systems


An (m × n) system of linear equations is a system of m linear equations in n unknowns:

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm     (2)


For example, the general form of a (3 × 3) system of linear equations is

a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3
A solution of system (2) is a sequence s1, s2, …, sn of numbers that is simultaneously a solution for
each equation in the system.
The double-subscript notation used for the coefficients is necessary to provide an "address" for
each coefficient. For example, a32 appears in the third equation as the coefficient of x2.
Example 1: Display the system of equations with coefficients a11 = 2, a12 = 1, a13 = -3,
a21 = -2, a22 = 2, a23 = 5 and with b1 = -1 and b2 = 3. Verify that x1 = 1, x2 = 0, x3 = 1 is a solution
for the system.
Solution: The system is: 2x1 + x2 - 3x3 = -1
-2x1 + 2x2 + 5x3 = 3
Substituting x1 = 1, x2 = 0 and x3 = 1 gives 2(1) + 0 - 3(1) = -1 and -2(1) + 2(0) + 5(1) = 3.
We shall see that two processes are involved in solving the general (m × n) system (2).
1. Reduction of the system (i.e. elimination of variables).
2. Description of the set of solutions.
We start with the first one and treat the second one later.

1.1.2 Reduction Process:


The aim of the reduction process is to simplify the given system by eliminating unknowns. It is very
important that the reduced system of equations has the same set of solutions as the original system. The
following theorem provides us with three operations, called elementary operations, which we may use in
the reduction procedure.

Definition: Two systems of linear equations are said to be equivalent if they have the same set of solutions.
Theorem 1: If one of the following elementary operations is applied to a system of linear equations, the
resulting system is equivalent to the original.
1. Interchange of two equations.
2. Multiplication of an equation by a non-zero scalar.
3. Addition of a constant multiple of one equation to another equation.
To explain the operation we are carrying out, we use the following notation:
Notation        Elementary Operation Performed
Ei ↔ Ej         The ith equation is interchanged with the jth equation
kEi             The ith equation is multiplied by the non-zero scalar k
Ei + kEj        Add k times the jth equation to the ith equation

Example 2: Use elementary operations to simplify the system

2x2 - x3 = -1
3x1 + 5x2 + 5x3 = 1
2x1 + 4x2 + 2x3 = 2
Solution:        2x2 - x3 = -1
                 3x1 + 5x2 + 5x3 = 1
                 2x1 + 4x2 + 2x3 = 2

E1 ↔ E3:         2x1 + 4x2 + 2x3 = 2
                 3x1 + 5x2 + 5x3 = 1
                 2x2 - x3 = -1

(1/2)E1:         x1 + 2x2 + x3 = 1
E2 - (3/2)E1:    -x2 + 2x3 = -2
                 2x2 - x3 = -1

                 x1 + 2x2 + x3 = 1
                 -x2 + 2x3 = -2
E3 + 2E2:        3x3 = -5
Note that Theorem 1 assures us that this last system of equations has the same solution set as the given
system.

1.1.3 Gaussian Elimination:


The simplest systematic procedure for eliminating variables is known as Gaussian elimination. We illustrate
this elimination process with the following examples.
Example 3: Use elementary row operations to solve the system
x1 - 2x2 + x3 = 2
2x1 + x2 - x3 = 1     (3)
3x1 - x2 + 2x3 = 5

Solution:        x1 - 2x2 + x3 = 2
E2 - 2E1:        5x2 - 3x3 = -3
E3 - 3E1:        5x2 - x3 = -1

                 x1 - 2x2 + x3 = 2
                 5x2 - 3x3 = -3     (4)
E3 - E2:         2x3 = 2
Solving the last equation for x3 yields x3 = 1. Substituting for x3 in the second equation and solving for x2,
we get x2 = 0. Finally, the first equation yields x1 = 1. Because the system (4) is equivalent to the original
system (3), we have that x1 = 1, x2 = 0, x3 = 1 is the unique solution of (3).
Remark: The goal of Gaussian elimination is to reduce the given system of equation to an equivalent
system in the “triangular” form of (4).

Gaussian Elimination Process: For the (m × n) system (2).

Step 1: If necessary, interchange the first equation with another so that x1 appears in the first equation.
Step 2: Eliminate x1 from every equation but the first by adding appropriate multiples of the first
equation.
Step 3: Temporarily ignoring the first equation, view the remaining equations as a system of
(m - 1) equations in the unknowns x2, x3, ..., xn. Repeat the procedure on this smaller system.
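The three steps can also be sketched in code. The following minimal Python sketch is an illustration added here (it is not part of the original notes); it carries out the forward elimination for a square system and leaves the triangular form that is then back solved, as in Example 3 above.

# Illustrative sketch of Steps 1-3 (forward elimination) for a square system.
def gaussian_eliminate(A, b):
    """Reduce the system Ax = b to triangular form."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # Step 1: if necessary, interchange equations so x_k appears in equation k
        if A[k][k] == 0:
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    b[k], b[i] = b[i], b[k]
                    break
        # Step 2: eliminate x_k from every equation below the kth
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
        # Step 3: repeat on the remaining smaller system (handled by the loop on k)
    return A, b

# The system of Example 3 above reduces to the triangular form (4).
A, b = gaussian_eliminate([[1, -2, 1], [2, 1, -1], [3, -1, 2]], [2, 1, 5])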

Example 4: Use Gaussian elimination to solve


2 x1  x3  3
2 x1  2 x2  x3  4
x1  x2  x3  1
Solution: E1  E3 : x1  x2  x3  1
 2 x1  2 x2  x3  4
2 x2  x3  3
x1  x2  x3  1
E2  2 E1 : 3 x3  6
2 x2  x3  3

x1  x2  x3  1
E2  E3 : 2 x2  x3  3
3x3  6

1 1
Solving, we get x3  2, x2  , x1  
2 2
Example 5: Use Gaussian elimination to show that the (3 × 3) system of linear equations

x1 - 2x2 + x3 = 3
-x1 + 4x2 = -1     (5)
2x1 + 4x3 = 12
has no solution.

Solution:        x1 - 2x2 + x3 = 3
E2 + E1:         2x2 + x3 = 2
E3 - 2E1:        4x2 + 2x3 = 6

                 x1 - 2x2 + x3 = 3
                 2x2 + x3 = 2
E3 - 2E2:        0x2 + 0x3 = 2     (6)
Clearly, there are no values of x1, x2 and x3 that satisfy the third equation of the system (6), so both (5)
and (6) are inconsistent.
Exercises (1) Which of the following equations is linear?
(a) x1  2 x2  3 (b) x1 x2  x2  1 (c) x1  7 x2  3
2) In the following numbers, display the system and verify that the given values constitute a solution.
(a) a11,  6, a12  1, a22  3, a21  1, a22  2, a23  4, b1  14, b2  4; x1  2, x2  1, x3  1
(b) a11,  1, a12  3, a22  4, a22  1, b1  7, b2  7; b2  2, x1  1, x2  2

(c) a11,  1, a12  1, a21  3, a22  4, a31  1, a32  2, b1  0, b2  1; b3  3;

x1  1, x2  1
3) Using the Gaussian elimination technique, solve the following systems of equations.
(a) x1  2 x2  5 (b) x1  2 x2  4
2 x1  x2  5 2 x1  6 x2  12
(c) x1  2 x2  4 x3  6 (d) x1  x2  x3  1
x1  2 x3  2 x1  x2  x3  1
x1  3x2  7 x3  10 2 x1  3x3  8
(4) Use Gaussian elimination procedure to verify that the following systems do not have solutions
(a) x1  2 x2  3 (b) 2x1  3x2  4 (c) x1  2 x2  x3  3
 2 x1  2 x2  4 6 x1  9 x  4 x1  x2  x3  1
 2 x1  4 x2  2 x3  4
(5) Find all values of a for which the given system has no solution.
(a) x1  2 x2  5 (b) x1  2 x2  3 (c) x1  3x2  4
2 x1  ax2  4 ax1  2 x2  5 2 x1  6 x2  a

1.2 Description of the set of solutions


We have seen how to reduce a system of equations to a simpler but equivalent system. Here, we consider
the process of describing the solution set for the system.
We begin by considering the possible outcomes when solving the (2 × 2) system
a11x1 + a12x2 = b1
a21x1 + a22x2 = b2
Geometrically, solutions to each equation are represented by lines. A simultaneous solution
corresponds to a point of intersection. Thus there are three possibilities.
1. The two lines are coincident (i.e. the same line), so there are infinitely many solutions.
2. The two lines are parallel, so there are no solutions.
3. The two lines intersect at a single point, so there is a unique solution.

Example
Give the geometric representation for each of the following systems of equations.
(a) x1 + x2 = 2        (b) x1 + x2 = 2        (c) x1 + x2 = 3
    2x1 + 2x2 = 4          x1 + x2 = 1            x1 - x2 = 1
[Figure: graphs of the three systems in the (x1, x2)-plane.]

(a) Coincident lines: infinitely many solutions. (b) Parallel lines: no solution. (c) Intersecting lines: a unique solution.

Remark: A (mn) system of linear equations has infinitely many solutions, no solution or unique
solution.

1.2.1 Infinitely Many Solutions:


To describe infinitely many solutions, consider, for example, the (2 × 3) system
x1 - x3 = 1
x2 + x3 = -1     (7)
Note that attempts to simplify the system by eliminating a variable actually result in no further
simplification.
For example, suppose x3 is eliminated with the operation E1 + E2. We have
x1 + x2 = 0
x2 + x3 = -1
Now eliminate x2 from the second equation with the operation E2 - E1 to get
x1 + x2 = 0
-x1 + x3 = -1
From this, we see that no progress has been made towards describing the solution. Actually, (7) as given
is already satisfactorily reduced according to the Gaussian elimination procedure. We are unable to
eliminate another variable because the system has infinitely many solutions, and at least one free
parameter (unconstrained or independent variable) is required in order to describe the solutions.
For system (7),   x1 = 1 + x3
                  x2 = -1 - x3     (8)
In this case, x3 is a free parameter and specific solutions to the system are obtained by assigning values to
x3. For example, setting x3 = 1 yields the solution
x1 = 2, x2 = -2, x3 = 1.

1.2.2 Back Solving:


More generally, we wish to describe the set of solutions for an (m × n) system of the following form:
c11x1 + c12x2 + ... + c1mxm + ... + c1nxn = d1
       c22x2 + ... + c2mxm + ... + c2nxn = d2
       ...
                     cmmxm + ... + cmnxn = dm     (9)
The system is already reduced according to the Gaussian elimination procedure. If the coefficients
c11, c22, ..., cmm are all non-zero, then we can solve system (9) relatively easily.
In the special case when m = n, the solution will be unique. The process of obtaining a solution to
(9), which is called back solving, proceeds in the obvious fashion. That is, if we allow x_{m+1}, ..., xn to be free
parameters, then xm is determined from the last equation of (9) by

xm = (dm - c_{m,m+1} x_{m+1} - ... - cmn xn) / cmm

This value of xm is then inserted in the (m - 1)st equation of system (9) to determine x_{m-1}.
In general, the ith equation is used to express xi in terms of the free parameters x_{m+1}, ..., xn.
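For the square case (m = n) with non-zero diagonal coefficients, back solving can be written out directly. The short Python sketch below is an illustration added here (not part of the original notes).

# Illustrative back substitution for an upper triangular square system Cx = d.
def back_solve(C, d):
    n = len(C)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # express x_i in terms of the already-computed x_{i+1}, ..., x_n
        s = sum(C[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / C[i][i]
    return x

# The triangular system (4) of Example 3: gives x1 = 1, x2 = 0, x3 = 1.
print(back_solve([[1, -2, 1], [0, 5, -3], [0, 0, 2]], [2, -3, 2]))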
Example 2: Describe the solution set for the (3 × 5) system
2x1 + x2 - x3 - x4 + 2x5 = 3
     x2 - 2x3 + x4 + x5 = -1
          x3 + 2x4 - x5 = 2
Solution: Solving the third equation for x3, we get x3 = 2 - 2x4 + x5. Substituting for x3 in the second
equation and solving for x2 yields x2 = 3 - 5x4 + x5.
Finally, we substitute for x2 and x3 in the first equation and solve for x1 to get x1 = 1 + 2x4 - x5.
Thus the solution is   x1 = 1 + 2x4 - x5
                       x2 = 3 - 5x4 + x5
                       x3 = 2 - 2x4 + x5
where x4 and x5 are independent (free) variables.
For example, x4 = 0, x5 = 1, x1 = 0, x2 = 4, x3 = 3 is a solution. Also, x4 = -1, x5 = 0, x1 = -1, x2 = 8, x3 = 4 is
a solution.
Example 3 Solve x1  2 x2  x3  2
2 x2  x2  3 x3  1
(10)
3 x1  x2  2 x3  1
Solution: Gaussian elimination proceeds as follows:
x1  2 x2  x3  2
E2  2 E1 : 5 x2  5 x3  5
E3  3E1 :  5 x2  5 x3  5

x1  2 x2  x3  2
E3  E2 : 5 x2  5 x3  5
0 x3  0 (10a)
Since the last equation (10a) is satisfied by any value of x3, we may as well delete it. Thus the given
system is equivalent to the (23) system.
x1  2 x2  x3  2

5 x2  5 x3  5 (10b)
Back solving (10b), we obtain the solution: x1   x3 , x2  1  x3
where x3 is a free parameter. For instance, x3 = 0, x2 = 1, x1 = 0 is one of the solutions and x3 = 4, x2 = -3,
x1 = -4 is another such solution, etc.
In general, (mn) linear system given in system (2), no restrictions are placed on the relative sizes of m
and n.
Hence, there may be more equations than unknowns (m > n), more unknowns than equations
(m < n), or equal number of equations and unknowns (m = n).
The following is an example of (3  4) rectangular system.
Example 4: Solve x2  x3  x4  0
x1  x2  3x3  x4  2
x1  x2  x0  x4  2 (11a )
Solution: E1  E2 : x1  x2  3x3  x4  2
x2  x3  x4  0
x1  x2  x3  x4  2

x1  x2  3x3  x4  2
E3  E1 : x2  x3  x4  0
2 x2  2 x3  2 x4  4

E3  2 E2 : x1  x2  3x3  x4  0
x2  x3  x4  0
 4 x3  4 x4  4 (11b)
Back solving results in: x1 = 2 - 2x4, x2 = 1, x3 = -1 + x4     (11c)
where x4 is an independent variable.
Note: The system (11b) could just as easily have been back-solved by expressing x1, x2 and x4 in terms of
x3. In this case, the solution would be described by x1 = -2x3, x2 = 1, x4 = 1 + x3     (11d)
Note also that the variables in (11a) could have been arranged so that Gaussian elimination resulted in
solving for x3 and x4 in terms of x1.
For this particular example, we will always get x2 = 1. So it would not have been possible to solve for x1,
x3 and x4 in terms of x2.
What is invariant about these solutions is that x2 = 1, and that any two of the remaining variables may be
expressed in terms of the third.
Hence, there is one independent variable and three dependent variables in any case, and we can choose
whichever form of the solution best suits our purposes. Consider the next example.

Example 5: Exhibit three solutions of system (11a) given in Example 4 such that each of the solutions
satisfies one of the following constraints: (a) x4 = 3 (b) x3 = 0 (c) x1 = 2.
Solution: The solution given in (11c) designates x4 as the free variable. Setting x4 = 3 in (11c) gives
x1 = -4, x2 = 1, x3 = 2, x4 = 3 as the solution.
The solution in (11d) designates x3 as the independent variable. With x3 = 0 in (11d) we get
x1 = 0, x2 = 1, x3 = 0, x4 = 1.
Finally, if system (11b) is back-solved by expressing x2, x3 and x4 in terms of x1, the solution to system
(11a) is described by x2 = 1, x3 = -(1/2)x1, x4 = 1 - (1/2)x1     (11e)
Setting x1 = 2 in (11e) we get x1 = 2, x2 = 1, x3 = -1, x4 = 0.

Remark: We have seen examples of linear systems that have a unique solution and examples in which
there are infinitely many solutions. The only remaining possibility is the case where the system may be
inconsistent. The next examples illustrate this case.

Example 6 Solve x1  2 x2  x3  2
2 x1  x2  3x3  2
 3x1  x2  2 x3  1.1 (12a)
Solution: x1  2 x2  x3  2
E2  2 E1 5 x2  5 x3  5
E3  3E1  5 x2  5 x3  4.9
x1  2 x2  x3  2
5 x2  5 x3  5
E3  E2 0.x3  0.1
Because there is no number x3 such that 0x3 = 0.1 system (12b) is inconsistent. Because (12a) and (12b)
are equivalent, system (12a) has no solution. Consider next example.

Example 7: x1 - x2 = 3;  2x1 + x2 = 6;  x1 + x2 = 1

                 x1 - x2 = 3
E2 - 2E1:        3x2 = 0
E3 - E1:         2x2 = -2

This system is clearly inconsistent, for the third equation requires x2 = -1 whereas the second equation
requires x2 = 0.
Exercise (1) Determine whether the system has a unique solution, no solution, or infinitely many
solutions by sketching a graph for each equation.
(a) 2x  y  5 (b) 2 x  y  1 (c) 3x  2 y  6 (d ) 2 x  y  5
x  y  1 2x  y  2  6  4 y  12 x  3y  2

2) Back solve the reduced system and describe the solution set by using the procedure illustrated in
(a) x1  x2  x3  4 (b) x1  x2  x3  1
x2  x3  7 2 x2  6 x3  2

(c ) x1  x2  3x3  x4  3 (d ) x1  2 x2  3x3  x4  0
x2  x4  1 x2  x3  x4  5
x3  x4  1
3) In each of parts (a) – (c) below, exhibit a solution set for the system in Exercise 2 above that satisfies
the given constraints. (a) x3  0 (b) x3  1 (c) x3  1
(4) a  Solve 3 x1  x2  x3  x4  x5  6
x2  x3 0
x3  x4  x5  1
1. In each of part (a) – (d) below, exhibit a solution for the system in (4a) above that satisfies the
given constraints.
(i) x4  0, x3  0 (ii) x4  1, x5  1 (iii) x4  0, x5  1 (iv) x4  1, x5  0
5) Solve the systems below or state that the system is inconsistent.
(a) x1  2 x2  3 (b) x1  x2  2
2 x1  4 x2  1 3 x1  3 x2  6

(c ) x1  x2  x3  2 (d ) x1  x2  x3  1
 3 x1  3 x2  3 x2  6 2 x1  x2  7 x3  8
 x1  x2  5 x3  5

Theme 2: Matrices (Matrix)


Definition: An (mn) matrix is a rectangular array of objects, (usually numbers) of the form.
 a11 a12 a1n 
a a22 a2 n 
 21 (1)
 
 
 am1 am 2 amn 
Thus an (mn) matrix has a rows and n columns. The subscripts for the entry aij indicate that the number
appears in the ith row and jth column of A. For example a32 is the entry in the third row and second
column of A. We will frequently use the notation A = (aij) to denote a matrix A with entries aij.

Example 1: Display the (2 × 3) matrix A = (aij) where a11 = 6, a12 = 3, a13 = 7, a21 = 2, a22 = 1 and a23 = 4.
Solution:
A = [ 6  3  7
      2  1  4 ]

2.1 Matrix Representation of a Linear System


To illustrate the use of matrices to represent linear system, consider the (33) system of equations.
x1  2 x2  x3  4
2 x1  x1  x3  1 (2)
x1  x2  x1  0
If we display the coefficients and the constants for the system (2) in matrix we have
1  12 4 
B 
 2 1 1 1 
In this way, we have expressed compactly and naturally all the essential information from (2). The matrix
B is called the augmented matrix for (2).
In general, for the (m × n) system of linear equations
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm     (3a)
the (m × n) matrix A = (aij) as given in (1) is called the coefficient matrix for the system (3a), and the
(m × (n + 1)) matrix
B = [ a11  a12  ...  a1n  b1
      a21  a22  ...  a2n  b2
      ...
      am1  am2  ...  amn  bm ]
is called the augmented matrix for (3a). The matrix B is usually denoted by [A | b], where A is the
coefficient matrix of (3a) and
b = [ b1
      b2
      ...
      bm ]
Example 2: Display the coefficient matrix A and the augmented matrix B for the system
x1 - 2x2 + x3 = 2
2x1 + x2 - x3 = 1
3x1 - x2 + 2x3 = 5
Solution:
A = [ 1  -2   1        B = [ 1  -2   1  2
      2   1  -1              2   1  -1  1
      3  -1   2 ]            3  -1   2  5 ]

2.2 Use of Augmented Matrix in Solving System of Linear Equations


Consider this example.
Example 1 Solve x1  2 x2  x3  3x4  2
 x1  x2  3x3  2 x4  1
2 x1  7 x2  x3  9 x4  8
3x1  3x2  2 x3  4 x4  6

1 2 1 3 2  1 2 1 3 2 
 1 R2  R1 :
 1 3 2 1  0
 3 2 1 1 
Augmented Matrix:  R3  2 R1 :
2 7 1 9 8  0 3 1 3 4 
  R4  3R1 :  
3 3 2 4 6  0 3 1 5 12 

1 2 1 3 2  1 2 1 3 2 
R3  R2 :  0 3 2 1 1   0 3 2 1 1 
 R4  3R3 : 
R4  R2 :  0 0 1 2 3  0 0 1 2 3 
   
0 0 3 4 11 0 0 0 2 2 
Back solving yields the solution x1  8, x2  4, x3  5, x4  1
Definition
The following operations, performed on the rows of a matrix, are called elementary row operations.
1. Interchange of two rows.
2. Multiplication of a row by a non-zero scalar.
3. Addition of a multiple of one row to another row.
As before, we adopt the following notation:
Notation        Elementary Row Operation
Ri ↔ Rj         The ith and jth rows are interchanged.
kRi             The ith row is multiplied by a non-zero scalar k.
Ri + kRj        Add k times the jth row to the ith row.

So we can solve a linear system by forming the augmented matrix B for the system and then
using elementary row operations to transform the augmented matrix into a row-equivalent matrix C, which
represents a simpler system.
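For illustration, the three elementary row operations can be carried out directly on an augmented matrix stored as an array. The sketch below is an addition to these notes (it assumes NumPy is available) and uses a generic matrix rather than one of the examples.

# Illustrative elementary row operations on a generic 3 x 4 augmented matrix [A | b].
import numpy as np

B = np.array([[0., 2., 1., 4.],
              [1., 1., 3., 2.],
              [2., 1., 1., 5.]])

B[[0, 1]] = B[[1, 0]]      # R1 <-> R2 : interchange two rows
B[2] = B[2] - 2 * B[0]     # R3 - 2R1  : add a multiple of one row to another
B[1] = 0.5 * B[1]          # (1/2)R2   : multiply a row by a non-zero scalar
print(B)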
Example 2 Solve: x2  x3  x4  0
x2  x2  3x3  x4  2
x1  x2  x3  x4  2
Solution: The augmented matrix for this system is
 0 1 1 1 0  1 1 3 1 2 
1 1 3 1 2   R  R :  0 1 1 1 0 
  1 2  
1 1 1 1  1 1 1 1 
 2   2 
1 1 3 1 2  1 1 3 1 2 

R3  R1 :  0 1 1 1 0   R3  2 R2 : 0 1 1 1 0 
 
0 1 2 2 4   0 0 4 4 4 
  
This is the augmented matrix for the linear system below:
x1  x2  3 x3  x4  2

x2  x3  x4  0

 4 x3  4 x4  4
The solution is found by back solving.

2.3 Echelon form


Definition: An (mn) matrix C is in Echelon form if
1. All rows that consist entirely of zeros are grouped together at the bottom of the matrix.
2. The first (counting from left to right) non-zero entry in the (i + 1)st row must appear in a column to
the right of the first non-zero entry in the ith row.
Example 3Determine which of the following matrices are in echelon form:
2 1 6 4 2  1 5 0 2 3 3 1 4 6 2 1 6 0 3 4 
0 1 0 2 4  0 0 6 1 
1 0 1 3 6  0 0 2 0 4 3 
A B C  D
0 0 3 1 5  0 0 0 2 4 0 2 8 15  0 0 0 0 1 1 
       
0 0 0 1 4  0 0 0 0 0 0 3 7 19  0 0 0 0 0 1 
0 5 3 1
2 6 1 5 
E 
0 0 2 1
 
0 0 0 0 
From the above definition and example, we see that the goal of the Gaussian elimination process is to
reduce an augmented matrix to echelon form.
Reduction to Echelon Form (for an (m × n) matrix)
Step 1: Locate the first (leftmost) column that contains a non-zero entry.
Step 2: If necessary, interchange the first row with another row so that the first non-zero column contains
a non-zero entry in the first row.
Step 3: Add appropriate multiples of the first row to each of the succeeding rows so that every entry of the
first non-zero column below the first row is zero.
Step 4: Temporarily ignore the first row of this matrix and repeat the process on the remaining rows.

Example 4Use the elementary row operations to find a matrix C such that C is in echelon form and is row
0 0 0 1 3 5 
 
0 1 2 1 2 2 
equivalent to B   0 2 4 5 7 14 
 
0 3 6 4 7 7 
0 9 
 0 0 2 4
0 1 2 1 2 2  0 1 2 1 2 2 
   
0 0 0 1 3 5 
R3  2 R1 :
0 0 0 1 3 5 
Solution: R1  R2  0 2 4 5 7 14   0 0 0 3 11 18 
  R  3 R1 
: 
4 1
4
0 3 6 7 7  0 0 0 1 1 
0  0 9 
 0 0 2 4 9   0 0 2 4
0 1 2 1 2 2  0 1 2 1 2 2 0 1 2 1 2 2
R3  3R2 :  0 0 0 1 3 5 
 
0 0 0 1 3 5
 
0 0 0 1 3 5

R4  2 R3 :
R4  R2 :  0 0 0 0 2 3  0 0 0 0 2 3  R5  R4 0 0 0 0 2 4  C
  R5  R3 :    
R5  2 R2 : 0 0 0 0 4 6  0 0 0 0 0 0 0 0 0 0 0 2
0 2 1 0 2  0 0 
 0 0 0  0 0 0 0  0 0 0 0

The matrix C is the echelon form and row equivalent to the matrix B.

2.3.1 Reduced Echelon form


Definition
A matrix C that is in Echelon form is in reduced echelon form provided that the first non-zero element in
each non-zero row is the only non-zero entry in its column.
For example:

2 0 0 1  1 2 0 1 1
 0 3 0 1 D   0 0 3 2 2 
 
0 0 1 2  0 0 0 0 0 
   
are in reduced echelon form.
We now give example to illustrate how to solve a system by transforming the augmented matrix to a row
equivalent matrix in reduced echelon form.
Example 5 Solve the system x1  x2  x3  4 x4  4
2 x1  3x2  4 x3  9 x4  16
2 x1  3x3  7 x4  11
by transforming the augmented matrix to a reduce echelon form.
1 1 1 4 4 

Solution: The augmented matrix is 2 3 4 9 16 

 2 0 3 7 11 
 
1 1 1 4 4  1 0 1 3 4  1 0 0 2 1
R2  2 R1 :   R1  R2 :    R1  R3 : 0 1 0 3 2 
0 1 2 1 8  0 1 2 1 8
R3  2 R1 :   R  2R : 
 2 
 R  2R :

 
 0 0 1 1 3 
 0 2 5 1 19 
3
 0 0 1  1 3 
2 3
 

This last matrix is in reduced echelon form and the given system is equivalent to the system.
x1  2 x4  1
x2  3x4  2
x3  x4  3
This system is easily solved in terms of x4 without back solving.
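As a cross-check of the reduced echelon form idea, the sketch below (an illustration added to these notes; it assumes SymPy is installed) row-reduces the augmented matrix of Example 3 from Theme 1, whose unique solution is x1 = 1, x2 = 0, x3 = 1.

# Illustrative reduced echelon form via SymPy's rref().
from sympy import Matrix

B = Matrix([[1, -2, 1, 2],
            [2, 1, -1, 1],
            [3, -1, 2, 5]])
R, pivots = B.rref()
print(R)   # the last column of the reduced echelon form gives the solution (1, 0, 1)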
Exercises (1) Display the coefficient matrix A and the augmented matrix B for the following:
(a) x1  x2  2 x3  6 (b) x1  x2  x3  1

3x1  4 x2  x3  5 2 x1  3x2  x3  2

 x1  x2  x3  2 x1  x2  3x3  2
2) Either state that the matrix is in echelon form or use elementary row operation to transform it to
echelon form:
1 3 2 1  1 2 1 2   1 4 3 4 6 
(a)  0 1 4 2  (b)  0 2 2 3  (c)  0 2 1 3 3 
0 0 1 1  0 0 0 1  0 0 0 1 2 
    
3) Transform the echelon form of 2(a), (b), (c) into a reduced echelon form.
4) Solve the given system by transforming the augmented matrix to a reduced echelon form:
(a) x1  2 x2  0 (b) 2 x1  x2  3 x3  0 (c) x1  2 x2  4 x3  3 x4  7
2 x1  5 x2  1  2 x1  3x3  4 x1  2 x2  6 x3  6 x4  10
x1  x2  3 6 x1  x2  14 2 x1  4 x2  6 x3  6 x4  13

2.4 Determinants
 
Definition: Let A = (aij) be a (2 × 2) matrix. The determinant of A is given by
det(A) = a11 a22 - a12 a21
For notational purposes, the determinant is often expressed by using vertical bars:
det(A) = | a11  a12 |
         | a21  a22 |
Example 1: Find the determinants of the following matrices:
A = [ 1  2      B = [ 4  1      C = [ 3  4
     -1  3 ]          2  1 ]          6  8 ]
Solution:
det(A) = (1)(3) - (-1)(2) = 5
det(B) = (4)(1) - (2)(1) = 2
det(C) = (3)(8) - (6)(4) = 0

2.4.1 Determinant of (nn) Matrix


 
Definition: Let A = (aij) be an (n × n) matrix and let Mrc denote the [(n - 1) × (n - 1)] matrix obtained by
deleting the rth row and the cth column from A. Then Mrc is called a minor matrix of A and the number
det(Mrc) is the minor of the (r, c)th entry arc. In addition, the numbers
Aij = (-1)^(i+j) det(Mij)
are called cofactors (or signed minors).

Example 2: Determine the minor matrices M11, M23, M32 for the matrix A given by
A = [ 1  -1   2
      2   3  -3
      4   5   1 ]
Also, calculate the cofactors A11, A23 and A32.

Solution: Deleting row 1 and column 1 we obtain M11:
M11 = [ 3  -3      Similarly,  M23 = [ 1  -1      M32 = [ 1   2
        5   1 ]                        4   5 ]            2  -3 ]
The associated cofactors Aij = (-1)^(i+j) det(Mij) are given by
A11 = (-1)^(1+1) (3 + 15) = 18,   A23 = (-1)^(2+3) (5 + 4) = -9,   A32 = (-1)^(3+2) (-3 - 4) = 7
 
Definition: Let A = (aij) be an (n × n) matrix. Then the determinant of A is
det(A) = a11 A11 + a12 A12 + ... + a1n A1n
where A1j is the cofactor of a1j, 1 ≤ j ≤ n.
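The cofactor-expansion definition translates directly into a (deliberately naive) recursive computation. The sketch below is an illustration added to these notes and uses the matrix A of Example 2 above.

# Illustrative recursive determinant via cofactor expansion along the first row.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor matrix M_1j: delete the first row and the (j+1)th column
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        cofactor = (-1) ** j * det(M)      # equals (-1)^(1+j) in 1-based indexing
        total += A[0][j] * cofactor
    return total

print(det([[1, -1, 2], [2, 3, -3], [4, 5, 1]]))   # matrix A of Example 2 above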

3 2 1 
Example 3Compute det (A) where A  2 1 3
 
 
4 0 1 
 
1 3 2 3 2 1
Solution: det A  a11 A11  a12 A12  a 13 A13  3 2 1  3 1  2 14   1 4   29
0 1 4 1 4 0
 1 2 0 2
 1 2 3 1 
Example 4 Compute det (A) where A   
 3 2 1 0 
 
 2 3 2 1 
Solution: det  A   a11 A11  a12 A12  a13 A13  a14 A14  A11 2 A12  2 A14
The required cofactors A11, A12 and A14 are calculated as ain example 1 above.
2 3 1
1 0 2 0 2 1
A11  2 1 0  2 3 1  15
2 1 3 1 3 2
3 2 1

1 3 1
1 0 3 0 3 1
A12  3 1 0    1 3 1  18
2 1 2 1 2 2
2 2 1

1 2 3
2 1 3 2 3 2
A14  3 2 1    1 2 3  6
3 2 2 2 2 3
2 3 2
Thus, it follows that det  A   A11 2 A12  2 A14  15  36  12  63
3 0 0 0
1 2 0 0 
Example 5 Compute the determinant of the lower triangle matrix T where T  
2 3 2 0
 
1 4 5 1
Solution: We have det(T) = t11 T11 + t12 T12 + t13 T13 + t14 T14, where Tij denotes the cofactor.
Since t12 = t13 = t14 = 0,
det(T) = t11 T11 = 3 | 2  0  0 | = 3(2) | 2  0 | = 3(2)(2)(1) = 12
                     | 3  2  0 |        | 5  1 |
                     | 4  5  1 |
Here we see that the determinant of the lower triangular matrix T is the product of the diagonal entries,
i.e. det(T) = t11 t22 t33 t44.
This simple relationship is valid for any lower triangular matrix.

Theorem: Let T = (tij) be an (n × n) lower triangular matrix. Then
det(T) = t11 t22 t33 ... tnn

Theorem: If the (n × n) matrix A is non-singular, then det(A) ≠ 0. Moreover,
det(A^(-1)) = 1 / det(A)
Exercise: Find det (A) where A are matrices defined below:
 2 1 1 2 2 0 2 0
 1 2 1 1 4 0 3 0 0 1  1 2 
1. A   0 1 3    3 1
2. A   1 0 2  3. A   4. A  
 2 1 1 3 1 2 2 1 2 0 0 1 2 1
       
3 1 1 2  0 3 1 4 

2.5 Elementary Operations and Determinants


We now see how certain column operations simplify the calculation of determinants. We shall use three
elementary column operations, which are analogous to the elementary row operations already discussed.
These are:
1. Interchange of two columns of A.
2. Multiplication of a column of A by a scalar c, c ≠ 0.
3. Addition of a scalar multiple of one column of A to another column of A.
Let us describe how the determinant of a matrix A changes when an elementary column operation is
applied to A.

Theorem 1: Let A = [A1, A2, ..., An] be an (n × n) matrix. If B is obtained from A by interchanging two
columns (or rows) of A, then det(B) = -det(A).
Example 6: Verify the above theorem for a (2 × 2) matrix.

Verification: Let B denote the matrix obtained by interchanging the first and second columns of A. Thus
B is given by
B = [ a12  a11
      a22  a21 ]
Now det(B) = a12 a21 - a11 a22 and det(A) = a11 a22 - a12 a21.
Thus det(B) = -det(A).
Theorem 2: If A is an (n × n) matrix and if B is the (n × n) matrix resulting from multiplying the kth column
(or row) of A by a scalar c, then det(B) = c det(A).
Example 7: Verify the above theorem for the (2 × 2) matrix
A = [ a11  a12
      a21  a22 ]
Verification: Consider the matrices A' and A'' given by
A' = [ ca11  a12        A'' = [ a11  ca12
       ca21  a22 ]              a21  ca22 ]
Clearly det(A') = ca11 a22 - ca21 a12 = c(a11 a22 - a21 a12) = c det(A). Similarly, det(A'') = ca11 a22 - ca21 a12 = c det(A).
Note that multiplying a column by c = 0 is not an elementary operation; nevertheless, if a column of a matrix A
consists of zeros, then det(A) = 0.
Corollary 3: Let A be an (n × n) matrix and let c be a scalar. Then det(cA) = c^n det(A).
Proof: Exercise
1 2
Example 8 Find det (3A) where A   
4 1
 3 6
Solution: 3A    Thus det (3A) = 9 – 72 = –63  32 det A  9 x  7  63.
12 3 
Theorem 4: If A, B and C are (n × n) matrices that are equal except that the kth column (or row) of A is
equal to the sum of the kth columns (or rows) of B and C, then
det(A) = det(B) + det(C)
Example 9: Verify Theorem 4 where A, B and C are (2 × 2) matrices such that the first column of A is equal to
the sum of the first columns of B and C.
Thus (writing the common second column as u, v):
B = [ b1  u      C = [ c1  u      A = [ b1 + c1  u
      b2  v ]          c2  v ]          b2 + c2  v ]
det(A) = (b1 + c1)v - (b2 + c2)u = (b1 v - b2 u) + (c1 v - c2 u)
       = det(B) + det(C)
Example 10: Given that det(B) = 22 and det(C) = 29, find det(A), where
A = [ 1  3  2      B = [ 1  1  2      C = [ 1  2  2
      0  4  7            0  2  7            0  2  7
      2  1  8 ]          2  0  8 ]          2  1  8 ]
Solution: A1 = B1 = C1 and A3 = B3 = C3, while A2 = B2 + C2.
Thus det(A) = det(B) + det(C) = 22 + 29 = 51.
Theorem 5: Let A be an (nn) matrix. If the jth column (or row) of A is a multiple of the
kth column (or row) of A, then det (A) = 0.

Theorem 6: If A is an (n × n) matrix, and if a multiple of the kth column (or row) is added to the jth
column (or row), then the determinant is not changed.

Example 11 Use elementary column operations to simplify the matrix A and find det (A)
1 2 0 2
 1 2 3 1 
A
 3 2 1 0
 
 2 3 2 1

1 2 0 2 1 1 0 0 0
1 2 3 1 1 1 3 1 1 4 3 3
Solution: det  A    
3 2 1 0 3 2 1 3 3 3 1 6
2 3 2 1 0 1 2 1 2 7 2 3
4 3 3
Thus it follows that det (A) is given by det A  8 1 6
7 2 3
We wish to create zeros in the (1, 2) and (1, 3) positions of this (3, 3) determinant. To avoid using
fraction, we multiply the second and third columns by 4 and then add a multiple of -3 times column 1 to 2
and 3.
4 3 3 4 12 12 4 0 0
1 1
det  A  8 1 6  8 4 24  8 28 0
16 16
7 2 3 7 8 12 7 13 9
Thus we find that det (A) = -63.

Example 12 Use column operations to find det (A) where


0 1 3 1
 1 2 2 2 
A 
 3 4 2 2 
 
 4 3 1 1 
Solution: As in Gaussian elimination, column interchanges are sometimes desirable and serve to keep
order in the computation. Consider
0 1 3 1 1 0 3 1
1 2 2 2 2 1 2 2
det  A   
3 4 2 2 4 3 2 2
4 3 1 1 3 4 1 1
We use column 1 to introduce zeros along the first row.
1 0 0 0
1 4 4
2 1 4 4
det  A    3 10 5
4 3 10 6
4 10 2
3 4 10 2
Again, column 1 can be used to introduce zeros
1 0 0
22 18 22 1
det  A   3 22 18    18  72
26 18 26 
4 26 18
Exercises: (A) Use elementary column operation to create zeros in the last two entries in the first row
and then calculate the determinant of the original matrix
 1 2 1  2 4 2   2 2 4  0 1 2
(1)  2 0 1 (2)  0 2 3  (3)  1 0 1 
    (4)  3 1 2
 
 1 1 1 1 1 2   2 1 2  2 0 3
       
B) Use only column interchanges to produce a triangular matrix and then determine the
determinant of the original matrix.
1 0 0 0 0 0 2 0 0 1 0 0
2 0 0 33  0 0 1 3  0 2 0 3 
(1)  (2)  (3) 
1 1 0 1 0 4 1 3 2 1 0 6
     
1 4 2 2 2 1 5 6 3 2 2 4
C) Use elementary column operations to create zeros in the (1,2), (1,3), (1,4), (2,3), (2,4) positions.
Then calculate the original determinant.

1 2 0 3 2 4 2 2 1 1 2 1
2 5 1 1 1 3 1 2 0 1 4 1
(1) (2) (3)
2 0 4 3 1 3 1 3 2 1 3 0
0 1 6 2 1 2 1 2 2 2 1 2

2.6 Solution to System of Linear equations by Cramer’s rule


NON-SINGULAR MATRIX
Definition: An (nn) matrix A is non singular if the only solution to AX = C i.e. X = 0. Furthermore, A is
said to be singular if A is not non singular.
Theorem: The (nn) matrix  A1 , A2 ,..., An  is non singular if and only if  A1 , A2 ,..., An 
is a linearly independent set.
1 3  1 2 
Example 1Determine whether each of the matrices A   , B 
 2 2  2 4
is singular or non singular.
Solution: The augmented matrix [A: 0] for the system AX = 0 is now equivalent to
1 3 : 0 
0 4 : 0 
 
so the trivial solutions x1 = 0, x2 = 0 (or x = 0) is the unique solution. Thus A is non-singular.
 2 
On the other hand, B is singular because the vector X    is a non trivial solution of BX = 0.
 
1
Equivalently, the columns of B are linearly independent because 2B  B2  0 .

2.7 Application of Determinants


1) Solving AX = b with CRAMER’S RULE.
A major result in determinant theory is Cramer’s rule, which gives a formula for the solution of any
system AX = b when A is non singular.
Theorem 2 [CRAMER'S RULE]
Let A = [A1, A2, ..., An] be a non-singular (n × n) matrix and let b be any vector in R^n. For each i, 1 ≤ i ≤ n,
let Bi be the matrix obtained from A by replacing its ith column by b:
Bi = [A1, A2, ..., A_{i-1}, b, A_{i+1}, ..., An]
Then the ith component xi of the solution of Ax = b is given by
xi = det(Bi) / det(A)     (*)
APPLICATION
Example 2: Use Cramer's rule to solve the system:
3x1 + 2x2 = 4
5x1 + 4x2 = 6
Solution: To solve this system by Cramer's rule, we write the system as Ax = b and form B1 = [b, A2]
and B2 = [A1, b]:
A = [ 3  2      B1 = [ 4  2      B2 = [ 3  4
      5  4 ]           6  4 ]           5  6 ]
det(A) = 2, det(B1) = 4, det(B2) = -2.
From (*), x1 = 4/2 = 2 and x2 = -2/2 = -1.
Example 3: Use Cramer's rule to solve the system:
x1 - x2 - x3 = 0
x1 + x2 + 2x3 = 1
x1 + 2x2 - x3 = 6
Solution: Writing the system as Ax = b, we have
A = [ 1  -1  -1     B1 = [ 0  -1  -1     B2 = [ 1  0  -1     B3 = [ 1  -1  0
      1   1   2           1   1   2            1  1   2            1   1  1
      1   2  -1 ]         6   2  -1 ]          1  6  -1 ]          1   2  6 ]

Calculating the determinants, we get det(A) = -9, det(B1) = -9, det(B2) = -18, det(B3) = 9.

Then by (*) we have x1 = -9/-9 = 1; x2 = -18/-9 = 2; x3 = 9/-9 = -1.
Exercises: Use Cramer’s rule to solve the given system
(1) x1  x2  3 (2) x1  3x2  4
x1  x2  1 x1  x2  0
(3) x1  x2  x3  2 (4) x1  x2  x3  x4
x1  2 x2  x3  2 x2  x3  x4  1
x1  2 x2  x3  4 x3  x4  0
x3  2 x4  3

2.8 Inverses of Matrix:


We know that det(AB) = det(A) det(B) and that det(A) = det(A^T).
We have noted that an (n × n) matrix A is said to be non-singular if the only solution to Ax = 0 is x = 0.
Furthermore, A is said to be singular if A is not non-singular.
Adjoint of a Matrix: Let A be an (n × n) matrix and let C = (Cij) denote the (n × n) matrix of cofactors.
The adjoint matrix of A, denoted by Adj(A), is equal to C^T.
 1 1 2 

Example Find the Adj(A) if A  2 1 3

 
4 1 1 
 
The nine required cofactors of A are
A11  4, A12  14, A13  2 , A21  3, A22  7 , A23  9,
A31  1 , A32  7, A33  3
 1 14 2   4 3 1
 
So, C  3 7 5 . t he Adj (A) = C T  14
 7 7 
  
1 7   2
5 3 
 3  
Theorem: If A is an (n × n) non-singular matrix, then
A^(-1) = (1 / det(A)) Adj(A)
1 2 
Example: Find the adjoint matrix for the given matrix A and then the inverse the A. (a) A   
3 4 
Cofactors of A are A11 = 4, A12 = -3A21 = -2, A22 = 1
 4 3  4 2 
So, C     Adj  A  C T   
 2 1   3 1 
1 1  4 2 
A1   adj  A  . But det (A) = 4 – 6 = –2. So, A1 
2  3 1 
Then
det  A 
1 0 1
(b) A   2 1 2 
1 1 2 
Solution: Cofactors of A are
A11  0, A12  2, A13  1, A21  1, A22  1, A23  1, A31  1, A32  0, A33  1
 0 2 1   0 1 1
So,
C 
 1 1 
1  Adj A  C  
T
 2 1 0 
, det  A   1.
 1   1 1 1 
 0 1   
 0 1 1  0 1 1
1
0   0 
1
So, A 1
 Adj  A   2 1   2 1 
det  A 1   1
 1  1 1   1 1 

Exercise:s Find the adjoint matrix of the given matrix A; hence find the inverse of the matrix
2 1 0 1 1 1 1 2 3
A   3 1  A  1 2  A   0 2 
. (1) 0 (2) 2 (3) 1
0 1  1 1  0 1 
 1  3  0

Theme 3: Numerical Methods.


Numerical Solution of Algebraic and Transcendental Equations
A polynomial equation such as
an x^n + a_{n-1} x^(n-1) + ... + a0 = 0,
or an equation f(x) = 0 where f(x) is reducible to a polynomial expression, is called an algebraic equation.
If f(x) = 0 is not reducible to an algebraic equation, we say that the equation is transcendental.
Note that x is a zero or root of the function f(x) if x satisfies f(x) = 0. Note also that a root of f(x) = 0 is
a point at which the graph of f(x) crosses the x-axis.
Alternatively, if we can write f(x) = g(x) - h(x),
then a root of f(x) = 0 is a point where the graphs of g(x) and h(x) meet.

[Figure: the roots (zeros) of f(x) are the points where its graph crosses the x-axis; equivalently, the points where the graphs y = g(x) and y = h(x) intersect.]
Example
Use the graphical method to find the zeros of the function
f(x) = x^4 - 2x - 1
Solution
Let g(x) = x^4 and h(x) = 2x + 1, so that f(x) = g(x) - h(x). The zeros of f are the x-values at which the
graphs of g and h intersect.

3.1 Determination of the Zeros of Function by Iteration


1) Method of Tangents (Newton-Raphson Process)
We seek solutions of equations of the general form f(x) = 0. From our graphical work,
we know that the solutions of f(x) = 0 occur where the graph of f(x) crosses the x-axis.
We take this as our approach, and consider the graph of y = f(x) as shown below.

[Figure: the tangent at P1(x1, f(x1)) meets the x-axis at A2(x2, 0), the tangent at P2(x2, f(x2)) meets it at A3(x3, 0), and so on, approaching the root x0.]

Our method is to guess an approximate solution A1, where x = x1, which is reasonably close to the
point x0 representing the exact solution. We construct the tangent to y = f(x) at the point P1(x1, f(x1)).
From the diagram, it is clear that x2 is a better approximation to the solution than x1.

We then repeat the process to find A3 (x3, 0) which is yet closer to x0. Clearly, we may repeat this
process as many times as we like to achieve the necessary accuracy.
Considering the figure above, from triangle A1A2P1,
tan(P1Â2A1) = P1A1 / A1A2 = f(x1) / (x1 - x2)
But tan(P1Â2A1) is the slope of the curve at P1, so tan(P1Â2A1) = f'(x1). Hence
x2 = x1 - f(x1) / f'(x1)
In the same way, x3 = x2 - f(x2) / f'(x2). Generally,
xn = x_{n-1} - f(x_{n-1}) / f'(x_{n-1})
This is known as the Newton-Raphson process for iterative solution. It is important to realize that the
first guess of a solution has to be reasonably close. It is also important to be aware of any discontinuities
within the graph when making the first guess.
Consider the diagram

A1
0
x
x0

We see that a first guess at A1 will not give better approximation to x0 which is on the other side of the
discontinuity.
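The Newton-Raphson iteration above can be written as a short routine. The Python sketch below is an illustration added to these notes; the stopping rule mirrors the "successive estimates agree to the required accuracy" criterion used in the examples that follow.

# Illustrative Newton-Raphson iteration x_n = x_{n-1} - f(x_{n-1}) / f'(x_{n-1}).
def newton(f, fprime, x, tol=1e-3, max_iter=20):
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:      # successive estimates agree to the tolerance
            return x_new
        x = x_new
    return x

# Example 1 below: f(x) = x^2 - 27 with first guess x1 = 5 gives about 5.196.
print(newton(lambda x: x**2 - 27, lambda x: 2*x, 5.0))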

Example 1: Find an approximate solution of the equation x^2 - 27 = 0, taking x1 = 5 as a first guess.

Solution: We let f(x) = x^2 - 27, f'(x) = 2x, x1 = 5.
Hence f(x1) = -2 and f'(x1) = 10. Thus x2 = x1 - f(x1)/f'(x1) = 5 - (-2/10) = 5.2.
f(x2) = (5.2)^2 - 27 = 0.04 and f'(x2) = f'(5.2) = 10.4, so x3 = 5.2 - (0.04/10.4) = 5.196 to 3 decimal places.
Repeating the process with x3 = 5.196, we have f(x3) = (5.196)^2 - 27 = -0.002 and f'(x3) = 10.392,
so x4 = 5.196 - (-0.002/10.392) = 5.196 to 3 d.p.
Since x3 and x4 are the same to an accuracy of 3 d.p., there is no need to continue further. So we have found a
reasonable approximation to the solution.

Example 2: Solve x - cos x = 0, giving the answer correct to 3 decimal places, using x1 = 0.7 as the first guess
(angle in radians).
Solution: Let f(x) = x - cos x, f'(x) = 1 + sin x, x1 = 0.7. Then f(x1) = f(0.7) = 0.7 - cos(0.7) = -0.065 and
f'(x1) = f'(0.7) = 1.644, so x2 = x1 - f(x1)/f'(x1) = 0.7 - (-0.065/1.644) = 0.739.
Repeating, f(x2) = f(0.739) = 0.739 - cos(0.739) = -0.0001 and f'(x2) = 1 + sin(0.739) = 1.674,
so x3 = 0.739 - (-0.0001/1.674) = 0.739 to 3 d.p. Hence x = 0.739 to 3 decimal places.
Example 3: Solve f(x) = 8x^3 - 24x^2 + 24x - 1.02 = 0 using x1 = 0 as a first guess.
Solution: We iterate xn = x_{n-1} - f(x_{n-1}) / f'(x_{n-1}), n = 2, 3, 4, ..., with f'(x) = 24x^2 - 48x + 24.
The successive values are tabulated below.
f  xn1 
n xn1 f  xn 1  f   xn 1  f   xn1 
2 3 1 – 1.02 24 0.0425
3 0 – 0425 –0.0.4274 22.00344 0.001742
4 0.34444 –0.000138 21.91428 0.00006
5 0.44446 … … …

Exercises
1. Find correct to 3 decimal places, the root of x3  9 x  1  0 which is near to x = 3.
2. Solve 2 sin x = x by first sketching the two graphs y = 2 sin x and y = x to obtain a
reasonable guess for your first estimate. Find the solution correct to 3 decimal places.
3. Solve the equation using the Newton Raphson method with a first estimate of x = 0.05. Give
your answer accurate to 3 decimal places (x is in radians).
4. Find, correct to 3 decimal places, the solution to t 4  t  3  0 near to t  1.4 .

3.2 Method of Chord


If x0 is a root of f(x) = 0, then the first step is to determine two points x1 and x2 on either side of x0.
We must ensure that the interval (x1, x2) contains only one root of the equation f(x) = 0.
The interval (x1, x2) is called a separation interval for the root x0. One must ensure that the separation
interval is as small as possible. Let (x1, x2) be a separation interval for the root, as shown below.

[Figure: the chord joining P1(x1, f(x1)) and P2(x2, f(x2)) crosses the x-axis at x3; the chord joining P1 and P3 crosses it at x4, closer still to the root x0.]

Considering the above graph, we see that the chord joining P1 and P2 crosses the x-axis at a point x3
which is closer to x0 than either x2 or x1.
To compute x3, we note from similar triangles that
(x3 - x2) / (-f(x2)) = (x1 - x3) / f(x1)
From this we get
x3 = (x2 f(x1) - x1 f(x2)) / (f(x1) - f(x2))
To compute the next approximation, we note also that the chord P1P3 crosses the x-axis at x4, which is
much closer to x0 than x3. Thus we calculate
x4 = (x3 f(x1) - x1 f(x3)) / (f(x1) - f(x3))
Continuing the above process, we have
xn = (x_{n-1} f(x1) - x1 f(x_{n-1})) / (f(x1) - f(x_{n-1}))
Subtracting x1 from both sides and simplifying gives the equivalent form
xn = x1 - (x1 - x_{n-1}) f(x1) / (f(x1) - f(x_{n-1})),   n = 3, 4, ....
Preferably, this formula is used in the form
x_{n+1} = x0 - f(x0)(x0 - xn) / (f(x0) - f(xn)),   n = 1, 2, 3, ....
keeping x0 fixed and replacing xn by the new approximation at each step.
Example 1: Find a root of the equation 8x^3 - 24x^2 + 24x - 1.02 = 0.
Solution: First find a separation interval. We note that
x0 = 0, f(x0) = -1.02, x1 = 1, f(x1) = 6.98.
Thus the separation interval (0, 1) contains one or three roots. If it contains three roots, then there must be
a maximum and a minimum in this interval.
f'(x) = 24x^2 - 48x + 24 = 0, or x^2 - 2x + 1 = 0, i.e. (x - 1)^2 = 0.
So x = 1 is the only turning point of f(x).

Hence there can be only one zero in (0, 1). Thus we set up a table as follows:
n    xn        f(xn)    xn - x0    f(xn) - f(x0)    x_{n+1} = x0 - f(x0)(xn - x0)/(f(xn) - f(x0))
1 1.0000 6.980 1.000 8.000 0.1275
2 0.1275 1.667 0.1275 2.687 0.0484
3 0.0484 0.086 0.0484 1.106 0.0446
4 0.0446 0.004 0.0446 1.024 0.0444
5 0.0444 -0.0021 0.0444 1.019 0.0444

Example 2: Solve x^3 - 3x^2 + 8/3 = 0.
Solution: Let f(x) = x^3 - 3x^2 + 8/3. Then f'(x) = 3x^2 - 6x = 0 at a turning point, so
3x(x - 2) = 0, i.e. x = 0 or x = 2. Also f''(x) = 6x - 6. At x = 0, f''(x) = -6 < 0,
so x = 0 is a local maximum.
At x = 2, f''(x) = 6 > 0, so x = 2 is a local minimum. f(0) = 8/3 and f(2) = -4/3.
[Figure: graph of y = f(x) crossing the x-axis at three points P1, P2 and P3, with a local maximum at x = 0 and a local minimum at x = 2.]

So its graph crosses the x-axis at three points. We look for separation intervals for each of these points.
Thus, one root is at P1 in the separation interval (-1, 0), or better still (-1, -1/2). Another root is at P2 in the
separation interval (0, 2). The third root is at P3 in the separation interval (2, 3). For the root at P2, the separation
interval is (0, 2).
Set x0 = 0, f(x0) = f(0) = 2.667, x1 = 2, f(x1) = f(2) = -1.333.
n    xn        f(xn)    xn - x0    f(xn) - f(x0)    x_{n+1} = x0 - f(x0)(xn - x0)/(f(xn) - f(x0))
1 2.000 -1.333 2.000 -4.000 1.3335
2
3

Theme 4: Optimization: Linear programming.


A linear programming problem requires that a linear function

H = c1x1 + c2x2 + ... + cnxn,
which is called the objective function, be minimized or maximized subject to constraints of the form
ai1x1 + ... + ainxn ≤ bi
xj ≥ 0
where i = 1, 2, ..., m and j = 1, ..., n.
Comments: 1) The objective function and all the constraints are linear in the variables x, e.g.
H = 2x1 + 5x2 + 10.
2) We stated the constraints in the form Ax ≤ b.
Note that the alternative forms, viz.
(i) 2x + 5y + 3z ≥ 25 or (ii) x + y = 3, may be reduced to the above,
that is, all constraints may be reduced to the ≤ notation:
(i) 2x + 5y + 3z ≥ 25 is equivalent to -2x - 5y - 3z ≤ -25,
i.e. multiplying the inequality (≥) by (-1) reverses the sign of the inequality.
(ii) x + y = 3 is equivalent to x + y ≤ 3 and x + y ≥ 3, i.e. x + y ≤ 3 and -x - y ≤ -3.
3) The constraints xj ≥ 0:
there are also many problems where negative values of some x's are meaningful.
If a variable x can have both positive and negative values, we say that x is free, or that we do not
impose the restriction x ≥ 0.
Feasible Point: A point (x1, x2, ..., xn) is called feasible if its coordinates satisfy all m + n constraints.

How Do We Solve L.P. Problems

4.1 Geometric View


When only two variables are involved, it is convenient to interpret the entire problem geometrically. We
need to obtain the point(s) among the infinite number of feasible points which yield the optimum values.
The simplex method, as we shall see, shows that a solution to the problem lies not merely on the boundary,
but at an extreme point of the constraint set.
This reduces the problem to a search over a finite number of extreme points. At worst, we could use
exhaustive enumeration of these points.
Example A
1) Find x1 and x2 satisfying the inequalities x1 ≥ 0, x2 ≥ 0, -x1 + 2x2 ≤ 2, x1 + x2 ≤ 4, x1 ≤ 3, and such that the
function F = x2 - x1 is maximized.
Solution:
[Figure: the feasible region in the (x1, x2)-plane bounded by the lines -x1 + 2x2 = 2, x1 + x2 = 4 and x1 = 3, with extreme points (0, 0), (0, 1), (2, 2), (3, 1) and (3, 0).]

Note that the extreme points are (0, 0), (0, 1), (2, 2), (3, 1) and (3, 0).
F = x2 – x1
At (0,0) F=0–0=0
At (0, 1) F=1–0=1
At (2, 2) F=2–2=0
At (3, 1) F = 1 – 3 = -2

At (3, 0) F = 0 – 3 = -3
Thus the maximum is F = 1 and it occurs at (0, 1).
Example A
2) With the same inequality constraints as in Example 1 above, find (x1, x2) such that G = 2x1 + x2 is
maximized.
Solution: G = 2x1 + x2
At (0, 0):  G = 2(0) + 0 = 0
At (0, 1):  G = 2(0) + 1 = 1
At (2, 2):  G = 2(2) + 2 = 6
At (3, 1):  G = 2(3) + 1 = 7
At (3, 0):  G = 2(3) + 0 = 6
Thus the maximum is 7 and it occurs at the point (3, 1), i.e. when x1 = 3, x2 = 1.
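The same answer can be checked numerically. The sketch below is an illustration added to these notes (it assumes SciPy is available); since linprog minimizes, we minimize -G.

# Illustrative check of Example A(2) with scipy.optimize.linprog.
from scipy.optimize import linprog

res = linprog(c=[-2, -1],             # minimize -(2*x1 + x2)
              A_ub=[[-1, 2],          # -x1 + 2x2 <= 2
                    [1, 1],           #  x1 +  x2 <= 4
                    [1, 0]],          #  x1       <= 3
              b_ub=[2, 4, 3],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                # optimal point (3, 1), maximum G = 7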

Exercises
1) Find the optimal values of x1 and x2 by the graphical method that maximizes z = 3x1 – 2x2
3x1  x2  10
subject to x1  2 x2  15
x1  0,  x2
2) Find the optimal values, of x and y by the graphical method that maximize F = 2x + 3y
3 x  2 y  2
x  y  2
subject to
x y 8
3 x  y  12
3) Find the optimal values of x and y by the graphical methods, that
(a) maximize z = x + 3y
5 x  10 y  20
subject to x  4 y  4
x, y  0
(b) Use the same feasibility region and minimize the objective function z = -5x + 2y
4) Use graphical method to find x and y that
(a) Maximize z = 5x + 2y
Subject to : 2x – 5y  -10
2x + y  -6
x  0, y  0.
(b) Use the same feasibility region and minimize the objective function
z = -5x + 4y

4.2 The Simplex method :


We illustrate this using examples.
Example (1): Find x1 and x2, by the simplex method, satisfying the inequalities
x1 ≥ 0, x2 ≥ 0, -x1 + 2x2 ≤ 2, x1 + x2 ≤ 4, x1 ≤ 3,
and such that the function H = x2 - x1 is maximized.
Solution: Objective function H = x2 - x1.
We start by introducing slack variables into the constraints:
-x1 + 2x2 + s1 = 2
x1 + x2 + s2 = 4
x1 + s3 = 3
Since the origin is an extreme feasible point, we may choose
x1 = x2 = 0, s1 = 2, s2 = 4 and s3 = 3 to start.

We display it in a tabular form.


x1 x2 s1 s2 s3 b
-1 2 1 0 0 2
T0: 1 1 0 1 0 4
1 0 0 0 1 3
-1 1 0 0 0 0
Obj. Fun. H
Decision variable Slack variables RHS

From the objective function row, we note that 1 is the only positive number present. This determines the
pivot column.
In this column we have two positive entries, but noting that 2/2 = 1 is less than 4/1 = 4, we make the 2 a pivot point
to get T1.

T1 is obtained by first making the pivot to be unity and then eliminating every other entry in its column.
Thus:
x1 x2 s1 s2 s3 b
1 1 1 0 0 1
2 2
T1: 3 0 1 1 0 3
2 2
1 0 0 0 1 3
1 0 1 0 0 -1
Obj. Fun. H 2 2
Decision variable Slack variables RHS

Note that no entry in the objective function row is positive. This shows that the maximum of H = x2 – x1 is
equal to 1 i.e. negative of (-1), which appears in the right hand side of the objective function row.
Note also that this agrees with our graphical method i.e.
x1  0, x2  1, H 1
Example 2 Solve Example A No. 2 by simplex method.
Solution: Slack variables and constraints are the same as in No. above. We want to
maximize G = 2x1 +x2.
x1 x2 s1 s2 s3 b
-1 2 1 0 0 2
T0: 1 1 0 1 0 4
1 0 0 0 1 3
2 1 0 0 0 0
Obj. Fun. H
Decision variable Slack variables RHS
Note that both 2 and 1 are positive in objective function row. So we have a choice.

3 4
Selecting 2, we make the circled 1 a pivot since is less than . We get T1 by the process of elimination.
1 1
x1 x2 s1 s2 s3 b
0 2 1 0 1 5
T0: 0 1 0 1 -1 1
1 0 0 0 1 3
0 1 0 0 -2 0
Obj. Fun. H
Decision variable Slack variables RHS

Now, we have no choice, the new pivot is the circle 1. We then get T2 as shown below.
x1 x2 s1 s2 s3 b
0 0 1 -2 3 3
T1: 0 1 0 1 -1 1
1 0 0 0 1 1
0 1 0 -1 -1 -7
Obj. Fun. H
Decision variable Slack variables RHS

No entry in the objective function row is positive, so we stop. This means that the maximum of G is the
negative of (-7), i.e. 7. So G has a maximum of 7 and it occurs when x1 = 3, x2 = 1, as before.
3) Solve by simplex method: maximize 2x + 3y
subject to 3x  2 y  2,  x  y  2, x  y  8, 3x  y  12, x  y  0
Solution: We have 3x  2 y  s1  2
 x  y  s2  2
x  y  y  s3  8
3x  y  s4  12
x, y, s1 , s2 , s3 , s4  0.

x Y s1 s2 s3 S4 b
-3 2 1 0 0 0 2
-1 1 0 1 0 0 2
T0
1 1 0 0 1 0 8
3 -1 0 0 0 1 12
2 3 0 0 0 0 0
Obj. Fun.
x y s1 s2 s3 s4 b
0 1 1 0 0 1 14
0 2 0 4 0 1 6
3 3
T1 0 4 0 0 1 1 4
3 3
1 1 0 0 0 1 4
3 3
0 11 0 0 0 2 -8
Obj. Fun. 3 3
x y s1 s2 s3 s4 b
0 0 1 0 3 5 11
4 4
T2
0 0 0 0 1 1 4
2 2

1 1 0 0 3 1 3
4 4
3 -1 0 0 1 1 5
4 4
0 0 0 0 11 1 -19
Obj. Fun.
4 4
0 0 1 5 1 0 1
2 2
0 0 0 2 -1 1 8
T3 0 1 0 1 1 0 5
2 2
1 0 0 1 1 0 3
2 2
0 0 0 1 5 0 -21
Obj. Fun.
2 2

Optimal point x = 3, y = 5, s1 = 1, s2 = s3 = 0, s4 = 8. Obj. function value = 21.


We note that we can also solve an L.P. problem by entering the objective function z with negated coefficients and
then solving as shown below. The maximum value is then read directly from the table.
Example 4: Use the simplex method to solve the following linear programme:
maximize z = 18x1 + 20x2 + 32x3

subject to: x1 + 2x2 + 2x3 ≤ 22, 3x1 + 2x2 + 4x3 ≤ 40, 3x1 + x2 + 2x3 ≤ 14, x1, x2, x3 ≥ 0


Solution: x1  2 x2  2 x3  s1  22
3x1  2 x2  4 x3  s2  40
3x1  x2  2 x3  s3  14
x1 x2 X3 s1 s2 s3 b
1 2 2 1 0 0 22
T0: 3 2 4 0 1 0 40
3 4 2 0 0 1 14
Obj. Fun. -18 -20 -32 0 0 0 0

Note that because we multiplied the objective function by -1, we look for negative entries in the objective
function row. Here we have a choice since there are three negative entries. Suppose we choose -32; then the
circled 2 becomes the pivot, determined as before. We then proceed to get T1.

x1 x2 X3 s1 s2 s3 b
-2 1 0 1 0 -1 8
-3 0 0 0 1 -2 12
T1:
3 1 1 0 0 1 7
2 2 2
Obj. Fun. 30 -4 0 0 0 16 224

Since there is still a negative entry in the objective function row, we make it a pivot column, thereby
choosing the circled 1 a pivot. So we get T2.
x1 x2 X3 s1 s2 s3 b
-2 1 0 1 0 -1 8
-3 0 0 0 1 -2 12
T2:
5 0 1 1 0 1 3
2 2

Obj. Fun. 22 0 0 4 0 12 256

There are no negative entries in the objective function row. So we stop, as this is the final simplex table.
So the maximum value of z is 256 and it occurs when x1 = 0, x2 = 8 and x3 = 3. Note that the maximum is
read off directly as it is.
Exercises Solve the following linear programming problems using the simplex method.
1) Maximize z = 2x1 + 3x2
   subject to 4x1 + 2x2 ≤ 9, x1 + 3x2 ≤ 7, x1, x2 ≥ 0.
2) Maximize z = 6x1 + 3x2 + x3
   subject to 4x1 + 5x2 + 2x3 ≤ 11, x1 + 3x2 + x3 ≤ 7, 3x1 + x2 + 4x3 ≤ 8, x1, x2, x3 ≥ 0.
3) Maximize z = 2x1 + 5x2 + 4x3
   subject to 3x1 + 2x2 + 4x3 ≤ 12, 5x1 + 3x2 + 8x3 ≤ 16, x1 + 6x2 + 2x3 ≤ 4, x1, x2, x3 ≥ 0.
4) Maximize z = 2x2 + 4x3
   subject to 3x1 + 4x2 ≤ 7, -2x1 + 6x3 ≤ 5, 4x1 + 7x2 + 2x3 ≤ 13, x1, x2, x3 ≥ 0.

4.3 Duality of Linear Programming


Definition: We introduce a new set of non-negative variables w1, w2, ..., wn, called dual variables, one for each constraint, and we let W = (w1, w2, ..., wn).
Then we define the dual program of the maximum program
    maximize z = CX
    subject to: AX ≤ B, X ≥ 0
to be the following minimum program:
    minimize U = WB
    subject to: A^T W^T ≥ C^T, W ≥ 0.
Note that the constraints come from the transpose A^T of the matrix A. Recall that the transpose is obtained by interchanging the rows and columns.
Note also that the roles of C and B have been reversed, and the constraint inequality is now "≥" instead of "≤".
Example 1 Find the dual of the maximum program
    maximize z = 4x1 + 7x2
    subject to: x1 + 5x2 ≤ 8
                2x1 + 3x2 ≤ 9
                x1, x2 ≥ 0
Solution: We introduce W = (w1, w2), one variable per constraint.
The objective function is U = WB = 8w1 + 9w2.
We adjoin the constraints A^T W^T ≥ C^T and the non-negativity requirement on the variables, and we obtain the dual program:
    minimize U = 8w1 + 9w2
    subject to: w1 + 2w2 ≥ 4
                5w1 + 3w2 ≥ 7
                w1, w2 ≥ 0
The original program is called the primal program. We now state the primal-dual connections.
Primal-Dual Connection
1) The constraint constants of one program are the objective function coefficients of the other program.
2) The constraint matrix is replaced by its transpose. Thus, if one program has m constraints and n variables, the other program will have n constraints and m variables.
3) Maximization is performed subject to ≤ constraints and minimization is performed subject to ≥ constraints. (A small sketch of this mechanical construction is given below.)
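The construction is purely mechanical: transpose the constraint matrix and swap the roles of B and C. A minimal sketch in Python (the helper name dual_data is an illustrative choice, not from the notes):

    import numpy as np

    def dual_data(A, B, C):
        # primal: maximize CX subject to AX <= B, X >= 0
        # dual:   minimize WB subject to A^T W^T >= C^T, W >= 0
        A = np.asarray(A)
        return A.T, np.asarray(C), np.asarray(B)   # (dual matrix, dual RHS, dual objective)

    # Example 1 above: maximize 4x1 + 7x2 s.t. x1 + 5x2 <= 8, 2x1 + 3x2 <= 9
    A_d, rhs_d, obj_d = dual_data([[1, 5], [2, 3]], [8, 9], [4, 7])
    # A_d = [[1, 2], [5, 3]], rhs_d = [4, 7], obj_d = [8, 9],
    # i.e. minimize 8w1 + 9w2 subject to w1 + 2w2 >= 4, 5w1 + 3w2 >= 7.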
Example 2 Find the dual of the maximum program
    maximize z = -10x1 - 12x2 + 8x3
    subject to: -4x1 - 8x2 + 4x3 ≤ 16
                 4x1 - 2x2 + 2x3 ≤ 20
                 x1, x2, x3 ≥ 0.
Solution: Since there are 2 constraints, we introduce 2 non-negative variables w1, w2. We form the dual program as follows. The new objective is formed from the constraint constants 16 and 20, so U = 16w1 + 20w2. Here

    A = [ -4  -8   4 ]
        [  4  -2   2 ]

The objective coefficients of z become the constraint constants of the dual, so the constraints A^T W^T ≥ C^T read

    [ -4   4 ] [w1]      [ -10 ]
    [ -8  -2 ] [w2]  ≥   [ -12 ]
    [  4   2 ]           [   8 ]

The complete dual program is therefore:
    minimize U = 16w1 + 20w2
    subject to: -4w1 + 4w2 ≥ -10
                -8w1 - 2w2 ≥ -12
                 4w1 + 2w2 ≥ 8
                 w1, w2 ≥ 0.

4.3.1 Solving Dual Programs:


We now describe how the simplex method for maximum programs is used to find the optimal solutions of both the primal and the dual programs. In fact, the solution of the dual program is already available in the final simplex table of the maximum program: it is located in the objective function row, in the slack-variable columns, and the optimal value appears in the b column of that same row.

[Schematic: final simplex table, with the objective-function row entries under the slack-variable columns giving the solution of the dual program and the last entry giving the optimal value.]
Example 3 For the maximum program in Example 2, use the simplex method to find the optimal solution, and then verify that the values in the objective function row under the slack-variable columns of the final table give the optimal solution of the dual program.
Solution: We adjoin slack variables s1 ≥ 0, s2 ≥ 0 to get
    z + 10x1 + 12x2 - 8x3 = 0
    -4x1 - 8x2 + 4x3 + s1 = 16
     4x1 - 2x2 + 2x3 + s2 = 20

          x1   x2   x3   s1   s2 |  b
    T0:   -4   -8    4    1    0 | 16
           4   -2    2    0    1 | 20
          10   12   -8    0    0 |  0   (objective function row)
The only negative entry in the objective function row is -8 (the x3 column); the ratios are 16/4 = 4 and 20/2 = 10, so the pivot is the 4 in the first row. Elimination gives T1.

          x1    x2   x3    s1    s2 |  b
    T1:   -1    -2    1   1/4     0 |  4
           6     2    0  -1/2     1 | 12
           2    -4    0     2     0 | 32   (objective function row)

Now -4 in the x2 column is the only negative entry; the pivot is the 2 in the second row (ratio 12/2 = 6). Elimination gives T2.

          x1    x2   x3    s1    s2  |  b
    T2:    5     0    1  -1/4     1  | 16
           3     1    0  -1/4    1/2 |  6
          14     0    0     1     2  | 56   (objective function row)
From this optimal table, the maximum of z = 56 occurs at (x1, x2, x3) = (0, 6, 16).
We look in the slack-variable columns and take the numbers in the objective function row, setting (w1, w2) = (1, 2).
By direct substitution, we can verify that this is a feasible solution of the dual programme:
    -4(1) + 4(2) =   4 ≥ -10
    -8(1) - 2(2) = -12 ≥ -12
     4(1) + 2(2) =   8 ≥  8
Thus (1, 2) is feasible. Furthermore, the dual objective function U = 16w1 + 20w2 has the value U = 16(1) + 20(2) = 56 at (1, 2). Since this is the same value as was found for the primal objective function z, the solution must be optimal for the dual.
Steps for Solving Dual Programs
Step 1: Adjoin slack variables to the maximum program and use the simplex method to solve it.
Step 2: Obtain the optimal solution to the dual program from the objective function row of the slack
variables columns of the maximum program’s final table.
Example 4 Find the optimal solution of the linear program
    maximize z = 6x1 - 10x2
    subject to: 4x1 - 6x2 ≤ 10
                2x1 - 4x2 ≤ 2
               -4x1 + 2x2 ≤ 2
                x1, x2 ≥ 0
Then construct the dual of the program and find its optimal solution. Verify that the solution is feasible and optimal.

Solution: We adjoin the slack variables s1, s2, s3 and get the initial table T0.

          x1   x2   s1   s2   s3 |  b
    T0:    4   -6    1    0    0 | 10
           2   -4    0    1    0 |  2
          -4    2    0    0    1 |  2
          -6   10    0    0    0 |  0   (objective function row)
The pivot for T1 is the 2 in the x1 column, second row (ratios 10/4 and 2/2); the pivot for T2 is then the 2 in the x2 column, first row.

          x1   x2   s1    s2   s3 |  b
    T1:    0    2    1    -2    0 |  6
           1   -2    0    1/2   0 |  1
           0   -6    0     2    1 |  6
           0   -2    0     3    0 |  6   (objective function row)

          x1   x2   s1    s2   s3 |  b
    T2:    0    1   1/2   -1    0 |  3
           1    0    1   -3/2   0 |  7
           0    0    3    -4    1 | 24
           0    0    1     1    0 | 12   (objective function row)
Thus the optimal value of the primal program is z = 12, and it occurs at (x1, x2) = (7, 3).

The dual program is
    minimize U = 10w1 + 2w2 + 2w3
    subject to: 4w1 + 2w2 - 4w3 ≥ 6
               -6w1 - 4w2 + 2w3 ≥ -10
                w1, w2, w3 ≥ 0
We then take (w1, w2, w3) = (1, 1, 0) from the objective function row under the slack-variable columns.
We check that it is feasible for the dual constraints as follows:
    4(1) + 2(1) - 4(0) = 6 ≥ 6, and -6(1) - 4(1) + 2(0) = -10 ≥ -10.
Finally, we calculate the dual objective function value U = 10(1) + 2(1) + 2(0) = 12.
This coincides with z = 12, and thus the solution is optimal.
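As a hedged numerical check (Python, assuming SciPy is available, and using the data as reconstructed above), both the primal and the dual can be solved directly and their optimal values compared; strong duality says they must agree.

    from scipy.optimize import linprog

    # primal: maximize 6x1 - 10x2  ->  minimize -(6x1 - 10x2)
    primal = linprog([-6, 10],
                     A_ub=[[4, -6], [2, -4], [-4, 2]], b_ub=[10, 2, 2],
                     bounds=[(0, None)] * 2)

    # dual: minimize 10w1 + 2w2 + 2w3 with ">=" rows rewritten as "<=" by negation
    dual = linprog([10, 2, 2],
                   A_ub=[[-4, -2, 4], [6, 4, -2]], b_ub=[-6, 10],
                   bounds=[(0, None)] * 3)

    print(-primal.fun, primal.x)   # expect 12 at roughly (7, 3)
    print(dual.fun, dual.x)        # expect 12 at roughly (1, 1, 0)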

4.4 Solving minimum programmes:


If we are given a minimum program with ≥ constraints, then we construct a maximum program with ≤ constraints such that the primal-dual connection holds.
So we can solve a standard minimum program by
1. Constructing a maximum program
2. Solving the maximum program by simplex method
3. Using final table to get the solution of the minimum program.
Example 5 Solve the linear program
    minimize U = 8w1 + 12w2
    subject to: -6w1 + 6w2 ≥ -2
                 w1 + 3w2 ≥ 12
                2w1 - 6w2 ≥ -12
                 w1, w2 ≥ 0.
Solution: We introduce variables x1, x2, x3 and construct the dual (maximum) program:
    maximize z = -2x1 + 12x2 - 12x3
    subject to: -6x1 + x2 + 2x3 ≤ 8
                 6x1 + 3x2 - 6x3 ≤ 12
                 x1, x2, x3 ≥ 0.
With slack variables s1, s2 we apply the simplex method, starting from T0.
          x1   x2   x3   s1   s2 |  b
    T0:   -6    1    2    1    0 |  8
           6    3   -6    0    1 | 12
           2  -12   12    0    0 |  0   (objective function row)

The pivot is the 3 in the x2 column, second row (ratios 8/1 = 8 and 12/3 = 4); elimination gives T1.

          x1   x2   x3   s1    s2  |  b
    T1:   -8    0    4    1   -1/3 |  4
           2    1   -2    0    1/3 |  4
          26    0  -12    0     4  | 48   (objective function row)
The pivot is now the 4 in the x3 column, first row (ratio 4/4 = 1); elimination gives the final table T2.

          x1   x2   x3   s1     s2  |  b
    T2:   -2    0    1   1/4  -1/12 |  1
          -2    1    0   1/2    1/6 |  6
           2    0    0    3      3  | 60   (objective function row)

Thus, reading the objective function row under the slack-variable columns, we obtain w1 = 3, w2 = 3 as the optimal solution of the given minimum program, and the minimum value is U = 60.
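A minimum program can also be fed to SciPy directly, since linprog minimises by default; the ≥ rows are simply flipped to ≤. A hedged sketch (Python, using the data of Example 5 as reconstructed above):

    from scipy.optimize import linprog

    res = linprog([8, 12],
                  A_ub=[[6, -6], [-1, -3], [-2, 6]],   # negated ">=" constraints
                  b_ub=[2, -12, 12],
                  bounds=[(0, None)] * 2)
    print(res.fun, res.x)   # expect 60 at roughly (3, 3)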
Exercise Find the maximum program for which the given minimum program is the dual. Solve the maximum program and obtain the optimal solution for the given program from the final simplex table.
1) minimize U = 3w1 + 6w2 + 21w3
   subject to: 6w1 + 3w2 + 7w3 ≥ 8
               3w1 + 6w2 + 8w3 ≥ 1
              -3w1 + 3w2 + 6w3 ≥ 9
               w1, w2, w3 ≥ 0.
2) minimize U = 3w1 + 15w2 + 3w3
   subject to: 9w1 + 16w2 + 6w3 ≥ 12
               3w1 + 9w2 + 3w3 ≥ 12
               6w1 + 10w2 + 3w3 ≥ 2
               w1, w2, w3 ≥ 0.
3) minimize U = 10w1 + 2w2 + 6w3
   subject to: 4w1 + 10w2 + 4w3 ≥ 6
               3w1 + 6w2 + 2w3 ≥ 4
               w1, w2, w3 ≥ 0.
Theme 5: Optimization- Lagrange method.

We begin with what looks like a detour (a roundabout way). To avoid having to make separate statements for the two- and three-variable cases, we will use vector notation. Throughout the discussion, f will be a function of two or three variables, continuously differentiable on some open set U.
We take C: r(t), t in I, to be a curve which lies entirely in U and has at each point a non-zero tangent vector r'(t). The basic result is this:

    If x0 maximizes (or minimizes) f(x) on C, then ∇f(x0) is perpendicular to C at x0.

Proof: Choose t0 so that r(t0) = x0. The composition f(r(t)) has a maximum (or minimum) at t0. Consequently its derivative
    d/dt [ f(r(t)) ] = ∇f(r(t)) · r'(t)
must be zero at t0:
    0 = ∇f(r(t0)) · r'(t0) = ∇f(x0) · r'(t0).
This shows that ∇f(x0) is perpendicular to r'(t0). Since r'(t0) is tangent to C at x0, ∇f(x0) is perpendicular to C at x0.

Suppose now that g is a continuously differentiable function of two or three variables defined on a subset of the domain of f, and suppose further that the gradient ∇g is never zero. Lagrange made the following observation:

    If x0 maximizes (or minimizes) f(x) subject to the side condition g(x) = 0, then ∇f(x0) and ∇g(x0) are collinear. Consequently, there exists a scalar λ such that ∇f(x0) = λ∇g(x0).

Such a λ is now called a Lagrange multiplier.

Example 1 Maximize and minimize f(x, y) = xy on the unit circle x² + y² = 1.

Solution: Set g(x, y) = x² + y² - 1.
We want to maximize and minimize f(x, y) = xy subject to the condition g(x, y) = 0.
First, we look for those points (x, y) which satisfy the Lagrange condition ∇f(x, y) = λ∇g(x, y).
The gradients are ∇f(x, y) = y i + x j and ∇g(x, y) = 2x i + 2y j.
Setting ∇f(x, y) = λ∇g(x, y), we obtain y = 2λx, x = 2λy.
Multiplying the first equation by y and the second equation by x, we find that y² = 2λxy and x² = 2λxy, and thus x² = y².
The side condition x² + y² = 1 now implies that 2x² = 1 and therefore x = ±(1/2)√2.
The points under consideration are
    ((1/2)√2, (1/2)√2),  ((1/2)√2, -(1/2)√2),  (-(1/2)√2, (1/2)√2)  and  (-(1/2)√2, -(1/2)√2).
At the first and fourth points f takes on the value 1/2; at the second and third points f takes on the value -1/2. So the maximum value is 1/2 and the minimum value is -1/2.
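A hedged symbolic check of Example 1 (Python, assuming SymPy is installed): solve the Lagrange system ∇f = λ∇g together with the side condition g = 0.

    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)
    f = x * y
    g = x**2 + y**2 - 1
    eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
           sp.diff(f, y) - lam * sp.diff(g, y),
           g]
    sols = sp.solve(eqs, [x, y, lam], dict=True)
    print([(s[x], s[y], f.subs(s)) for s in sols])
    # the four points (+-1/sqrt(2), +-1/sqrt(2)), with f = 1/2 or -1/2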

Example 2 Find the minimum value taken by the function f(x, y) = x² + (y - 2)² on the hyperbola x² - y² = 1.
Solution: We set g(x, y) = x² - y² - 1 = 0. The points of interest must satisfy the Lagrange condition ∇f(x, y) = λ∇g(x, y) together with g(x, y) = 0.
Here ∇f(x, y) = 2x i + 2(y - 2) j and ∇g(x, y) = 2x i - 2y j.
The condition ∇f(x, y) = λ∇g(x, y) gives 2x = 2λx, 2(y - 2) = -2λy,
which we can simplify to x = λx, y - 2 = -λy.
The points of interest must therefore satisfy the following three equations:
    x = λx,   y - 2 = -λy,   x² - y² - 1 = 0.
The last equation shows that x cannot be zero, so dividing x = λx by x gives λ = 1. This means that y - 2 = -y and therefore y = 1. With y = 1, the last equation gives x = ±√2. The points to be checked are therefore (√2, 1) and (-√2, 1). At each of these points f takes on the value 3. This is the desired minimum.
Example 3 Maximize f(x, y, z) = xyz
subject to x³ + y³ + z³ = 1, for x, y, z ≥ 0.
Solution: We set g(x, y, z) = x³ + y³ + z³ - 1 = 0.
We seek (x, y, z) that satisfy the Lagrange condition and the side condition simultaneously,
i.e. ∇f(x, y, z) = λ∇g(x, y, z) and g(x, y, z) = 0.
We get ∇f(x, y, z) = yz i + xz j + xy k and ∇g(x, y, z) = 3x² i + 3y² j + 3z² k.
Setting ∇f = λ∇g we get yz = 3λx², xz = 3λy², xy = 3λz².
Multiplying the first equation by x, the second by y and the third by z,
we get xyz = 3λx³, xyz = 3λy³, xyz = 3λz³, and consequently λx³ = λy³ = λz³.
We can exclude λ = 0, because if λ = 0 then at least one of the variables would have to be zero. That would force xyz to be zero, which is obviously not a maximum.
Having excluded λ = 0, we can divide by λ to get x³ = y³ = z³ and thus x = y = z.
The side condition x³ + y³ + z³ = 1 now gives x = y = z = (1/3)^(1/3).
The desired maximum is xyz = 1/3.
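A hedged numerical check of Example 3 (Python, assuming SciPy is available): maximise xyz on x³ + y³ + z³ = 1 with an equality-constrained solver. SLSQP minimises, so we negate xyz; the starting point is an arbitrary illustrative choice.

    from scipy.optimize import minimize

    res = minimize(lambda v: -v[0] * v[1] * v[2],
                   x0=[0.5, 0.5, 0.5],
                   method='SLSQP',
                   bounds=[(0, None)] * 3,
                   constraints=[{'type': 'eq',
                                 'fun': lambda v: v[0]**3 + v[1]**3 + v[2]**3 - 1}])
    print(res.x, -res.fun)   # expect roughly (1/3)**(1/3) in each coordinate, and 1/3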
Exercises
(1) Minimize x² + y² on the hyperbola xy = 1.
(2) Maximize xy on the ellipse b²x² + a²y² = a²b².
(3) Maximize xy² on the unit circle x² + y² = 1.
(4) Maximize xyz on the unit sphere x² + y² + z² = 1.
(5) Minimize x + 2y + 4z on the sphere x² + y² + z² = 7.
(6) Maximize 2x + 3y + 5z on the sphere x² + y² + z² = 19.
(7) Minimize x⁴ + y⁴ + z⁴ on the plane x + y + z = 1.

Theme 6: Calculus of finite differences.


The calculus of finite differences deals with the changes in the dependent variable caused by changes in the independent variable.
Let y = f(x) be a continuous function. Then
    y + ∆y = f(x + ∆x),
so
    ∆y_x0 = f(x0 + ∆x) - f(x0),
which can further be written as
    ∆y_x0 = y_(x0+∆x) - y_x0.
If we take intervals of length ∆x = 1 on the x-axis with initial (starting) point x0 = 0, and write yn for the value of y at xn, then
    ∆y0 = y1 - y0
    ∆y1 = y2 - y1
    ∆y2 = y3 - y2, etc.
    ∆yn = y_(n+1) - yn    (first forward difference).
Second forward differences are written as
    ∆²y0 = ∆(∆y0) = ∆(y1 - y0) = ∆y1 - ∆y0 = (y2 - y1) - (y1 - y0) = y2 - 2y1 + y0.

∆³y0 = ∆²y1 - ∆²y0
     = (∆y2 - ∆y1) - (∆y1 - ∆y0)
     = (y3 - y2) - (y2 - y1) - [(y2 - y1) - (y1 - y0)]
     = (y3 - 2y2 + y1) - (y2 - 2y1 + y0)
     = y3 - 3y2 + 3y1 - y0.
Forward difference table

    x            y     ∆y     ∆²y     ∆³y     ∆⁴y
    x0           y0    ∆y0    ∆²y0    ∆³y0    ∆⁴y0
    x1 = x0+h    y1    ∆y1    ∆²y1    ∆³y1
    x2 = x0+2h   y2    ∆y2    ∆²y2
    x3 = x0+3h   y3    ∆y3
    x4 = x0+4h   y4

In general, the rth forward difference is
    ∆^r yn = ∆^(r-1) y_(n+1) - ∆^(r-1) yn.
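The table above is easy to build by repeated differencing. A minimal sketch in Python (assuming NumPy; the function name forward_difference_table is an illustrative choice):

    import numpy as np

    def forward_difference_table(y):
        # column k holds the k-th forward differences of the tabulated values
        cols = [np.asarray(y, dtype=float)]
        while len(cols[-1]) > 1:
            cols.append(np.diff(cols[-1]))
        return cols

    y = [0, 1, 4, 9, 16]            # y = x^2 at x = 0, 1, 2, 3, 4 (h = 1)
    for col in forward_difference_table(y):
        print(col)
    # first differences 1, 3, 5, 7; second differences 2, 2, 2 (constant); then 0, 0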
Backward differences
    ∇y1 = y1 - y0
    ∇y2 = y2 - y1
    ∇yn = yn - y_(n-1)                          (first backward difference)
    ∇²yn = ∇yn - ∇y_(n-1)
    ∇^r yn = ∇^(r-1) yn - ∇^(r-1) y_(n-1)       (rth backward difference)

Backward difference table

    x     y     ∇y              ∇²y     ∇³y     ∇⁴y
    x0    y0
    x1    y1    ∇y1 = y1 - y0
    x2    y2    ∇y2 = y2 - y1   ∇²y2
    x3    y3    ∇y3 = y3 - y2   ∇²y3    ∇³y3
    x4    y4    ∇y4 = y4 - y3   ∇²y4    ∇³y4    ∇⁴y4

Central differences
    δy_(1/2) = y1 - y0
    δy_(3/2) = y2 - y1
    δy_(n+1/2) = y_(n+1) - yn
    δ²y1 = δy_(3/2) - δy_(1/2)

Central difference table

    x     y      δy           δ²y      δ³y           δ⁴y
    x0    y0
                 δy_(1/2)
    x1    y1                  δ²y1
                 δy_(3/2)              δ³y_(3/2)
    x2    y2                  δ²y2                   δ⁴y2
                 δy_(5/2)              δ³y_(5/2)
    x3    y3                  δ²y3
                 δy_(7/2)
    x4    y4

Examples
1. Evaluate the first forward difference of e^x:
    ∆e^x = e^(x+h) - e^x = e^x (e^h - 1).
2. Evaluate ∆²(3e^x):
    ∆²(3e^x) = 3∆(∆e^x) = 3∆(e^x (e^h - 1)) = 3(e^h - 1) ∆e^x = 3e^x (e^h - 1)².
3. Evaluate ∆(e^(2x) log 3x).
NB (product rule): ∆(f(x)g(x)) = f(x+h) ∆g(x) + g(x) ∆f(x).
Proof:
    ∆(f(x)g(x)) = f(x+h)g(x+h) - f(x)g(x)
                = f(x+h)g(x+h) - f(x+h)g(x) + f(x+h)g(x) - f(x)g(x)
                = f(x+h)[g(x+h) - g(x)] + g(x)[f(x+h) - f(x)]
                = f(x+h) ∆g(x) + g(x) ∆f(x).
Hence
    ∆(e^(2x) log 3x) = e^(2(x+h)) [log 3(x+h) - log 3x] + log 3x [e^(2(x+h)) - e^(2x)]
                     = e^(2x) [ e^(2h) log(1 + h/x) + (e^(2h) - 1) log 3x ].
g ( x)f ( N )gn
g ( xt ) f ( xph  g (nth) fpn
(gn) 2
 f ( x)  fx f ( xth) g ( x)  fexg( xth)
   
 gey  gey g ( xth) g(x)
= f(xth) – g(xth) g(x) eg(xth(g(x)- f(x) g(xth)
= g(x)
= f(x th) g(x) + f(x) g(x th) – f(x th) g(x) –f(x th) g(x) f(xg xth)
gu g
= f(x) (g(x) – g(x th) ) + g (x) (f(xth) – f(x)
gu g

g ( x)
 - f (x th)  g (x th) f(x)
g ng
f(x) - g(x) f (x) th)
= g(x)th)
g ( xth) g (x)
The shift operator E
    E yr = y_(r+h),   E² yr = y_(r+2h),   Eⁿ yr = y_(r+nh).
Inverse shift operator:
    E⁻¹ yr = y_(r-h).
Forward difference in terms of E:
    ∆yr = y_(r+h) - yr = E yr - yr = (E - 1) yr,
so
    ∆ ≡ E - 1  ..........(1)    and hence    E = ∆ + 1  ..........(2)
Backward difference in terms of E:
    ∇yr = yr - y_(r-h) = yr - E⁻¹ yr = (1 - E⁻¹) yr,
so
    ∇ ≡ 1 - E⁻¹  ..........(3)
Relation between the operators:
    E∇ = ∆  ..........(4)
Proof: E∇yr = E(yr - y_(r-h)) = E yr - E y_(r-h) = y_(r+h) - yr = ∆yr, so E∇ = ∆.
Central difference operator:
    δ = E^(1/2) - E^(-1/2)  ..........(5)
Finally, by Taylor's theorem,
    E f(x) = f(x+h) = f(x) + h f'(x) + (h²/2!) f''(x) + ...
           = Σ (k = 0 to ∞) (h^k / k!) D^k f(x) = e^(hD) f(x),
so
    E ≡ e^(hD)  ..........(6)
Fundamental theorem of difference calculus: if f(x) is a polynomial of degree n, then the nth difference of f is constant, i.e. ∆ⁿf = constant, and ∆^(n+1) f(x) = 0.
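A hedged illustration of the theorem with h = 1 (Python, assuming NumPy): for the cubic used in the example below, the third differences are constant and the fourth vanish.

    import numpy as np

    y = np.array([x**3 + 3*x**2 + 5*x + 12 for x in range(6)], dtype=float)
    print(np.diff(y, n=3))   # [6. 6. 6.]  -- constant third difference
    print(np.diff(y, n=4))   # [0. 0.]     -- fourth difference is zero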
Factorial function
A product of the form x(x - h)(x - 2h)···(x - (n-1)h) is called a factorial function, denoted x^(n).
That is, x^(n) = x(x - h)(x - 2h)···(x - (n-1)h).
Newton's forward difference formula:
    f(a + nh) = f(a) + nC1 ∆f(a) + nC2 ∆²f(a) + ... + nCn ∆ⁿf(a).
Example
Represent the function x³ + 3x² + 5x + 12 in factorial notation (take h = 1).
Write
    x³ + 3x² + 5x + 12 = A x^(3) + B x^(2) + C x^(1) + D
                       = A x(x-1)(x-2) + B x(x-1) + C x + D.
x = 0:  12 = D,                                           so D = 12.
x = 1:  1 + 3 + 5 + 12 = 21 = C + D,                      so C = 21 - 12 = 9.
x = 2:  8 + 12 + 10 + 12 = 42 = 2B + 2C + D = 2B + 30,    so B = 6.
x = 3:  27 + 27 + 15 + 12 = 81 = 6A + 6B + 3C + D = 6A + 75,  so A = 1.
Therefore
    f(x) = x³ + 3x² + 5x + 12 = x^(3) + 6x^(2) + 9x^(1) + 12.
Using ∆x^(n) = n x^(n-1) (with h = 1),
    ∆f  = 3x^(2) + 12x^(1) + 9
    ∆²f = 6x^(1) + 12
    ∆³f = 6
    ∆⁴f = 0.
Example
Find the function whose first forward difference is 9x² + 11x + 5 (take h = 1).
First change f(x) = 9x² + 11x + 5 to factorial notation:
    9x² + 11x + 5 = A x^(2) + B x^(1) + C = A x(x-1) + B x + C,
which gives A = 9, B = 20, C = 5, so
    f(x) = 9x^(2) + 20x^(1) + 5.
"Integrating" (taking the anti-difference term by term, using ∆x^(n) = n x^(n-1)):
    F(x) = (9/3) x^(3) + (20/2) x^(2) + 5 x^(1) + c
         = 3x^(3) + 10x^(2) + 5x + c
         = 3x(x-1)(x-2) + 10x(x-1) + 5x + c
         = 3x(x² - 3x + 2) + 10x² - 10x + 5x + c
         = 3x³ + x² + x + c.
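A hedged symbolic check of the last example (Python, assuming SymPy is installed): the first forward difference of F(x) = 3x³ + x² + x with h = 1 should reproduce 9x² + 11x + 5.

    import sympy as sp

    x = sp.symbols('x')
    F = 3*x**3 + x**2 + x
    print(sp.expand(F.subs(x, x + 1) - F))   # 9*x**2 + 11*x + 5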
Miscellaneous Exercises
1. (a) Find the maximum program for which the given minimum program is the dual.
   (b) Solve the maximum program and obtain the optimal solution for the (i) primal program and (ii) dual program from the final simplex table.
       minimize U = 3w1 + 15w2 + 3w3
       subject to: 9w1 + 16w2 + 6w3 ≥ 12
                   3w1 + 9w2 + 3w3 ≥ 12
                   6w1 + 10w2 + 3w3 ≥ 2
                   w1, w2, w3 ≥ 0.
2. Transform the augmented matrix of the following linear system to echelon form. By back substitution, find the solution of the system:
       x + 2y + 3z + w = 2
       3x + 8y + 4z + 4w = 5
       2x + 4y + 5z + 5w = 7
       2x + 4y + 4z + 2w = 10
