CCS357 Optimization Techniques
COURSE OBJECTIVES:
The objective of this course is to enable the student to:
• Formulate and solve linear programming problems (LPP).
• Evaluate integer programming problems, transportation and assignment problems.
• Obtain a solution to network problems using CPM and PERT techniques.
• Optimize a function subject to constraints.
• Identify and solve problems under Markovian queuing models.
Characteristics:
The objective function is of maximization type.
All constraints are of ≤ type.
All variables xi are non-negative.
What is sensitivity analysis?
Sensitivity analysis investigates the changes in the optimal solution
resulting from making changes in parameters of the LP model.
Sensitivity analysis is also called post optimality analysis.
What are the characteristics of LPP in standard form?
The general linear programming problem in the form
Maximize or Minimize Z = c1x1 + c2x2 + … + cnxn
Subject to the constraints
a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
…
am1x1 + am2x2 + … + amnxn = bm
and the non-negativity restrictions x1, x2, …, xn ≥ 0 is known as the
standard form.
In matrix notation the standard form of LPP can be expressed as:
Maximize Z = CX (objective function)
Subject to constraints AX = b and X ≥ 0
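As a concrete illustration of moving to standard form, the sketch below appends one slack variable per ≤ constraint to turn each inequality into an equation (the function name and example coefficients are made up for illustration):

```python
# Sketch: convert "Ax <= b" rows to standard form "Ax + s = b" by
# appending one slack variable per constraint (an identity column each).
def to_standard_form(A, b):
    m = len(A)  # number of constraints = number of slack variables
    rows = []
    for i, row in enumerate(A):
        slack = [1 if j == i else 0 for j in range(m)]
        rows.append(list(row) + slack)
    return rows, list(b)

# Example: x1 + 2*x2 <= 4 and 3*x1 + x2 <= 6
A_std, b_std = to_standard_form([[1, 2], [3, 1]], [4, 6])
# A_std == [[1, 2, 1, 0], [3, 1, 0, 1]]
```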
Characteristics:
The objective function is of maximization type.
All constraints must be converted to equations by adding slack or
surplus variables.
The RHS of each constraint must be non-negative.
All variables are non-negative.
List all possible cases that can arise in sensitivity analysis with their
actions to obtain new solution.
Condition resulting from the changes → Recommended action
Current solution remains optimal and feasible → No further action is necessary
Current solution becomes infeasible → Use the dual simplex method to recover feasibility
Current solution becomes non-optimal → Use the primal simplex method to recover optimality
Current solution becomes both non-optimal and infeasible → Use the generalized simplex method to obtain a new solution
What are the changes that affect feasibility?
There are two types of changes that could affect feasibility of the current
solution:
Changes in resource availability (the right-hand side of the
constraints, i.e., the requirement vector b), and
Addition of new constraints.
What are the changes that affect optimality?
There are two particular situations that could affect optimality of the
current solution:
Changes in the original objective coefficients.
Addition of new economic activity (variable) to the model.
UNIT IV CLASSICAL OPTIMIZATION THEORY
UNCONSTRAINED PROBLEMS
Unconstrained optimization problems are a class of optimization problems
where the goal is to find the maximum or minimum value of an objective
function without any restrictions or constraints on the decision variables. These
problems arise in various fields such as economics, engineering, and machine
learning, where a system or process needs to be optimized without predefined
boundaries or limitations.
Key Concepts
1. Objective Function:
o The function f(x) is typically continuous and differentiable to
enable the use of calculus-based methods.
o It may be convex for minimization or concave for maximization to
guarantee a unique global solution.
2. Gradient (∇f(x)):
o The vector of first partial derivatives of f; a necessary
condition for an interior optimum is ∇f(x) = 0.
3. Hessian (H(x)):
o The matrix of second partial derivatives of f, used to classify
critical points and in Newton-type methods.
4. Critical Points:
o A point x is called a critical point if ∇f(x)=0.
o Critical points can be classified into:
Local Minima: f(x) has a smaller value compared to nearby
points.
Local Maxima: f(x) has a larger value compared to nearby
points.
Saddle Points: f(x) changes behavior in different directions
(neither a maximum nor a minimum).
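A rough numerical way to carry out this classification in one dimension, assuming an arbitrary step size and tolerance (both illustrative choices, not part of the notes):

```python
# Illustrative check: classify a critical point of a 1-D function with
# central finite differences for f' and f''; h and tol are arbitrary.
def classify_critical_point(f, x, h=1e-5, tol=1e-6):
    d1 = (f(x + h) - f(x - h)) / (2 * h)          # approximate f'(x)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # approximate f''(x)
    if abs(d1) > tol:
        return "not a critical point"
    if d2 > tol:
        return "local minimum"
    if d2 < -tol:
        return "local maximum"
    return "inconclusive (possible saddle or inflection)"

kind_min = classify_critical_point(lambda x: x ** 2, 0.0)   # local minimum
kind_max = classify_critical_point(lambda x: -x ** 2, 0.0)  # local maximum
```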
2. Newton-Raphson Method
Utilizes both gradient and Hessian for faster convergence.
Update rule: xₖ₊₁ = xₖ − H(xₖ)⁻¹ ∇f(xₖ)
Converges quadratically near the optimal solution.
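In one dimension the Hessian reduces to f″(x), so the update becomes xₖ₊₁ = xₖ − f′(xₖ)/f″(xₖ); a minimal sketch (the quadratic test function and iteration count are made-up illustrations):

```python
# Sketch of Newton's method for 1-D minimization: the Hessian reduces
# to f''(x), giving the update x_{k+1} = x_k - f'(x_k)/f''(x_k).
def newton_minimize(df, d2f, x0, iters=20):
    x = x0
    for _ in range(iters):
        x = x - df(x) / d2f(x)
    return x

# Minimize f(x) = (x - 3)**2 + 1, so f'(x) = 2(x - 3) and f''(x) = 2.
x_star = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0)
# For a quadratic, the method lands on the minimizer x = 3 in one step.
```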
3. Conjugate Gradient Method
Efficient for large-scale problems.
Minimizes f(x) along a sequence of conjugate directions.
4. Quasi-Newton Methods
Approximate the Hessian matrix instead of computing it directly.
Example: BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm.
5. Line Search Methods
Iteratively search along a direction to find the step size that minimizes
f(x).
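One common variant is backtracking (Armijo) line search: start with a full step and shrink it until a sufficient-decrease condition holds. A sketch with the conventional constants rho = 0.5 and c = 1e-4 (all names and the test function are illustrative):

```python
# Backtracking (Armijo) line search sketch: shrink the step alpha until
# f(x + alpha*d) <= f(x) + c*alpha*(grad . d), the sufficient decrease.
def backtracking(f, grad, x, direction, alpha=1.0, rho=0.5, c=1e-4):
    g = grad(x)
    slope = sum(gi * di for gi, di in zip(g, direction))  # directional derivative
    while f([xi + alpha * di for xi, di in zip(x, direction)]) > f(x) + c * alpha * slope:
        alpha *= rho  # halve the step and try again
    return alpha

# f(x, y) = x^2 + y^2, starting at (2, 2), searching along -gradient.
f = lambda v: v[0] ** 2 + v[1] ** 2
grad = lambda v: [2 * v[0], 2 * v[1]]
x = [2.0, 2.0]
step = backtracking(f, grad, x, [-g for g in grad(x)])
# The full step overshoots, so one halving gives step = 0.5.
```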
2. Unconstrained Optimization
3. Constrained Optimization
For optimization problems subject to equality or inequality constraints, the
conditions are extended using Lagrange multipliers and the Kuhn-Tucker
conditions.
A. Equality Constraints
B. Inequality Constraints (Kuhn-Tucker Conditions)
4. Examples
5. Applications
Engineering: Designing systems for minimum cost or maximum
efficiency.
Economics: Profit maximization or cost minimization under resource
constraints.
Machine Learning: Optimization algorithms like gradient descent rely
on these conditions.
Operations Research: Optimizing supply chains and resource allocation.
6. Conclusion
Necessary and sufficient conditions form the foundation of optimization theory.
They provide systematic methods to analyze and confirm whether a solution is
optimal, making them essential tools in a wide range of fields, from engineering
to economics and beyond. By understanding and applying these principles, one
can solve complex optimization problems with confidence.
NEWTON-RAPHSON METHOD
The Newton-Raphson method is a powerful and widely used iterative
technique for finding roots of equations or solving optimization problems. It is
particularly effective when seeking solutions to nonlinear equations, minimizing
or maximizing a function, or solving systems of equations.
1. Introduction
The Newton-Raphson method relies on using the tangent line at an initial guess
to approximate the root of a function. The method converges to the root by
iteratively refining the guess based on the slope of the tangent at the current
point.
It is named after Sir Isaac Newton and Joseph Raphson, who
independently developed this method.
It assumes the function is differentiable and uses both the function f(x)
and its derivative f′(x).
2. The Algorithm
For a given nonlinear equation f(x) = 0, the Newton-Raphson method
approximates the root using the iterative formula
xₖ₊₁ = xₖ − f(xₖ) / f′(xₖ)
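A minimal Python sketch of this iteration, with an illustrative tolerance, iteration cap, and starting guess:

```python
# Newton-Raphson iteration for f(x) = 0:
# x_{k+1} = x_k - f(x_k)/f'(x_k), stopping when |f(x)| is small.
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Square root of 2 as the positive root of f(x) = x^2 - 2.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

With f(x) = x² − 2 this recovers √2, the square-root example mentioned under applications.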
5. Example
6. Advantages
1. Fast Convergence:
o The Newton-Raphson method typically converges quadratically
near the root, making it faster than many other methods.
2. Simple Implementation:
o The iterative formula is straightforward to program and execute.
3. Widely Applicable:
o Works for solving a variety of equations and optimization
problems.
7. Disadvantages
1. Dependency on Initial Guess:
o Poor initial guesses can lead to slow convergence or divergence.
2. Derivative Requirement:
o The method requires the derivative f′(x), which may be difficult or
computationally expensive to calculate.
3. Not Suitable for Non-Differentiable Functions:
o The method assumes that the function is differentiable.
4. Convergence Issues:
o The method might not converge if the initial guess is far from the
root or if f′(x)=0 at some iteration.
8. Limitations
1. Multiple Roots:
o If a function has multiple roots, the method converges to the root
closest to the initial guess, which may not be the desired one.
2. Saddle Points:
o The method may fail at saddle points, where f′(x) = 0 but f(x) ≠ 0.
3. Complex Roots:
o The method is not inherently designed to handle complex roots.
4. Oscillation:
o If the derivative changes rapidly, the method may oscillate without
converging.
9. Applications
1. Root Finding:
o Solving equations in science and engineering.
o Example: Finding the square root of a number.
2. Optimization:
o Finding maxima or minima of functions in economics, machine
learning, and operations research.
3. Electrical Engineering:
o Solving circuit equations.
4. Physics:
o Solving nonlinear differential equations.
10. Conclusion
The Newton-Raphson method is a highly efficient and versatile technique for
solving equations and optimization problems. Despite its limitations, its speed
and simplicity make it a cornerstone of numerical analysis. With careful
selection of the initial guess and awareness of potential pitfalls, it can yield
accurate results for a wide range of problems.
CONSTRAINED PROBLEMS
EQUALITY CONSTRAINTS
INEQUALITY CONSTRAINTS
KUHN-TUCKER CONDITIONS
UNIT V QUEUING MODELS
Introduction, Queuing Theory, Operating characteristics of a Queuing system,
Constituents of a Queuing system, Service facility, Queue discipline, Single
channel models, multiple service channels.
Historical Background
Queuing theory was developed in the early 20th century, primarily to address
practical problems in telecommunication systems.
Agner Krarup Erlang, a Danish mathematician, is considered the
founder of queuing theory. He introduced the first mathematical model of
queues in 1909 while working for the Copenhagen Telephone Company.
Erlang’s work focused on optimizing the number of telephone lines to
handle fluctuating call volumes.
Since then, queuing theory has been extended to applications in various
domains, including manufacturing, transportation, healthcare, and IT systems.
4. Service Mechanism
The service mechanism describes how customers are served once they reach the
front of the queue.
Key Features of the Service Mechanism:
Service Facility:
o Refers to the resources or infrastructure that provide service, such
as bank tellers, servers in a data center, or checkout counters.
Service Channels:
o A service channel is a pathway through which a single customer is
served. Queuing systems may have:
Single Channel: Only one service channel is available.
Example: A small clinic with one doctor.
Multiple Channels: Several service channels operate
simultaneously. Example: A bank with multiple teller
counters.
Service Rate (μ):
o The average number of customers that can be served per unit of
time.
Service Time Distribution:
o Describes the variability in service times:
Constant Service Time: Service time is fixed and
predictable.
Exponential Service Time: Service time is random, often
modeled with an exponential distribution.
General Service Time: Service time follows a custom
distribution.
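The exponential case can be sampled directly with the standard library; the service rate μ = 2 used below is an assumed example value:

```python
import random

# Sample exponential service times with rate mu (mean 1/mu) using the
# standard library; mu = 2 customers per unit time is an assumption.
random.seed(42)  # fixed seed so the run is reproducible
mu = 2.0
service_times = [random.expovariate(mu) for _ in range(10000)]
mean_service = sum(service_times) / len(service_times)
# mean_service should be close to the theoretical mean 1/mu = 0.5
```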
5. Departure Process
The departure process describes what happens after customers are served and
leave the system.
Key Aspects of Departure:
Feedback Loop:
o In some systems, customers may re-enter the queue after being
served. For example, a manufacturing system where a part needs
additional processing.
System Utilization:
o The fraction of time the service facility is actively serving
customers versus being idle. High utilization rates can cause longer
queues and waiting times.
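For a single-channel (M/M/1) system, utilization ties directly to queue length and waiting time through the classical formulas below; the arrival and service rates used are made-up illustrative numbers:

```python
# Classical M/M/1 steady-state formulas (require rho = lam/mu < 1):
# L  = rho/(1-rho)      mean number in system
# Lq = rho^2/(1-rho)    mean number waiting in queue
# W  = 1/(mu-lam)       mean time in system
# Wq = rho/(mu-lam)     mean waiting time in queue
def mm1_metrics(lam, mu):
    rho = lam / mu
    return {
        "rho": rho,
        "L": rho / (1 - rho),
        "Lq": rho ** 2 / (1 - rho),
        "W": 1 / (mu - lam),
        "Wq": rho / (mu - lam),
    }

m = mm1_metrics(lam=4.0, mu=5.0)  # rho = 0.8 gives L = 4 customers
```

Note how congestion blows up as rho approaches 1, matching the point above about high utilization causing longer queues.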
6. Service Discipline
Service discipline determines the order in which customers are selected for
service from the queue.
Common Service Disciplines:
1. First-Come, First-Served (FCFS):
o Customers are served in the order they arrive.
o Example: A line at a fast-food counter.
2. Last-Come, First-Served (LCFS):
o The most recent arrival is served first.
o Example: A stack-based system where the newest task is handled
first.
3. Priority-Based:
o Customers are served based on priority levels.
o Example: Emergency cases in a hospital.
4. Shortest Processing Time (SPT):
o Customers requiring the least service time are served first.
o Example: A CPU scheduler handling small processes first.
5. Round Robin (RR):
o Each customer is served for a fixed time slice in a cyclic manner.
o Example: Time-sharing in computer operating systems.
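A toy sketch contrasting FCFS and LCFS on the same arrival sequence (the customer names are illustrative):

```python
from collections import deque

# Toy illustration of queue disciplines: FCFS serves from the front of
# the line, LCFS serves the most recent arrival first.
arrivals = ["A", "B", "C", "D"]

fcfs = deque(arrivals)
fcfs_order = [fcfs.popleft() for _ in range(len(arrivals))]  # A, B, C, D

lcfs = list(arrivals)
lcfs_order = [lcfs.pop() for _ in range(len(arrivals))]      # D, C, B, A
```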
Summary of Constituents
Population: The source of customers, finite or infinite, homogeneous or heterogeneous.
Arrival Process: Describes how customers arrive, including arrival rate and behavior.
Queue: The waiting line where customers wait for service.
Service Mechanism: Describes how customers are served, including service rate and service channels.
Departure Process: Represents what happens after customers are served.
Service Discipline: The rules for selecting customers from the queue for service.
Conclusion
The service facility is the backbone of any queuing system, dictating how
efficiently and effectively customers are served. By understanding the
components, performance metrics, and challenges associated with service
facilities, organizations can design systems that balance customer satisfaction,
cost-effectiveness, and operational efficiency.
QUEUE DISCIPLINE
Queue discipline refers to the set of rules or policies that determine the order in
which customers or entities are served in a queuing system. It plays a crucial
role in ensuring fairness, efficiency, and optimal resource utilization in service
systems. The choice of queue discipline directly impacts waiting times, system
performance, and customer satisfaction.