
Scheduling and Sequencing in Operations Research

The document discusses scheduling and sequencing in operations research, covering single-server and multi-server models, along with various algorithms and performance measures. It also addresses deterministic and probabilistic inventory control models, emphasizing their assumptions, limitations, and applications. Additionally, geometric programming is introduced as a technique for solving nonlinear optimization problems, highlighting its importance and wide range of applications.

Scheduling and sequencing in operations research

Single-server models schedule a set of jobs on one machine, while multi-server models schedule jobs on multiple machines. The primary goal is to determine the optimal processing sequence and machine assignment to meet objectives like minimizing total completion time or maximizing throughput.

Single-server scheduling and sequencing

This model involves scheduling n jobs on a single machine to optimize a specific objective. All jobs are available at the start, and the sequence of processing is the key decision.

• Sequencing rule: The method for determining the order in which jobs are processed.

• Performance measures: Metrics used to evaluate a schedule's effectiveness, including:

o Makespan (C_max): The completion time of the last job in the sequence. On a single machine the makespan is the same for every sequence, so other metrics are typically used.

o Total completion time (

∑Cjsum of cap C sub j

𝐶𝑗

): The sum of the completion times for all jobs.

o Maximum lateness (L_max): The maximum delay of any job relative to its due date.

o Number of tardy jobs (ΣU_j): The number of jobs completed after their due dates.

Common algorithms

• Earliest Due Date (EDD) Rule: This algorithm processes jobs in order of due date, from earliest to latest. It is optimal for minimizing the maximum lateness.

• Shortest Processing Time (SPT) Rule: This method sequences jobs in increasing order of processing time. It is optimal for minimizing the total completion time and the mean flow time.

• Moore's Algorithm: This greedy algorithm is optimal for minimizing the number of tardy jobs; see the sketch after this list.
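A minimal Python sketch of these three rules, assuming each job is a (name, processing_time, due_date) tuple; the function names and sample data are illustrative, not from the source.

```python
def edd(jobs):
    # Earliest Due Date: optimal for maximum lateness (L_max).
    return sorted(jobs, key=lambda j: j[2])

def spt(jobs):
    # Shortest Processing Time: optimal for total completion time.
    return sorted(jobs, key=lambda j: j[1])

def moore_hodgson(jobs):
    # Moore's algorithm: optimal for the number of tardy jobs.
    on_time, tardy, t = [], [], 0
    for job in edd(jobs):                 # start from the EDD sequence
        on_time.append(job)
        t += job[1]
        if t > job[2]:                    # a job is late: drop the longest so far
            longest = max(on_time, key=lambda j: j[1])
            on_time.remove(longest)
            t -= longest[1]
            tardy.append(longest)
    return on_time + tardy, len(tardy)

jobs = [("J1", 4, 6), ("J2", 2, 5), ("J3", 3, 8)]
print(moore_hodgson(jobs))  # -> ([('J2', 2, 5), ('J3', 3, 8), ('J1', 4, 6)], 1)
```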
Multi-server scheduling and sequencing

Multi-server models involve allocating and sequencing jobs across multiple machines, either
in parallel or in a specific layout like a flow shop or job shop. The main challenge is finding
the best combination of machine assignments and job sequences.

Parallel machine scheduling


• Identical Parallel Machines: Jobs can be processed on any machine, with the same
processing time on each.

o Longest Processing Time (LPT) Rule: A common heuristic in which the longest jobs are scheduled first on the machines that become available. It is a good approximation for minimizing makespan; see the sketch after this list.

• Unrelated Parallel Machines: The processing time for a job varies depending on
which machine it is assigned to.
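A minimal sketch of the LPT heuristic for identical parallel machines, assuming jobs are given as a plain list of processing times; names and data are illustrative.

```python
import heapq

def lpt_makespan(processing_times, m):
    # Greedy LPT: assign each job, longest first, to the least-loaded machine.
    machines = [(0, i, []) for i in range(m)]   # (load, machine id, assigned jobs)
    heapq.heapify(machines)
    for p in sorted(processing_times, reverse=True):
        load, i, assigned = heapq.heappop(machines)
        assigned.append(p)
        heapq.heappush(machines, (load + p, i, assigned))
    return max(load for load, _, _ in machines)  # approximate minimum makespan

print(lpt_makespan([7, 5, 4, 3, 3, 2], m=2))  # -> 12
```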

Flow shop scheduling

• In this model, jobs must be processed through a series of machines in a fixed order.

• Johnson's Algorithm: Provides an optimal solution for minimizing makespan in a two-machine flow shop by sequencing jobs based on their processing times on the two machines; see the sketch after this list.

• Heuristics: For more than two machines the problem is NP-hard, so heuristics like the Palmer or CDS algorithms are often used to find near-optimal solutions.
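A minimal sketch of Johnson's rule, assuming each job is a (name, time_on_M1, time_on_M2) tuple; the sample data is illustrative.

```python
def johnsons_rule(jobs):
    # Jobs faster on machine 1 go first, in increasing M1 order;
    # the rest go last, in decreasing M2 order.
    front = sorted([j for j in jobs if j[1] <= j[2]], key=lambda j: j[1])
    back = sorted([j for j in jobs if j[1] > j[2]], key=lambda j: j[2], reverse=True)
    return front + back

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2)]
print([name for name, _, _ in johnsons_rule(jobs)])  # -> ['C', 'A', 'B']
```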

Job shop scheduling

• This is the most general and complex shop scheduling model, where each job has a
unique processing path through a set of machines.

• Constraint Programming: This modeling approach uses constraints to represent machine availability, job precedence, and other rules to find a feasible schedule; see the sketch after this list.

• Metaheuristics: Algorithms such as Genetic Algorithms (GA) and Tabu Search are
used to find good, albeit not always optimal, solutions to these NP-hard problems.
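A minimal constraint-programming sketch of the job shop, assuming the third-party Google OR-Tools package (ortools) is available; the model follows the standard CP-SAT job shop formulation, and the sample data is illustrative.

```python
from ortools.sat.python import cp_model

def job_shop_makespan(jobs, horizon):
    # jobs: one list per job of (machine, duration) operations in fixed order.
    model = cp_model.CpModel()
    job_ends, per_machine = [], {}
    for j, ops in enumerate(jobs):
        prev_end = None
        for k, (m, dur) in enumerate(ops):
            start = model.NewIntVar(0, horizon, f"s{j}_{k}")
            end = model.NewIntVar(0, horizon, f"e{j}_{k}")
            per_machine.setdefault(m, []).append(
                model.NewIntervalVar(start, dur, end, f"i{j}_{k}"))
            if prev_end is not None:
                model.Add(start >= prev_end)     # precedence within the job
            prev_end = end
        job_ends.append(prev_end)
    for intervals in per_machine.values():
        model.AddNoOverlap(intervals)            # one operation per machine at a time
    makespan = model.NewIntVar(0, horizon, "makespan")
    model.AddMaxEquality(makespan, job_ends)
    model.Minimize(makespan)
    solver = cp_model.CpSolver()
    solver.Solve(model)
    return solver.Value(makespan)

# Job 0 visits M0 then M1; job 1 visits M1 then M0.
print(job_shop_makespan([[(0, 3), (1, 2)], [(1, 2), (0, 1)]], horizon=20))  # -> 5
```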

Scheduling and sequencing with queueing theory

These models analyze performance in stochastic multi-server systems, where customer arrivals and service times are variable.

• M/M/c Queue: A classic model with c identical servers, Poisson arrivals, and exponentially distributed service times. Performance measures, computed in the sketch below, include:

o Probability of all servers being busy.

o Average number of customers waiting in the queue.
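A minimal sketch of these two measures via the Erlang C formula, assuming a steady-state system (arrival rate below c times the service rate); names and figures are illustrative.

```python
from math import factorial

def mmc_measures(lam, mu, c):
    # lam: arrival rate, mu: service rate per server, c: number of servers.
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # server utilisation, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: need lam < c * mu")
    tail = a**c / (factorial(c) * (1 - rho))
    # Erlang C: probability all c servers are busy (arriving customer waits).
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    lq = p_wait * rho / (1 - rho)     # average number waiting in queue
    return p_wait, lq

p_wait, lq = mmc_measures(lam=8, mu=3, c=4)
print(round(p_wait, 3), round(lq, 3))
```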

• Fairness Algorithms: In multi-processor systems, algorithms like Round-Robin, Priority Scheduling, and Fair-Share Scheduling are used to manage competing processes and allocate CPU time fairly; see the sketch below.
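A minimal Round-Robin sketch, assuming all processes are ready at time zero and are given as (name, cpu_burst) pairs; the quantum and data are illustrative.

```python
from collections import deque

def round_robin(processes, quantum):
    # Each process runs for at most one quantum, then rejoins the queue.
    ready = deque(processes)
    clock, finish_times = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))   # preempted, back of the queue
        else:
            finish_times[name] = clock
    return finish_times

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# -> {'P3': 5, 'P2': 8, 'P1': 9}
```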

Deterministic inventory models

Deterministic inventory models in operations research assume known, constant, and predictable values for all inventory-related variables, such as demand, lead times, and costs, in order to determine optimal ordering policies and inventory levels. They are useful for businesses with stable conditions and established processes, but less applicable where demand or other factors are uncertain or volatile. Common models include the Economic Order Quantity (EOQ) model and variations for manufacturing or deteriorating items.

Assumptions
These models are built on specific assumptions about the inventory system:
• Known and Constant Demand: Demand for the product is certain, continuous, and
does not change over time.

• Constant Lead Time: The time it takes to receive new inventory is fixed and
predictable.

• Instantaneous Replenishment: When an order is placed, the entire lot of items is available immediately.

• No Uncertainty: There is no random variation or uncertainty in any of the system's parameters.

Common models

• Economic Order Quantity (EOQ):

The most fundamental model, EOQ determines the optimal order quantity to minimize total inventory costs, which balance ordering costs against holding costs; see the sketch after this list.

• Manufacturing Models:

These models consider a continuous production rate rather than instantaneous replenishment,
for situations where items are produced over a period rather than all at once.

• Models with Deteriorating Items:

These models incorporate deterioration (perishable goods that degrade over time) to adjust inventory management strategies for products with a limited shelf life.
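A minimal EOQ sketch; the demand and cost figures are illustrative.

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    # Classic EOQ: Q* = sqrt(2 * D * S / H) balances ordering and holding costs.
    q = sqrt(2 * annual_demand * order_cost / holding_cost)
    total_cost = (annual_demand / q) * order_cost + (q / 2) * holding_cost
    return q, total_cost

q, cost = eoq(annual_demand=12000, order_cost=50, holding_cost=2.4)
print(round(q), round(cost))  # -> 707, 1697
```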

When to Use

Deterministic models are best suited for situations where:

• Demand is relatively stable and predictable.

• The planning horizon is fixed and well-defined.

• There is a clear understanding of all costs (setup, holding, etc.) and lead times.

Limitations
• Real-World Inapplicability:

The assumptions of constant demand and no uncertainty rarely hold true in reality, making
deterministic models less effective for businesses in dynamic or highly variable market
conditions.

• Risk of Over- or Under-Ordering:


Blindly applying these models without considering potential changes can lead to excess
inventory or stockouts if factors that were assumed to be stable begin to fluctuate.
Probabilistic inventory control models

A probabilistic model in inventory management is a forecasting and control method that accounts for uncertainty in factors like demand and lead time. Instead of using fixed values, these models use probability distributions to represent variation, helping businesses decide how much to order and when to reorder to balance holding costs, stockout risks, and customer service levels.

• Uncertainty Acknowledged:

Unlike deterministic models, probabilistic models don't assume demand or lead times are
known with certainty.

• Probability Distributions:

They use probability distributions (e.g., normal curves) to model the randomness in demand
and lead time.

• Balancing Costs:

The models help determine optimal order quantities by considering the trade-offs between the
costs of overstocking (holding costs) and understocking (shortage costs).

• Stochastic Processes:

They leverage concepts from probability and statistics, including stochastic processes, to
analyze complex inventory scenarios.

How it Works

1. Data Analysis:

Probabilistic models use historical data or other factors to estimate the probability distribution
of future demand.

2. Cost Analysis:

The model quantifies the costs associated with having too much inventory (overage costs)
and not enough (underage costs).

3. Optimization:

By combining the probability distributions and cost analysis, the model determines an order
quantity that maximizes expected profit or minimizes expected costs.

4. Safety Stock:
The model's outputs often include a "safety factor" and a "safety stock" level, which are buffer amounts of inventory held to protect against unexpected demand spikes during lead time; see the sketch after this list.
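A minimal continuous-review sketch, assuming normally distributed daily demand and a constant lead time; parameter names and figures are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def reorder_point(daily_demand, sd_daily_demand, lead_time_days, service_level):
    # Safety factor z comes from the target service level; safety stock
    # buffers demand variability over the lead time.
    z = NormalDist().inv_cdf(service_level)
    safety_stock = z * sd_daily_demand * sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock, safety_stock

rop, ss = reorder_point(daily_demand=40, sd_daily_demand=10,
                        lead_time_days=9, service_level=0.95)
print(round(rop), round(ss))  # -> 409, 49
```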

Types of Probabilistic Models

• Single-Period Models:

Used for items with a single selling opportunity, such as seasonal products or fresh goods
where demand decisions are made once for a specific period.

• Multi-Period Models:

Used for items with intermittent or continuous demand over multiple periods, allowing for
ongoing inventory management and replenishment decisions.
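A minimal single-period (newsvendor) sketch, assuming normally distributed demand; the order quantity is the critical-ratio quantile, and all figures are illustrative.

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, unit_cost, price, salvage=0.0):
    # Order Q* = F^-1(Cu / (Cu + Co)): balance understocking vs. overstocking.
    cu = price - unit_cost        # underage cost: margin lost per unit short
    co = unit_cost - salvage      # overage cost: loss per unit left over
    critical_ratio = cu / (cu + co)
    return NormalDist(mu, sigma).inv_cdf(critical_ratio)

q = newsvendor_quantity(mu=100, sigma=20, unit_cost=6, price=10, salvage=4)
print(round(q))  # -> 109
```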

Benefits

• More Realistic:

The approach is more aligned with the reality of fluctuating demand, which is common in
most retail and manufacturing environments.

• Improved Decision-Making:

By quantifying uncertainty, these models provide a more robust foundation for inventory
decisions than deterministic approaches.

• Enhanced Reliability:

Working with ranges of values drawn from a statistical distribution can lead to more reliable inventory management than relying solely on exact, deterministic figures.

Geometric Programming
Geometric programming (GP) is an operations research (OR) technique for solving nonlinear
optimization problems where the objective function and constraints are posynomials (sums
of monomials). By transforming variables, GP problems can be converted into convex
problems, making them efficiently solvable. This technique is widely applied in various
fields, including engineering design (e.g., electronic component sizing, aircraft design),
chemical processes, and production planning, and it leverages the arithmetic-geometric mean
inequality to find optimal solutions.
How Geometric Programming Works

1. Problem Formulation: A standard geometric program involves minimizing a posynomial function subject to constraints that are themselves posynomials or monomials. A monomial is a function of the form c * x_1^a1 * x_2^a2 * ... * x_n^an, where c is a positive constant and the a_i are real exponents.

2. Convexification: The core principle of GP is its ability to transform a potentially non-convex problem into a convex one through a change of variables: substituting x_i = e^(y_i) and taking logarithms turns each posynomial into a convex function of the new variables y_i.

3. Duality: The solution to the primal GP can be derived by solving a corresponding dual problem, which often simplifies the process.

4. Applications: GP is used to find optimal design parameters in various practical problems, such as minimizing power consumption in communication systems, optimizing component sizes in integrated circuits, and designing chemical processes; see the sketch after this list.
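A minimal GP sketch, assuming the third-party cvxpy library, which can solve geometric programs directly via solve(gp=True): minimize the surface area of an open-top box subject to a minimum volume, where the objective is a posynomial and the constraint is a monomial. The volume figure is illustrative.

```python
import cvxpy as cp

# Box dimensions must be declared positive for a geometric program.
h = cp.Variable(pos=True)  # height
w = cp.Variable(pos=True)  # width
d = cp.Variable(pos=True)  # depth

surface = 2 * h * w + 2 * h * d + w * d    # posynomial objective (open top)
constraints = [h * w * d >= 100]           # monomial volume constraint

problem = cp.Problem(cp.Minimize(surface), constraints)
problem.solve(gp=True)   # cvxpy applies the log-transform convexification
print(float(h.value), float(w.value), float(d.value))
```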

• Monomial: A function like c * x^a with a positive coefficient c.

• Posynomial: A sum of monomials, for example, x^2 + 5y + 3z^0.5.

• Degree of Difficulty: In a geometric program, the number of posynomial terms minus the number of variables minus one is known as the "degree of difficulty"; when it is zero, the dual problem can be solved directly from its linear conditions.

• Fuzzy Geometric Programming: An extension that handles problems with uncertain or fuzzy parameters by converting them into crisp geometric programming problems using uncertainty distributions.

Why Geometric Programming is Important in OR

• Handles Nonlinearity: It provides an efficient method for solving certain highly nonlinear and non-convex problems that are difficult to address with other optimization techniques.

• Convex Formulation: The ability to transform the problem into a convex form makes it computationally efficient and guarantees a globally optimal solution.

• Wide Range of Applications: GP is a versatile tool applicable to a broad spectrum of engineering, scientific, and economic problems.
