
deposit_hagen: publication server of the university library

Mathematics and Computer Science

Dissertation

Oleh Sobeyko

Integrated Process Planning and Scheduling in Flexible Job Shops


Integrated Process Planning and

Scheduling in Flexible Job Shops

Dissertation

for the award of the degree of

Doktor der Naturwissenschaften (Dr. rer. nat.)

Faculty of Mathematics and Computer Science

of the FernUniversität in Hagen

submitted by

Dipl.-Math. Oleh Sobeyko

Hagen, 2018


The author declares in lieu of an oath that he has written the present work independently, without outside help, and without using any aids other than those indicated. Ideas taken directly or indirectly from external sources (including electronic sources) are marked as such without exception. The work has not previously been submitted, in the same or a similar form or in excerpts, as part of any other examination.

Hagen, 2018


UNIVERSITY OF HAGEN

Abstract

Faculty of Mathematics and Computer Science

Department of Computer Science

Integrated Process Planning and Scheduling in Flexible Job Shops

by Oleh Sobeyko

Process planning and scheduling are two important functions of real-life manufacturing

systems. The former function defines the set of processes to be used for manufacturing,

i.e., the set of operations needed to manufacture the products as well as the technological

precedence constraints between the operations. The latter function defines the temporal

assignment of the selected operations to the production resources. Traditionally, process

planning is carried out before scheduling and the selected process plans become fixed,

or only a small number of alternative process plans is provided. However, such an

approach can lead to low-quality or even infeasible solutions in dynamic production

environments. Hence, integrating process planning and scheduling is of high practical

interest. In this work, we present an iterative decomposition approach for integrated

process planning and scheduling. Sophisticated heuristics are proposed on the process

planning and the scheduling levels. We benchmark the proposed solution approach

on a number of problem instances found in the related literature and generated by

ourselves. Furthermore, a mixed-integer programming model for the integrated problem

is developed. We assess the performance of the heuristics based on the results provided

by IBM CPLEX for small-scale problem instances. Next, we assess the performance

of the proposed approach in a dynamic manufacturing environment modeled using

simulation software. Understanding the challenges of future manufacturing systems in view

of Industry 4.0, we describe on a high level how the developed solution approaches can

be incorporated into a multi-agent system by presenting different software agents and

the rules of inter-agent communication.


Acknowledgements

Since the beginning of this research, it has been a fascinating journey full of new challenges and discoveries, along with personal development. The research was accompanied by many moments of frustration that, in the end, gave way to a feeling of success.

Finally, the result of numerous efforts, trials and errors, and sleepless nights is presented

in this manuscript.

This work has been done under the wise and strict guidance of my scientific supervisor, Prof. Dr. Lars Mönch. As we know from Socrates, truth is born in dispute. The collaboration and many discussions with Prof. Dr. Mönch have been fruitful and made this research as well as the achieved results possible. I deeply appreciate the valuable advice of Prof. Dr. Mönch and thank him for showing me the right way.

I would also like to express my appreciation to Prof. Dr. Stéphane Dauzère-Pérès for

kindly agreeing to act as a reviewer of this thesis.

I am grateful to Prof. Dr. Paolo Brandimarte for providing his set of benchmark

problem instances for flexible job shop scheduling with the total weighted tardiness

as optimization criterion. These problem instances have contributed to improving the

quality of this research.

Many thanks to my colleagues with whom I was lucky to work as a research assistant at the University of Hagen. We sometimes had very interesting discussions on different topics related to optimization, and the ideas that came out of these discussions were very helpful when implementing the developed solution approaches.

It is easy to underestimate how our involvement in the research process affects those close to us. I sincerely thank my mother and my beloved for waiting patiently for so long, for their support and openness when I needed to talk to them about the work, and for being on my side. I also thank my friends for encouraging me not to give up and to see this work through to the end.

Köln, January 2018


Contents

Abstract
Acknowledgements
Contents
List of Figures
List of Tables
List of Abbreviations

1 Introduction

2 Process Planning and Scheduling
2.1 Process Planning
2.2 Scheduling
2.3 Integration of Process Planning and Scheduling
2.3.1 Discussion of Related Work
2.3.2 Requirements for an Integrated Approach
2.4 Problem Formulation and Analysis
2.4.1 Problem Setting
2.4.2 MIP Formulation

3 Heuristics for Scheduling Jobs in Flexible Job Shops
3.1 Problem and its Representation by a Graph
3.2 Discussion of Related Work
3.3 Scheduling Algorithms
3.3.1 List Scheduling
3.3.2 Local Search Approach
3.3.3 Decomposition Techniques
3.4 Computational Experiments
3.4.1 Design of Experiments
3.4.2 Measure of Relative Improvement
3.4.3 Computational Results

4 Heuristics for Integrated Process Planning and Scheduling
4.1 Problem and Related Work
4.2 Iterative Scheme for IPPS
4.3 Algorithms for Process Planning
4.3.1 VNS Scheme
4.3.2 Memetic Algorithm
4.4 Computational Experiments
4.4.1 Reference Heuristic
4.4.2 Design of Experiments
4.4.3 Implementation Issues and Parameter Settings
4.4.4 Computational Results

5 Simulation-based Assessment of the Integrated Heuristic in a Dynamic Manufacturing Environment
5.1 Problem Statement
5.2 Related Work
5.3 Rolling Horizon Setting
5.4 Simulation Framework
5.4.1 Simulation Module
5.4.2 Control Module
5.4.3 Architecture of the Process Planning and Scheduling Module
5.5 Computational Experiments
5.5.1 Design of Simulation Experiments
5.5.2 Implementation Issues and Parameter Settings
5.5.3 Computational Results

6 Implementation Aspects in Form of a Multi-Agent System
6.1 Motivation for an Agent-based Approach in Process Planning and Scheduling
6.1.1 Challenges
6.1.2 Related Work
6.1.3 Holonic Manufacturing Systems
6.1.4 PROSA Reference Architecture
6.1.5 Multi-Agent Systems
6.1.6 Holonic Agents
6.2 Integration Aspects for the Developed Heuristics in Multi-Agent Systems
6.2.1 Agent Types
6.2.2 Factory Agent
6.2.3 Resource Agents
6.2.4 Product Agents
6.2.5 Order Agents
6.2.6 Process Planner Agent
6.2.7 Scheduler Agent
6.2.8 Multi-Agent System Framework

7 Conclusions and Future Research Directions

Bibliography


List of Figures

2.1 Different BOMs.
3.1 An example of the disjunctive graph for two orders O11 and O21.
3.2 List scheduling in case of unrelated machines.
3.3 Transitions by the neighborhood structure NSLS.
3.4 An example of the solution graph G before a transition.
3.5 An example of the solution graph G′ after a transition.
3.6 Affected area in σ.
3.7 Corrected topological ordering σ′ after DTO.
3.8 Improvements δ depending on the due date tightness g for Set 2.
3.9 Improvements δ depending on the due date tightness g for Set 3.
3.10 Improvements δ for Set 4a depending on the flexibility level.
3.11 Improvements δ for Set 4b depending on the flexibility level.
3.12 Improvements δ for Set 4b and g = 0.75.
4.1 Iterative decomposition scheme for IPPS.
4.2 Solution quality of TPSRVNS for Set 3/small and different strategies.
4.3 Computing times of TPSRVNS for Set 3/small and different strategies.
4.4 Performance of the neighborhood structures for Set 3/moderate and S3.
4.5 Performance of different scheduling strategies for Set 3/moderate.
4.6 Computing times of different scheduling strategies for Set 3/moderate.
4.7 Impact of neighborhood structures and scheduling strategies on improvements (itermax = 250).
4.8 Impact of neighborhood structures and scheduling strategies on computing times (itermax = 250).
4.9 Impact of neighborhood structures and scheduling strategies on improvements for Set 3/large (itermax = 500).
4.10 Impact of neighborhood structures and scheduling strategies on computing times for Set 3/large (itermax = 500).
4.11 Impact of parallelisation on the improvements by TPSRVNS.
4.12 Impact of parallelisation on the computing times by TPSRVNS.
5.1 Rolling horizon time decomposition for IPPS.
5.2 Simulation framework.
5.3 Solution architecture of the decomposed IPPS approach.
5.4 The change of the entire process plan is allowed.
5.5 Only the change of subpart routing is allowed.
5.6 Estimated time t when the machine M will become available.
6.1 General architecture of a holon.
6.2 General scheme of an HMS.
6.3 Holonic agent architecture.


List of Tables

3.1 Main characteristics of the problem instances for FJS.
3.2 TWT values for Set 1.
3.3 Overall improvements δ for Set 2.
3.4 Overall improvements δ for Set 3.
3.5 Improvements δ depending on g for Set 4a.
3.6 Overall improvements δ for Set 4.
3.7 Average computing time per problem instance for Set 4a.
3.8 Improvements δ for Set 4b depending on g.
3.9 Performance of LS, VNSG, and SBH-LS depending on the computing time.
4.1 Scheduling strategies.
4.2 Main characteristics of the problem instances for IPPS.
4.3 Cmax results for Set 1.
4.4 Improvements δ for Set 2.
4.5 Performance of TPSRVNS for Set 2 depending on the due date tightness.
4.6 Improvements δ depending on the due date setting for Set 3/large.
4.7 Improvements δ depending on the ready time setting for Set 4.
5.1 Simulation scenarios for FJS.
5.2 Simulation scenarios for IPPS.
5.4 Simulation results for FJS scheduling.
5.5 Improvements δ relative to Zfjs3 for IPPS.
6.1 Characteristics of the factory agent.
6.2 Characteristics of resource agents.
6.3 Characteristics of product agents.
6.4 Characteristics of order agents.
6.5 Characteristics of the process planner agent.
6.6 Characteristics of the scheduler agent.


List of Algorithms

3.1 Generation of δik.
3.2 Updating heads and completion times of the operations.
3.3 Iterative LS algorithm.
3.4 Basic structure of SBH.
3.5 Reoptimization procedure in SBH.
3.6 Skewed VNS for SSP solution.
4.1 SRVNS Scheme.
4.2 TPSRVNS Scheme.
4.3 GA for IPPS.
5.1 Rolling Horizon Approach.


List of Abbreviations

ACO Ant colony optimization

AI Artificial intelligence

AMS Agent management system

ATC Apparent tardiness cost

BFS Breadth-first search

BOM Bill of materials

CAD Computer-aided design

CAPP Computer-aided process planning

CLPP Closed loop process planning

CPU Central processing unit

CS Combined scheduler

DAG Directed acyclic graph

DPP Distributed process planning

DTO Dynamic topological ordering

EDD Earliest due date

FIFO First-in, first-out

FJS Flexible job shop

GA Genetic algorithm

GALS GA with LS

GPU Graphics processing unit

HMS Holonic manufacturing system

IPPS Integrated process planning and scheduling

LS Local search


MAS Multi-agent system

MIP Mixed-integer programming

MOD Modified operation due date

NLPP Non-linear process planning

NP-hard Non-deterministic polynomial-time hard

ODD Operation due date

PROSA Product-resource-order-staff architecture

RAM Random access memory

RPT Raw processing time

SA Simulated annealing

SBH Shifting bottleneck heuristic

SDR Simple dispatching rules

SPT Shortest processing time

SRVNS Skewed reduced VNS

SSP Scheduling subproblem

TCP/IP Transmission control protocol/internet protocol

TPSRVNS Tailored parallelized SRVNS

TS Tabu search

TT Total tardiness

TWT Total weighted tardiness

VNS Variable neighborhood search

VNSG Global VNS

WFIFO Weighted FIFO

WMOD Weighted MOD

WODD Weighted ODD

WSPT Weighted SPT

XML Extensible markup language


Chapter 1

Introduction

The purpose of process planning is to determine a set of operations which has to be

executed to manufacture a product. It specifies the precedence constraints between

the operations and the routing during the production (cf. Khoshnevis and Chen 1991,

Park and Choi 2006). Scheduling, however, deals with assigning the available resources

to the operations specified in a process plan to optimize performance measures of interest.

It results in a schedule, i.e., the start times of the operations on the machines are

determined as well as the sequences of the operations. Process planning is often carried

out before scheduling. A common assumption of process planning is that the production

resources have infinite capacity and that they are available (cf. Huang et al. 1995). In

manufacturing systems, however, the availability of resources typically changes over

time. There is often a delay between the process planning and scheduling phases that

results in infeasible process plans, i.e., plans that cannot be executed, and requires

modifying up to 30% of the existing process plans (cf. Li et al. 2010b). Hence, integrated

process planning and scheduling (IPPS) functionality is desirable. Product development

and manufacturing will be closely integrated according to the Industry 4.0 vision (cf.

Lasi et al. 2014). IPPS aims to avoid process plans which cannot be executed by making

the decisions in such a way that the result of the schedule execution is anticipated.

Because IPPS includes scheduling as a subproblem, most of the IPPS problems are

NP-hard.
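To make these notions concrete, a process plan can be viewed as a set of operations together with a precedence relation, and the plan is executable only if that relation is acyclic. The following minimal sketch (operation names are purely illustrative, not the notation used later in this work) checks this via a topological ordering:

```python
from collections import defaultdict, deque

def topological_order(operations, precedence):
    """Return a topological ordering of the operations, or None if the
    precedence constraints are cyclic (i.e., the plan is not executable)."""
    indegree = {op: 0 for op in operations}
    successors = defaultdict(list)
    for before, after in precedence:  # 'before' must finish before 'after' starts
        successors[before].append(after)
        indegree[after] += 1
    queue = deque(op for op in operations if indegree[op] == 0)
    order = []
    while queue:
        op = queue.popleft()
        order.append(op)
        for succ in successors[op]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    return order if len(order) == len(operations) else None

plan = ["cut", "drill", "polish"]
prec = [("cut", "drill"), ("drill", "polish")]
print(topological_order(plan, prec))  # ['cut', 'drill', 'polish']
```

A cyclic precedence relation, by contrast, makes the function return None, which corresponds to an infeasible plan.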

We are interested in a specific IPPS problem where each product has a set of alternative

bills of materials (BOMs). For each subpart of a BOM, a set of alternative routes is

given. Real-world problems in the food, chemical, or self-adhesive plastic foils industries

belong to this class of problems. We make use of an iterative decomposition approach

which allows for solving the integrated problem on two levels (cf. Kim and Egbelu 1999)

and develop sophisticated heuristics for each of them. When a BOM and a route for

each product and for each subpart are chosen, we obtain a flexible job shop (FJS)

scheduling problem. Because high on-time delivery performance is crucial in today's fierce competition among companies, we consider the total weighted tardiness (TWT)


as performance measure. The proposed heuristic uses variable neighborhood search

(VNS) techniques on the process planning level to select BOMs and routes. Each

move of the VNS approach requires solving a large-scale FJS problem instance. We

apply efficient iterative heuristics based on the local search (LS) approach proposed by

Sobeyko and Mönch (2016) for this purpose. Note that our approach allows for solving

the process planning and the resulting scheduling problems on separate computers.

The proposed methods for IPPS are described in (Sobeyko and Mönch 2017). We show

by carrying out computational experiments that we are able to outperform a genetic

algorithm (GA)-type approach from the literature that simultaneously makes process

planning and scheduling decisions.
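For reference, the TWT objective is computed from the job completion times as the sum over all jobs of the weighted positive lateness; a minimal sketch (the tuple layout is illustrative):

```python
def total_weighted_tardiness(jobs):
    """TWT = sum_j w_j * max(0, C_j - d_j), where C_j is the completion
    time, d_j the due date, and w_j the weight of job j."""
    return sum(w * max(0, completion - due) for completion, due, w in jobs)

# (completion time C_j, due date d_j, weight w_j)
jobs = [(12, 10, 2.0),   # 2 time units late, weight 2.0 -> contributes 4.0
        (8, 10, 1.0),    # early -> contributes 0
        (15, 11, 0.5)]   # 4 time units late, weight 0.5 -> contributes 2.0
print(total_weighted_tardiness(jobs))  # 6.0
```

Early jobs contribute nothing, so minimizing TWT rewards schedules in which heavily weighted jobs finish on time.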

To the best of our knowledge, only a few problem instances for IPPS dealing with

TWT can be found in the literature (cf. Sobeyko and Mönch 2017). Hence, we present

several sets of problem instances which are used to benchmark the proposed algorithms.

These problem instances both for FJS and IPPS problems are publicly available under

http://p2schedgen.fernuni-hagen.de/index.php?id=174.

Recent studies such as (Mönch and Rose 2004, Mönch and Drießel 2005, as well as

Sourirajan and Uzsoy 2007) suggest that the performance of deterministic scheduling

approaches has to be assessed in dynamic and stochastic manufacturing environments.

This approach allows for a better assessment of the developed methods in practice.

Therefore, we present a simulation framework which is adapted to deal with IPPS

problems. We apply the rolling horizon approach to combine the simulation and the

IPPS heuristics (cf. Sethi and Sorger 1991, Pinedo 2008). The simulation models for this

purpose can be generated based on the static problem instances by adding information

about order inter-arrivals.
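The rolling horizon idea can be sketched as a loop that re-plans at fixed decision points and executes only the near-term part of each plan. The sketch below is a simplified illustration with placeholder names; the solver argument stands in for the actual IPPS heuristic, not the implementation used in Chapter 5:

```python
def rolling_horizon(orders, period, num_periods, solve_ipps):
    """At each decision point t, re-plan all pending orders that have
    arrived; only the decisions falling into the next period are executed."""
    executed, pending = [], list(orders)
    for k in range(num_periods):
        t = k * period
        visible = [o for o in pending if o["release"] <= t]
        schedule = solve_ipps(visible, t)              # full re-plan at time t
        frozen = [s for s in schedule if s["start"] < t + period]
        executed.extend(frozen)                        # execute only this window
        started = {s["order"] for s in frozen}
        pending = [o for o in pending if o["id"] not in started]
    return executed

# toy stand-in solver: sequence the visible orders one per time unit, smallest id first
def greedy(visible, t):
    ranked = sorted(visible, key=lambda o: o["id"])
    return [{"order": o["id"], "start": t + i} for i, o in enumerate(ranked)]

orders = [{"id": 1, "release": 0}, {"id": 2, "release": 0}, {"id": 3, "release": 2}]
print(rolling_horizon(orders, period=1, num_periods=4, solve_ipps=greedy))
# orders 1, 2, and 3 are executed at times 0, 1, and 2
```

Decisions beyond the frozen window are discarded and re-optimized at the next decision point, which is what lets the approach react to newly arriving orders.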

In view of Industry 4.0 the increasing complexity of future manufacturing systems along

with the increasing range of more complex products and decreasing lot sizes require a

new paradigm of production control. Next generation information systems are needed

to shorten production cycles, to reduce time-to-market, and to react dynamically to

demand changes. Recent advances in IT suggest that multi-agent systems (MAS) are a suitable technology for developing modern production control systems (cf.

Mönch and Stehli 2006). According to Deen (2003), MAS can be considered a part

of holonic manufacturing systems (HMS). Such systems are based on principles of

autonomy, self-organization, and collaboration to reach a common goal. With this in

mind, we discuss on a high level implementation aspects for integrating the proposed

IPPS solution approaches into a MAS. Note that the modular agent-based solution

architecture presented in Subsection 5.4.3 enables, in principle, the implementation of

the approach by means of MAS. Based on the product-resource-order-staff architecture

(PROSA) for holonic manufacturing (cf. Van Brussel et al. 1998), we present a number

of different agents required for our production control system. We also highlight the


reasoning parts of the agents and define how the agents have to interact with each

other.

This work is organised as follows. We introduce the IPPS problem in Chapter 2.

First, we discuss the related literature in Subsection 2.3.1 and formulate requirements

for an integrated approach in Subsection 2.3.2. The integrated problem is formulated

in Subsection 2.4.1, and an appropriate mixed-integer programming (MIP) model is

developed in Subsection 2.4.2.

Next, we consider approaches for solving FJS scheduling problems in Chapter 3. The

representation of the FJS problems, formulated when assessing different process plans,

in the form of a disjunctive graph is described in Section 3.1. We discuss the related literature

in Section 3.2. In Section 3.3, we investigate different suitable scheduling approaches,

especially an efficient LS heuristic (see Subsection 3.3.2). A computational study for

FJS problems based on different sets of problem instances is described in Section 3.4.

Heuristics for IPPS are presented in Chapter 4. Specific literature on methods for IPPS

is discussed in Section 4.1. We then describe an iterative decomposition approach for the

integrated problem in Section 4.2. Algorithms used for process planning are developed in

Section 4.3. A memetic algorithm from the literature used for performance comparison

purposes is briefly described in Subsection 4.3.2. Finally, the computational experiments

demonstrating the benefits of the proposed integrated approach are described in

Section 4.4.

After benchmarking the heuristic for IPPS based on static problem instances, we investigate the performance of the proposed approach in a simulated dynamic manufacturing

environment in Chapter 5. First, we state the problem in Section 5.1 and discuss the

related literature in Section 5.2. Then we describe the rolling horizon setup in Section 5.3

and present the simulation framework for emulating the real production environment in

Section 5.4. Simulation experiments showing the benefits of integrating process planning

and scheduling in a dynamic production environment are presented in Section 5.5.

Different aspects of implementing the proposed heuristics by means of a MAS are

discussed in Chapter 6. The motivation for using the MAS technology is given in

Section 6.1 whereas several aspects of integrating the heuristics from Chapter 3 and

Chapter 4 are discussed in Section 6.2. We describe a set of relevant agents and how

they interact with each other.

Final conclusions and possible future research directions are presented in Chapter 7.


Chapter 2

Process Planning and Scheduling

Process planning and scheduling are two important functions of manufacturing systems.

While process planning determines which operations have to be executed to manufacture

certain products, the purpose of scheduling is to determine how these operations have to

be assigned to the manufacturing resources over time. We start this chapter by describing

some aspects of process planning in Section 2.1. In Section 2.2, we give an overview of

the relevant scheduling approaches. Next, we consider the topic of integrating process

planning and scheduling in Section 2.3. We discuss the related work in Subsection 2.3.1

and formulate requirements for an integrated approach in Subsection 2.3.2. Finally, we

state the considered integrated problem in Subsection 2.4.1 and develop an appropriate

MIP model in Subsection 2.4.2.

2.1 Process Planning

Process planning is used to determine the set of operations which have to be

executed to manufacture a product. In this sense, process planning defines the

manufacturing design of products. Except the operation set, process planning also

specifies the technological precedence constraints between the operations and the routing

during the manufacturing process (cf. Khoshnevis and Chen 1991, Park and Choi 2006).

Computer-aided process planning (CAPP) systems play a key role when integrating

design and manufacturing systems properly based on the available resources and

design constraints (cf. Ahmad et al. 2001). A comprehensive overview of different CAPP

approaches is given by Alting and Zhang (1989). According to this work, process

planning can be defined as "the systematic determination of methods by which a product

is to be manufactured economically and competitively" and consists of the following

steps:

• interpretation of product design data;

• selection of machining processes;


• selection of machine tools;

• determination of fixtures and surfaces for measurements;

• sequencing the operations;

• selection of inspection devices;

• determination of production tolerances;

• determination of proper cutting conditions;

• calculation of the overall times;

• generation of process sheets including NC data.

According to Marri et al. (1998), usually there exist several alternative feasible process

plans for manufacturing the same product or its subparts. We point out that

the process planning approach described in Section 2.3 assumes that there is already a

pre-generated list of BOMs and subpart routes which can be feasibly combined and used

in the manufacturing system.

Investigations show that efficient CAPP systems can reduce manufacturing costs and

times by up to 30% and 50%, respectively. At the same time, frequent changes in the

production environment along with the need to reduce manufacturing times in complex

manufacturing systems require the support of process planning systems capable of

analyzing manufacturing capacities and rapidly deciding on the best possible product

design to use. Another challenge is the fact that in some industrialized countries such as

the USA and the UK, there is a lack of skilled process planners (cf. Marri et al. 1998).

Thus, more industrial companies have adopted CAPP to support the integration of design and

production.

There are three general approaches to CAPP: variant, generative, and semi-generative.

Variant process planning assumes that similar products require similar plans. It is based

on the concept of product families. Thus, creating a new process plan is based on

certain modifications of the existing plans. However, this approach has difficulties in

accommodating different combinations of geometry, size, precision, material, quality and

shop floor loading. Generative approaches are able to create new process plans based on

input about the orders/subparts. These systems can perform analysis and make decisions

on their own. However, such systems are much more complex and difficult to develop.

The semi-generative approach combines features of the previous two approaches. A pre-

process plan is developed and modified before the plan is used in a real production

environment. The process plan can be examined and modified if it does not fit or is not

feasible. Note that semi-generative approaches are considered as a temporary solution

until the purely generative approaches become mature and advanced enough.

CAPP systems can be combined with computer-aided design (CAD) systems. They

analyze the data from CAD and generate process plans based on the CAD model,


possibly resulting from simpler models. Knowledge about the required manufacturing processes is taken into account. Full-featured CAPP systems which employ CAD,

knowledge-based or expert systems are beyond the scope of this research.

More recent developments in the domain of process planning take advantage of artificial intelligence (AI). Knowledge-based or expert systems together with the

generative approach are used to prototype CAPP systems. An example is a knowledge-

based engineering approach for process planning for automotive panels proposed by

Tsai et al. (2010). This approach automates process planning by creating new designs

of the panels.

Optimization techniques in CAPP have gained considerable attention recently. Optimal process plans are generated on the basis of time, cost, or a weighted combination of

these factors. Among the most popular methods are branch-and-bound, direct search,

gradient search, dynamic programming, and GAs. As stated in the review of assembly process

planning techniques by Wang et al. (2009), heuristic approaches like simulated annealing

(SA), GAs, and ant colony optimization (ACO) are among the most frequently used ones

in the field of process planning. It is commonly believed that collaborative approaches

based on multi-agent technology will be employed in future process planning systems.

In this work, we develop a VNS-based algorithm for process planning. We define proper

neighborhood structures. Some of them are able to consider information about the status

of the shop floor to a certain extent. This algorithm is able to search the solution space

more efficiently by considering several process plans in parallel.

2.2 Scheduling

Scheduling in FJS is of high practical interest since parallel machines are typical

for many manufacturing environments. Semiconductor wafer fabrication facilities (wafer

fabs), for instance, can be modeled as FJS with a number of additional constraints (cf.

Monch et al. 2011). Other examples are factories producing corrugated board products

as well as factories producing self-adhesive tapes.

One of the main properties of such systems is that jobs can be processed on parallel

machines, i.e., machines that offer the same functionality. The machines can be organized

in groups. We assume that all machines within any machine group have the same

functionality and the machine groups have disjoint functionality. This implies that there

are no two different machine groups which can perform the same operation. According

to Pinedo (2008), we consider two types of machine groups: groups of identical machines

denoted by Pm, and groups of unrelated machines denoted by Rm.

One of the main contributions of this work is an efficient iterative LS approach for

scheduling in FJS with the TWT performance measure. The heuristic is based on


the disjunctive graph representation and the neighborhood search approach. When

applying such techniques, the evaluation of moves is costly in case of due date-

oriented performance measures because it requires calculating the longest paths from

the source node to the nodes associated with the last operation of each job (cf.

Kreipl 2000). Mati et al. (2011) propose an efficient approach to evaluate moves for job

shop scheduling problems with an arbitrary regular criterion. Garcia-Leon et al. (2015)

propose an LS algorithm to optimize regular criteria in FJS scheduling problems.

A tabu thresholding-based algorithm with improvement and diversification moves is

applied. Two neighborhood structures are proposed for changing operation-to-machine

assignment and operation sequencing. An efficient method to check the feasibility

of moves is proposed. A lower bound approach is applied for fast evaluation of the

resulting solutions. Makespan is the objective function of interest for FJS scheduling,

whereas the TWT is considered in the context of classical job shops without alternative

machines. This motivates us to develop an efficient LS approach which extends the LS

heuristic by Mati et al. (2011) to the case of FJS with the TWT objective function. In

addition, we consider orders with non-linear structure described by their BOMs. The

neighborhood structures proposed by Dauzere-Peres and Paulli (1997) are applied in

the present situation. An evaluation scheme for moves based on a dynamic topological

ordering (DTO) of the disjunctive graph is proposed. Note that this approach makes it possible to obtain the resulting value of the objective function efficiently. Escaping from local

optima is ensured using the SA acceptance criterion (cf. Kirkpatrick et al. 1983).

We consider the shifting bottleneck heuristic (SBH) which decomposes the consid-

ered scheduling problem into a series of smaller scheduling subproblems (SSP). We

hybridize the SBH with the proposed LS and the VNS approaches. In addition,

list scheduling techniques for a variety of due date-oriented dispatching rules are

discussed. The proposed heuristics are compared based on computational experiments

for problem instances available in the literature. We point out that most of the

problem instances from the literature consider makespan as an optimization criterion

(cf. Fattahi et al. 2007, Hurink et al. 1994, Pacino and Van Hentenryck 2011). Since we

are interested in developing scheduling approaches with the TWT performance measure,

we extend some of the standard instances with relevant information. We also present a

set of new large-size problem instances. While it seems that different sets of benchmark

instances for FJS scheduling problems with makespan criterion are publicly available

(cf. Behnke and Geiger 2012), to the best of our knowledge this is not the case when the

TWT criterion is applied. Therefore, we generate such problem instances and use them

in our computational experiments.

The LS scheme is able to determine high-quality solutions within a short amount

of computing time. If the processing flexibility increases, i.e., the number of parallel

machines grows, the improvement of the advanced techniques compared to the list

scheduling techniques decreases. In case of identical machines, the SBH-type algorithms


outperform the LS scheme with respect to the TWT. However, the SBH requires more

computing time.

The proposed heuristics as well as the computational results presented in this work are published in Sobeyko and Monch (2016).

2.3 Integration of Process Planning and Scheduling

Process planning and production scheduling are two very important functions in modern manufacturing systems (cf. Tan and Khoshnevis 2000, Park and Choi 2006,

Leung et al. 2010). The process planning function determines how a product has to be

manufactured according to its design specifications. In the most general case, it defines

a set of operations (a process plan) which have to be performed to finish some part or

the complete product. Given a process plan, the manufacturing resources have to be

assigned to the selected operations. Afterwards, the operations have to be scheduled on

the allocated resources so that the considered performance criterion is optimized.

During process planning, it is usually assumed that the underlying resources are

not limited in view of capacity and that they are idle (cf. Huang et al. 1995). In a

dynamic manufacturing environment, the availability of resources can change over time.

Additionally, there are capacitive constraints which have to be considered during the

scheduling process. Another problem is the time delay between the process planning

and the scheduling phases. Because of the dynamic behavior of the manufacturing

environment, the constraints which are used for generating the process plan can become

outdated. To react appropriately in such situations, the process planning and the

scheduling functions have to be integrated. IPPS aims to eliminate such infeasibilities

and to take into account the available production resources and the current shop floor

status during the process planning, as well as to generate an appropriate high-quality

schedule in view of the performance criterion.

In this work, we present a new VNS-based iterative approach for IPPS in manufacturing

systems which can produce the same products in several different ways. Relevant

neighborhood structures on the process planning level are proposed. Parallelization of

the VNS as well as different scheduling strategies are discussed. The proposed solution

algorithm is benchmarked against a GA found in the related literature. The IPPS

problem is also formulated using a MIP model. New problem instances for benchmarking

different solution approaches are presented.

2.3.1 Discussion of Related Work

In this subsection, we review related work for IPPS with respect to problem setting

and solution approaches. Hutchinson and Pflughoeft (1994) discuss different types of


manufacturing flexibility. Process plan flexibility (see Subsection 2.3.2) is one of

them. Balakrishnan and Geunes (2000) propose to use alternative assembly structures

in production planning, while job shops with alternative routes are researched by

Brandimarte and Calderini (1995). Surveys for IPPS are provided in the works of

Tan and Khoshnevis (2000), Li et al. (2010a), Phanden et al. (2011). Three approaches

for IPPS are differentiated with respect to how information is taken into account

during decision-making, namely non-linear process planning (NLPP), closed loop process

planning (CLPP), and distributed process planning (DPP). NLPP is based on a

static shop floor situation (cf. Zhang and Merchant 1993). A set of process plans is

generated in the NLPP approach for each product. A priority is assigned to each

of these plans. The process plan with the highest priority is selected first. If it is not appropriate for the current shop floor status, the plan with the second-highest priority is chosen. This procedure can be repeated. Alternative process plans

and precedence relations can be represented in different ways. Petri nets can be

used (cf. Kis et al. 2000, Capek et al. 2012) for this purpose as well as network-type

representations (cf. Shao et al. 2009), or representations in the form of AND/OR graphs (cf.

Lihong and Shengping 2012). However, we avoid integrated representations, because it

is easier to change separate sets of BOMs and routes in the case of new products in a

dynamic setting. A feedback mechanism based on the current shop floor status is crucial

for the CLPP approach (cf. Zhang and Merchant 1993). In contrast to NLPP, new

process plans are generated dynamically. Finally, DPP performs both process planning

and scheduling tasks simultaneously (cf. Huang et al. 1995). It consists of the initial

planning, matching planning and final planning phases. In the first phase, primary

process plans and a schedule are determined. This phase is performed off-line. The

process plans and the processing capabilities of the machines are matched to adjust the

plans to the shop floor status in the second phase. In the final phase, suitable process

plans and final schedules are derived dynamically in real-time as soon as a product has

to be manufactured.

2.3.2 Requirements for an Integrated Approach

We formulate some requirements for the solution approach presented later in this work.

This gives us a clearer picture of the solution's design.

The system for IPPS has to have the following properties:

1. Development of sophisticated methods on both process planning and scheduling

levels has to be possible.

2. The implemented process planning and/or scheduling software system has to be

logically and physically decoupled.


3. At the same time, the algorithms have to share the same common data model for

efficient information exchange.

4. Decoupled maintenance of the data layer (see Chapter 5) and the process

planning and scheduling algorithms has to be possible.

5. It has to be possible to implement the process planning and scheduling module

using a different software technology than for the data layer.

6. The integrated approach has to provide for the possibility of parallelization.

7. It is required that the performance of the system is acceptable in view of computing

times.

8. The system has to be easily maintainable.

9. The effort for implementing new solution methods has to be limited.

10. The implemented algorithms have to be easily interchangeable or exchangeable

following the "plug-and-play" approach.

11. The possibility to modify the system during run time is desired.

12. It has to be possible to consider the dynamically changing status of the

manufacturing resources.

13. There has to be a possibility to execute the process planning and scheduling module

as well as the data layer on different nodes in a network using dedicated server

hardware.

To support achieving most of these requirements, we propose the modular solution architecture described in Subsection 5.4.3.

2.4 Problem Formulation and Analysis

2.4.1 Problem Setting

We start by describing the machine environment. Production systems can have

different layouts and be of different types depending on the manufactured products

(cf. Pinedo 2008). In practical applications, FJS environments play an important

role (cf. Pinedo 2008, Scrich et al. 2004, Brandimarte 1993). In such a machining

environment, there are multiple machines which are able to execute the same set of

operations, thus increasing the flexibility of the manufacturing process. According to

Hutchinson and Pflughoeft (1994), this flexibility is called machine flexibility. It suggests

that each machine specifies a set of operation codes which it is capable of processing. Based


on some criteria, such as the smallest processing time, it can be decided which of the

alternative machines these operations have to be assigned to.

We consider a processing environment consisting of a number of different machine groups

M = {M1, ..., M|M|}. (2.1)

Each machine group Mi, i = 1, 2, ..., contains a predetermined number of alternative

machines that offer the same functionality. The machine groups can consist of identical

or unrelated machines. According to Pinedo (2008), such machine groups are denoted

by Pm and Rm, respectively. We assume that all machine groups in M are of the same

type, i.e., either all machine groups are of type Pm or they are of type Rm. For the Rm

environment, we do not consider Pm as its special case.

Let O(Mi) denote the set of operations which any of the machines from Mi is capable of performing, i = 1, 2, ... We assume that the machine groups have pairwise disjoint

functionality, i.e., no two machines from different machine groups can perform the same

operations:

O(Mi) ∩ O(Mj) = ∅, ∀i, j : i ≠ j. (2.2)
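The disjointness property (2.2) is easy to verify for a given machine-group configuration. The following is a minimal Python sketch, not part of the system described in this work; the group and operation names are hypothetical:

```python
def groups_pairwise_disjoint(ops_by_group):
    """Check property (2.2): no operation is offered by two different machine groups."""
    seen = set()
    for ops in ops_by_group.values():
        if seen & set(ops):      # a shared operation violates O(Mi) ∩ O(Mj) = ∅
            return False
        seen |= set(ops)
    return True

# Hypothetical configuration with three disjoint machine groups.
config = {"M1": {"drill", "ream"}, "M2": {"mill"}, "M3": {"grind", "polish"}}
assert groups_pairwise_disjoint(config)
```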

In case the machine groups are of type Rm, the processing time of an operation can

vary depending on the combination of the selected machine and the operation itself.

We assume that all machines are available at time zero and no breakdowns are

considered. No preemption is possible if a machine is processing an operation. The

processing times are deterministic.

We continue by describing the structure of the considered products. We denote the

products by P1, ..., Pr. A BOM describes the set of subparts Di which the corresponding

product consists of. BOMs specify technological precedence constraints between the

subparts. We consider BOMs with convergent structure. Each BOM is represented by

a directed acyclic graph (DAG). An example is depicted in Figure 2.1. Product P can

be assembled from subparts D1 and D2 in Figure 2.1(a), while subpart D2 is assembled

from D1 and D4. Subpart D2 can be substituted by subpart D5 such that the resulting

product P has the same characteristics. This is reflected in Figure 2.1(b).

Any product Pi can have alternative BOMs Biz assigned to it. We assume that regardless

of the selected BOM, the resulting product has the same characteristics. Considering

multiple BOMs for the same product is reasonable because it is possible that several

subparts can be supplied by different manufacturers. We assume that all orders of any

product Pi are manufactured using the same BOM.

Each subpart of a BOM is characterized by its route. A route is a sequence of operations

that have to be performed consecutively to produce the subpart. Each operation has


Figure 2.1: Different BOMs. (a) BOM with four subparts. (b) BOM with two subparts.

to be performed on one of the machines from the machine group assigned to it. Each

subpart can have alternative routes. For the sake of simplicity, we assume that all routes

corresponding to a subpart are permutations of the same set of operations. This enables,

for example, an easier estimation of the processing time required to manufacture a

subpart. Each BOM can contain several subparts of the same type (see Figure 2.1). It

is not required that all these subparts follow the same route.

The s-th order of product Pi is denoted by Ois. Order Ois has a ready time ris, a weight

wis, and a due date dis. The completion time of Ois is denoted by Cis. We are interested

in minimizing the TWT objective function given by:

TWT = ∑i,s wisTis, (2.3)

where Tis = (Cis − dis)+ is the tardiness of Ois. We set x+ = max (x, 0) for abbreviation.
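For illustration, the objective (2.3) can be computed directly from order data. This minimal Python sketch is illustrative only; the tuple encoding of orders is an assumption, not part of the model:

```python
def total_weighted_tardiness(orders):
    """TWT as in (2.3): sum over all orders of w_is * (C_is - d_is)^+."""
    return sum(w * max(C - d, 0) for w, C, d in orders)

# Hypothetical orders given as (weight, completion time, due date) triples;
# only the first order finishes after its due date: TWT = 2 * (10 - 8) = 4.
orders = [(2, 10, 8), (1, 5, 7), (3, 12, 12)]
assert total_weighted_tardiness(orders) == 4
```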

The following decisions have to be made to solve the problem:

1. For each product, a BOM has to be selected.

2. One of the alternative routes for each of the subparts within the selected BOM

has to be chosen.

3. The operations resulting from the decisions made in Step 1 and Step 2 have to be

scheduled on the machines.
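The first two decision levels above can be captured in a small data structure. The following Python sketch is purely illustrative; the product, BOM, subpart, and route identifiers are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class LocalProcessPlan:
    """Decisions 1 and 2 for a single product Pi: a BOM and one route per subpart."""
    product: str
    bom: str                                    # selected alternative BOM (decision 1)
    routes: dict = field(default_factory=dict)  # subpart -> selected route (decision 2)

# A global process plan PP = {PP1, ..., PPr} holds one local plan per product;
# the operations induced by these choices are then scheduled (decision 3).
pp = {
    "P1": LocalProcessPlan("P1", bom="B11", routes={"D1": "r1", "D2": "r2"}),
    "P2": LocalProcessPlan("P2", bom="B21", routes={"D3": "r1"}),
}
```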

Note that the selection of a concrete BOM Biz and of concrete routes for the subparts

of Biz form a process plan PPi for product Pi. A process plan PP for the given set of

products contains one process plan for each product, i.e., PP = {PP1, ..., PPr}. In the

remainder of this work, we use the notion of a process plan and suppress the fact that it is always related to a given set of products. We refer to PPi as a local, and to PP as a global

process plan.


The resulting problem is an FJS scheduling problem that takes into account different

BOMs and alternative routes. Using the α|β|γ notation, the problem is formulated as

FJm|ris, BOM, alter|TWT, (2.4)

where BOM and alter refer to alternative BOMs and routes, respectively. Problem

(2.4) is strongly NP-hard because it contains the strongly NP-hard scheduling problem

1||TWT (cf. Lawler 1977) as a special case. Hutchinson and Pflughoeft (1994) differen-

tiate between choosing an alternative set of operations (process flexibility), selecting the

sequence of operations (sequence flexibility), and choosing alternative machines (machine

flexibility). Problem (2.4) covers all these types of flexibility.

2.4.2 MIP Formulation

We propose a MIP model for problem (2.4). A related model is described by

Kim and Egbelu (1999) for Jm|alter|Cmax, where Cmax is the makespan. We start by

presenting indices, parameters, and decision variables of the model.

Indices:

i index associated with products.

s index associated with orders.

z index of the selected BOM.

l index of the selected route.

j index of a subpart within the selected BOM.

k operation index.

n index associated with a machine within a machine group.

Parameters:

Yizj1j2 = 1, if subpart j1 precedes subpart j2 in BOM z of Pi; 0, otherwise.

p^Misjk_isjk processing time of operation k of subpart j in order Ois, depending on the underlying machine.

dis due date of order Ois.

wis weight of order Ois.

ris ready time of order Ois.

G a very large number.


Decision variables:

Biz = 1, if BOM z is selected for Pi; 0, otherwise.

Rizjl = 1, if route l is selected for subpart j in BOM z for Pi; 0, otherwise.

Misjkn = 1, if operation k of subpart j of order Ois is processed on the n-th machine of its machine group; 0, otherwise.

Oi1i2s1s2j1j2k1k2 = 1, if operation k1 of subpart j1 of order Oi1s1 precedes operation k2 of subpart j2 of order Oi2s2 on the same machine; 0, otherwise.

sisjk start time of operation k of subpart j in order Ois.

Cisjk completion time of operation k of subpart j in order Ois.

Cisj completion time of subpart j in order Ois.

Cis completion time of order Ois.

Tis tardiness of order Ois.

The MIP model is formulated as follows. The free indices cover their full range if not

specified otherwise.

min ∑i,s wisTis (2.5)

subject to:

∑z Biz = 1, ∀i (2.6)

∑l Rizjl = Biz, ∀i, z, j (2.7)

Cisj −G(1−Biz) ≤ Cis, ∀i, s, z, j (2.8)

Cisjk −G(1−Biz) ≤ Cisj , ∀i, s, z, j, k (2.9)

ris ≤ sisjk, ∀i, s, j, k (2.10)


Yizj1j2Cisj1k1 −G(1−Biz) ≤ sisj2k2 , ∀i, s, z, j1, j2, k1, k2 (2.11)

Cisjk −G(1−Biz) ≤ sisj,k+1, ∀i, s, z, j, k (2.12)

Biz + Rizjl − 1 ≤ ∑n Misjkn ≤ 1, ∀i, s, z, j, k (2.13)

sisjk + p^Misjk_isjk · Misjkn − G(2 − Biz − Rizjl) ≤ Cisjk, ∀i, s, z, j, k, n (2.14)

Ci2s2j2k2 − G(7 − Oi2i1s2s1j2j1k2k1 − Mi1s1j1k1n − Mi2s2j2k2n − Bi1z1 − Bi2z2 − Ri1z1j1l1 − Ri2z2j2l2) ≤ si1s1j1k1,
Oi2i1s2s1j2j1k2k1 + Oi1i2s1s2j1j2k1k2 = 1,
∀i1 ≠ i2, s1 ≠ s2, z1, z2, l1, l2, j1 ≠ j2, k1 ≠ k2, n (2.15)

Cis − dis ≤ Tis, ∀i, s (2.16)

Biz, Rizjl, Misjkn, Oi1i2s1s2j1j2k1k2 ∈ {0, 1},
0 ≤ sisjk, 0 ≤ Cisjk, 0 ≤ Cisj, 0 ≤ Cis, 0 ≤ Tis,
∀i, s, j, k, n, i1 ≠ i2, s1 ≠ s2, z1, z2, l1, l2, j1 ≠ j2, k1 ≠ k2. (2.17)

Expression (2.5) reflects the objective TWT which has to be minimized. Constraints

(2.6) guarantee that only one BOM can be selected for each product Pi. One of the

alternative routes is selected for each subpart according to constraints (2.7). Constraints

(2.8) describe the relationship between the completion time of any order and the

completion times of the corresponding subparts. A similar relationship between the

subparts and the operations is modelled by constraints (2.9). Note that constraints (2.9)

do not consider the selected routes since all routes of a specific subpart have the same

number of operations. Constraints (2.10) ensure that no operation of any order Ois can

be started earlier than the ready time of the order. The precedence constraints between

the subparts and between the operations of the subparts are modelled by constraints

(2.11) and (2.12), respectively. If subpart j2 immediately succeeds subpart j1 according

to BOM z of product Pi then an arbitrary operation of j2 can start only after the

completion of all operations of j1. The operations of each subpart within a selected

BOM have to be performed in a consecutive manner. Constraints (2.13) make sure

that for each operation only one machine from an appropriate machine group can be

selected for processing, whereas a machine can only be assigned if both the BOM and


the corresponding route are selected. According to constraints (2.14), the completion

time of any operation is not smaller than the sum of its start time sisjk and its machine-

dependent processing time p^Misjk_isjk. Constraints (2.15) model that only one operation can

be processed on a certain machine at a specific point of time. Here, we consider that

operations belong either to different orders or to different subparts, or that there are

two different operations within the same subpart of the same order. These constraints

are valid only for the selected BOMs, routes, and machines. Otherwise, the large value

of G and si1s1j1k1 ≥ 0 ensure the correctness of the model.

Constraints (2.16) relate the tardiness of an order to its completion time and due date.

Finally, constraints (2.17) represent the binary and non-negativity conditions. The MIP

model (2.5)–(2.17) can be solved by a commercial solver. Since problem (2.4) is NP-

hard, it is unlikely to find an optimal solution for medium- or large-size instances in

a reasonable amount of computing time. However, we can use the model to check the

correctness of our implementations based on the optimal solutions for small-size problem

instances.
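Such correctness checks can also be made directly on candidate schedules. The capacity condition behind constraints (2.15) states that operations assigned to the same machine must not overlap in time; the following minimal Python sketch verifies this for a given schedule. The operation names and tuple encoding are illustrative assumptions, not part of the MIP model:

```python
def no_machine_overlap(schedule):
    """Capacity condition behind constraints (2.15): on every machine, the
    scheduled operations must not overlap in time."""
    by_machine = {}
    for op, (machine, start, end) in schedule.items():
        by_machine.setdefault(machine, []).append((start, end))
    for intervals in by_machine.values():
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:    # the next operation starts before the previous one ends
                return False
    return True

# Hypothetical solution: operation -> (machine, start time, completion time).
schedule = {"o1": ("m1", 0, 4), "o2": ("m1", 4, 7), "o3": ("m2", 1, 5)}
assert no_machine_overlap(schedule)
```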


Chapter 3

Heuristics for Scheduling Jobs in

Flexible Job Shops

The problem of scheduling in FJS has gained a lot of attention in the scientific literature due to its high practical importance. In view of limited manufacturing resources and fierce competition between companies, utilizing the resources in the best possible way and fulfilling the desired performance criteria becomes challenging. We are interested

in an FJS scheduling problem with the TWT as performance measure. We know from

Chapter 2 that this problem is strongly NP-hard and it is unlikely to find an optimal

solution in a reasonable amount of computing time, especially for large-size problem

instances. Therefore, we consider different heuristic approaches to find high-quality

solutions with an acceptable computing effort. We begin this chapter with the problem

statement for FJS scheduling and its representation by a graph in Section 3.1. The

related work is discussed in Section 3.2. In Section 3.3, we describe different heuristics

for FJS scheduling. The main contribution of this chapter is an efficient LS algorithm

presented in Subsection 3.3.2. It is based on the DTO of the graph. We hybridize

the SBH with this LS approach and with a VNS approach in Subsection 3.3.3. The

proposed heuristics are compared computationally to the different dispatching rules from

Subsection 3.3.1. We describe our computational experiments in Section 3.4. The FJS

scheduling problem instances found in the literature are considered. We also propose

a set of new large-size problem instances and present the corresponding computational

results.

3.1 Problem and its Representation by a Graph

The production resources in the considered problem consist of groups of parallel

machines. The same assumptions on the resources apply throughout the complete work.

Their description is given in Subsection 2.4.1.


We consider the following FJS scheduling problem. There are n different products

P1, ..., Pn. A number of ni orders Oi1, ..., Oini correspond to each product Pi, i = 1, ..., n.

Each order Ois has a weight wis, a release time ris, and a due date dis. We denote the

completion time of any order Ois by Cis. Similar to Ivens and Lambrecht (1996), we

assume that each order has a BOM, i.e., each order is composed of several subparts

where a route is assigned to each subpart. A route is a sequence of operations that have

to be performed in a consecutive manner to produce the subpart. Each operation can

be performed on one of the machines belonging to the machine group assigned to this

operation. The corresponding scheduling problem can be formulated applying the α|β|γ notation as

FJm|ris, BOM |TWT, (3.1)

where TWT = ∑i,s wis(Cis − dis)+ is used as the performance measure. By FJm we

denote an FJS consisting of m machine groups. The setup with BOMs is selected due

to our intention to apply this approach when solving the IPPS problem (cf. Chapter 4).

We consider a fixed local process plan for each product Pi, i = 1, 2, ... At this stage we

assume that such process plans are already selected. In order to model problem (3.1),

we apply disjunctive graphs (cf. Dauzere-Peres and Paulli 1997). Let G = (V,AC , AD)

denote such a graph with a set of vertices V , a set of conjunctive AC , and disjunctive AD

arcs. Every node v ∈ V represents an operation which has to be executed. We denote the

machine group and the machine assigned to operation v by M(v) and m(v), respectively.

The technological precedence constraints between the operations are described by the

conjunctive arcs in AC . These arcs can not be changed during the scheduling process.

Note that in contrast to conventional job shops, each node can have several predecessor

and successor nodes. The arcs in AD are used for modeling possible scheduling decisions.

These arcs connect any two nodes u, v ∈ V such that M(u) = M(v). It is worth noticing

that during the scheduling process the disjunctive arcs can be substituted by schedule-

based conjunctive arcs. Such arcs reflect particular scheduling decisions. Unlike the case

with one machine per machine group (cf. Pinedo and Singer 1999), such substitution

is done only if the considered two operations are processed on the same machine. In

the case with operations u, v ∈ V such that M(u) = M(v) and m(u) ≠ m(v), the

disjunctive arc connecting operations u and v is removed from graph G.

Every node v in graph G that models some operation of an order and has no

technological predecessors is connected with the start node o ∈ V which represents

an artificial operation, i.e., an operation which has a zero processing time, by an arc

(o, v). Since there are multiple orders Ois and each of them influences the considered

objective function directly, an artificial terminal node tis is assigned to each Ois (cf.

Pinedo and Singer 1999, Pinedo 2008). Every node v in graph G that corresponds to an

operation of order Ois and does not have any technological successors is connected with


tis by a conjunctive arc (v, tis). A set of all schedule-based conjunctive arcs AS reflecting

the scheduling decisions is referred to as a selection (cf. Dauzere-Peres and Paulli 1997).

The length l(u, v) of any arc (u, v) ∈ AS as well as the length of any arc (u, v) ∈ AC is

defined as the processing time needed for executing operation u. In the general case, these

length are not known in advance, since the corresponding operations can be processed

by different machines which can imply different processing times. The length l(u, v),

∀(u, v) ∈ AC ∪AS , becomes defined as soon as a decision is made which machine has to

process operation u. In the case of identical machines, the arc lengths are known before

making the scheduling decisions. In the case of unrelated machines, the arc lengths can

be estimated as the average processing time of the operation over the machines. These

initial values are replaced as soon as the operations are assigned for processing to specific

machines.

An example of a disjunctive graph is depicted in Figure 3.1. Two orders are considered.

Each of them consists of three subparts. We can see that each of the operation pairs

(o12, o21), (o22, o24), and (o13, o25) can be processed on the same machine. This is

indicated by the corresponding disjunctive arcs that are represented by the dashed lines.

Figure 3.1: An example of the disjunctive graph for two orders O11 and O21.

A critical path of order Ois is the longest path from the start node o to the sink node tis.

An arc is critical if it belongs to such a path and connects nodes corresponding to the

operations of different subparts. In addition, an operation is critical if it is on a critical

path.

Based on the described model, the problem can be formulated as finding a selection AS

and an assignment of operations to the machines such that the TWT is minimized.
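The longest-path evaluation underlying this graph model can be sketched in code. The following Python fragment is an illustrative sketch only; the function names and data layout are ours, not taken from the thesis. It propagates operation heads through a solution DAG given in topological order and then sums the weighted tardiness over the terminal nodes:

```python
from collections import defaultdict

def longest_path_completion(nodes, arcs, proc_time):
    """Earliest completion times in a solution DAG.

    nodes: operations in topological order, starting with the artificial start node.
    arcs: conjunctive and schedule-based arcs (u, v); the length of (u, v) is p_u.
    proc_time: processing time p_v per node (0 for artificial nodes).
    Returns a dict node -> completion time L(o, v) + p_v.
    """
    succ = defaultdict(list)
    for u, v in arcs:
        succ[u].append(v)
    head = {v: 0 for v in nodes}              # heads r_v = L(o, v)
    for u in nodes:                           # relies on the topological order
        for v in succ[u]:
            head[v] = max(head[v], head[u] + proc_time[u])
    return {v: head[v] + proc_time[v] for v in nodes}

def total_weighted_tardiness(completion, orders):
    """orders maps a terminal node t_is to its (weight w_is, due date d_is)."""
    return sum(w * max(0, completion[t] - d) for t, (w, d) in orders.items())
```

For a chain o → a → b → t with p_a = 3 and p_b = 2, the terminal node completes at time 5; with weight 2 and due date 4 the order contributes 2 · (5 − 4) = 2 to the TWT.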


3.2 Discussion of Related Work

Scheduling problems for FJS were studied for the first time by Brucker and Schlie (1991).

They consider a job shop that consists of multi-purpose machines, i.e., a set of

machines is assigned to each operation. A polynomial algorithm is proposed for the

special case of two jobs. A tabu search (TS) approach for FJm||Cmax is described

by Hurink et al. (1994) where Cmax is the makespan of the jobs. Brandimarte (1993)

presents a hierarchical algorithm for the FJS scheduling problem where first the

operations are assigned to specific machines and a job shop scheduling problem is

solved after this decision. The makespan and the TWT objectives are considered

in this research. Dauzere-Peres and Paulli (1997) propose an integrated approach for

FJm||Cmax. A TS algorithm is described that works on an extended disjunctive graph

model. Dauzere-Peres et al. (1998) propose a TS algorithm for minimizing makespan

in multi-resource shops with resource flexibility. Non-linear routings and disjunctive

graph problem representation are considered. An improved version of this approach

is presented in (Dauzere-Peres and Pavageau 2003). Gambardella and Mastrolilli (1996)

propose effective neighborhood structures for the FJS scheduling problem with makespan

criterion. Hierarchical and integrated approaches for FJm||Cmax using SA and TS are

discussed by Fattahi et al. (2007). Zribi et al. (2007) consider a hierarchical approach for

FJS where the total workload of the machines, the load of the critical machine, and the

makespan are simultaneously considered as objectives. After the assignment decisions,

a hybrid GA is used to solve the job shop scheduling problem. An SBH-based approach

for FJm||Cmax is proposed by Chen et al. (2006). The parallel machine subproblems

are solved by list scheduling. An efficient SBH is proposed that reduces the number of

subproblems. This reduction is obtained by avoiding the critical machine group selection

and by using the reoptimization procedure only in a few iterations. An evolutionary algo-

rithm for FJm||Cmax that includes learning components is described by Ho et al. (2007).

An efficient GA for FJm||Cmax is presented by Pezzella et al. (2008). Gao et al. (2008)

propose a GA that is hybridized with variable neighborhood descent for an FJS

scheduling problem where the three objectives makespan, maximal machine workload,

and total workload are considered simultaneously. Pacino and Van Hentenryck (2011)

consider a large neighborhood search scheme for FJS scheduling problems with makespan

objective that is combined with adaptive randomized decomposition techniques.

Because of the relatively small number of publications for FJm||TWT , we discuss

these papers together with scheduling approaches for Jm||TWT , a special case of the

FJS scheduling problem. Pinedo and Singer (1999) extend the SBH to the problem

Jm||TWT . A large step random walk is proposed by Kreipl (2000) for this problem.

In this approach, the evaluation of moves requires computing of the longest paths for

each of the jobs. This makes the algorithm computationally costly. A GA hybridized with

iterative LS for Jm|rj |TWT is presented by Essafi et al. (2008). A GA for Jm||TWT

is described by Zhou et al. (2009). A decomposition technique different from the SBH


is used by Zhang and Wu (2010b) to solve Jm||TWT . Bulbul (2011) proposes an SBH

variant where the conventional reoptimization scheme is replaced by a TS procedure.

Braune and Zapfel (2016) describe several variants of the SBH for Jm||TWT . The SBH

methods rely on a newly proposed heuristic for subproblem solving based on a recently

presented dominance rule. Partial schedules computed by the SBH are improved using an

LS or a combination of the classical reoptimization and the LS. The authors conclude

that the best solutions for Jm||TWT can be obtained if a combination of heuristic

and exact subproblem solving with pure LS-based reoptimization is applied. An LS

technique for conventional job shop scheduling problems with regular criteria is proposed

by Mati et al. (2011). The main idea is to search for better solutions by swapping

critical arcs. This guarantees the feasibility of solutions obtained by such moves. It

is possible to evaluate new solutions very efficiently using specific properties of the

graph. Recalculating the critical paths to assess the quality of moves can be avoided.

Garcia-Leon et al. (2015) propose an LS-based approach for FJS scheduling problems. It

is a neighborhood search-based LS. Efficient feasibility verification of the moves as well as

their fast evaluation is described. The evaluation procedure is based on the lower bound

estimates for the job completion times. Interestingly, the number of possible moves

increases if their feasibility is verified based on the combination of the operation heads

and tails in the graph. Each job consists of a chain of operations. The considered objective

functions for FJS scheduling are Cmax, Tmax, and the total flow time. The TWT is

considered in the context of conventional job shop scheduling. Scholz-Reiter et al. (2013)

use a VNS scheme to select the sequence in which machines are considered in the

SBH approach for a dynamic job shop with TWT objective. Yang et al. (2000) discuss

decomposition approaches for FFm||TWT , where flexible flow shops with m stages

are abbreviated by FFm. Mason et al. (2002), Mason et al. (2005), Monch et al. (2007),

Monch and Zimmermann (2011) consider complex job shop scheduling problems moti-

vated by process restrictions found in semiconductor wafer fabrication facilities. Parallel

machines and the TWT objective are considered. The scheduling problems are solved

by the SBH. Monch and Drießel (2005) propose a distributed version of the SBH for

complex job shops with TWT objective. An FJS scheduling problem with the total

tardiness (TT) criterion is discussed by Scrich et al. (2004). A hierarchical approach

and a multi-start approach, both based on TS, are considered. Ho and Tay (2005) use

genetic programming to discover composite dispatching rules that can be used to solve

FJS scheduling problem with TT objective. Zribi et al. (2006) propose a hierarchical

approach combined with a GA to solve the FJS scheduling problem with the TT

objective. Overall, based on this review it seems that FJm||TWT and the problem (3.1)

have not been discussed in the literature so far, with the rare exceptions of Mason et al. (2002),

Mason et al. (2005), Monch and Drießel (2005), Monch et al. (2007), as well as the

work of Monch and Zimmermann (2011) where complex job shops are considered that

are typical for semiconductor manufacturing. Complex job shops are characterized

by re-entrant process flows; sequence-dependent setup times; batching machines, i.e.,

machines capable of processing several jobs at once; parallel machines as well as parallel


batching machines referred to as p-batching; individual job release times and due dates;

incompatible job families. The TWT objective is used. The authors consider the SBH

approach to decompose the overall scheduling problem into scheduling subproblems for

single machine groups. Different subproblem solution methods like dispatching rules and

GAs are used. Simulation studies in a dynamic production environment are conducted.

The authors conclude that applying near-to-optimal subproblem solution procedures

in many cases can significantly improve the resulting TWT values for the overall

complex job shop. The LS-based approach proposed by Garcia-Leon et al. (2015) seems

to be related to the LS heuristic described in this chapter. The two algorithms, while

being different, allow for limiting the computational effort during the neighborhood-

based search. However, it is not possible to compare these two approaches directly for

FJm||TWT since the TWT objective is considered only for conventional job shops

by Garcia-Leon et al. (2015). Furthermore, the adaptation of this LS is required to deal

with non-linear BOMs. The comparison of the two approaches in view of solution quality

and computing time for the problem (3.1) is the subject of future research.

Next, we continue by discussing the availability of benchmark instances for FJS

scheduling problems. While there are publicly available problem instances for FJm||Cmax (cf. Behnke and Geiger 2012 for a survey on such benchmark instances) and Jm||TWT

(cf. Pinedo and Singer 1999, Essafi et al. 2008, Bulbul 2011, Mati et al. 2011), to the

best of our knowledge this is not the case for FJS scheduling problems with TT or

TWT objective. We are only aware of the five problem instances by Brandimarte (1993)

for FJm||TWT that are used in our computational experiments. Therefore, new problem

instances for FJm||TWT are proposed in this work.

3.3 Scheduling Algorithms

This section describes a number of algorithms which are developed to tackle the stated

FJS scheduling problem. Some scheduling approaches known from the literature are also

discussed. At the end of the section, we present computational results which allow for

comparing the effectiveness and efficiency of different scheduling approaches.

3.3.1 List Scheduling

List scheduling based on different dispatching rules is often applied in practice to

generate schedules within a relatively small amount of time (cf. Kemppainen 2005,

Pinedo 2008, Sobeyko and Monch 2016). The main idea of such an approach is that

different priorities are assigned to the jobs according to certain dispatching rules.

Afterwards, the jobs are scheduled on machines according to these priorities. Jobs with

higher priorities are scheduled first. There can be several ways of selecting the machine

for each job. Usually, a machine with the earliest availability is selected. In view of


unrelated machines, we slightly modify the selection procedure. For operation u with

the highest priority, a machine able to finish it at the earliest point of time is selected.

An example illustration of such a situation is given in Figure 3.2.

(a) Selecting the first available machine m2 for operation u.

(b) Selecting machine m1 able to finish operation u the earliest.

Figure 3.2: List scheduling in case of unrelated machines.

Here, two machines m1 and m2 are considered. Machines m1 and m2 become available

at time points t2 and t1 (t2 > t1), respectively. Assume that operation u is available

for processing at time t1 and that the processing time pu,m1 of u on m1 is significantly

smaller than its processing time pu,m2 on m2. Selecting the earliest available machine

m2 for processing leads to a significantly larger completion time t4 of u and to a worse

solution as depicted in Figure 3.2(a). However, delaying operation u for ∆t = t2 − t1 and selecting m1 for processing reduces the completion time t3 of u to a large extent as

shown in Figure 3.2(b). This approach can be more beneficial for the case of unrelated

machines (cf. Bank and Werner 2001). In fact, selecting a machine able to finish the

job at the earliest point of time is the same as selecting a machine with the earliest

availability in case of identical machines. Note that this approach takes into account

different speeds of the unrelated machines available in the FJS.
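This earliest-completion selection can be written in a few lines. The sketch below is ours (names are hypothetical): among the alternative machines, it picks the one minimizing max(r_u, avail_m) + p_{u,m} instead of the one with the smallest availability time:

```python
def select_machine(ready, avail, proc):
    """Choose the machine that can finish the operation earliest.

    ready: time r_u at which the operation becomes available.
    avail: machine -> earliest availability time of that machine.
    proc: machine -> processing time p_{u,m} of the operation on that machine.
    Returns the chosen machine and the resulting completion time.
    """
    def completion(m):
        return max(ready, avail[m]) + proc[m]
    best = min(avail, key=completion)
    return best, completion(best)
```

In the situation of Figure 3.2, with availability times {m1: 2, m2: 0}, ready time 0, and processing times 1 on m1 and 10 on m2, the rule delays the operation and picks m1, finishing at time 3 instead of 10. For identical machines it coincides with earliest-availability selection.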

Some dispatching rules such as the earliest due date (EDD) rule define the job priorities

in advance before the scheduling is performed. Other rules like the apparent tardiness

cost (ATC) rule set priorities dynamically based on the current scheduling state.

We denote by v the current operation belonging to order Ois. The local ready time of this

operation is determined by rv := L(o, v), where L(o, v) denotes the longest path from the

start node o to node v in the disjunctive graph. The determination of local due dates dv is

important because appropriate local due dates can significantly improve performance of

dispatching rules (cf. Kanet and Hayya 1982). We denote the set of remaining operations

of the order succeeding the current operation by Θv.

An advantage of applying different dispatching rules is that a schedule can be computed

with a relatively small computational effort. At the same time, the performance of

dispatching rules can be poor in view of solution quality for many problem classes.

In our work, we refer to a number of dispatching rules which are most commonly cited

in the literature.


1. FIFO - one of the most commonly applied dispatching rules. The earliest point

of time at which some operation v belonging to order Ois becomes available for

processing is defined as rv = max {ris, L(o, v)}. The operation with the smallest

index Iv := rv is selected next for scheduling.

2. WFIFO - is a modification of FIFO which takes the weights of the orders into

consideration. In this case, an operation v with the smallest index Iv := rv/wv has

the highest scheduling priority.

3. ODD - operation due date is a modification of the well-known EDD rule for the
case of job shop scheduling problems. The EDD rule selects an order Ois with
the smallest due date dis. However, each order contains a number of operations.
Thus, it is reasonable to assign a local due date dv to each operation v. According
to Kanet and Hayya (1982), such an approach can improve performance significantly.
For the current operation v, its local due date is defined as

dv = dis − FF ∑_{u∈Θv} p̄u,

where FF is an estimated problem-specific flow factor. It can be calculated by
applying FIFO scheduling (cf. Demirkol et al. 1997). The flow factor is used for
modeling the waiting times in the production system. It is calculated as the average
ratio over all orders between (Cis − ris) and the sum of the observed processing
times of the order. The value p̄v is the expected processing time of operation v. In
case of identical machines, we obtain p̄v ≡ pv.

4. WODD - a modification of the ODD rule which prioritizes operations with the

index values Iv := dv/wv. The operation with the smallest index is selected for

scheduling.

5. MOD - modified operation due date rule proposed by Baker and Kanet (1983).

For operation v of order Ois, its local modified due date is defined as

dv(t) = max{dis − FF ∑_{u∈Θv} p̄u, t + p̄v},

where t is the point of time when the decision is made. Usually, t is the earliest point

of time when the underlying machine becomes available again and the operation is

available for scheduling. The operation with the smallest index value of Iv := dv(t)

is scheduled next.

6. WMOD - is an operation-based implementation of the WMDD-rule proposed by

Kanet and Li (2004). The next operation for scheduling is the one with the smallest

value of

Iv := dv(t) = (1/wis) · max{dis − FF ∑_{u∈Θv} p̄u, t + p̄v}.


7. SPT - shortest processing time is a commonly applied dispatching rule (cf.

Brandimarte 1993). Here, operation v with the shortest processing time Iv := pv

is considered for scheduling.

8. WSPT - is a modification of SPT where an operation with the smallest index value

of Iv := pv/wis is selected next.

9. ATC - apparent tardiness cost rule designed for minimizing TWT. Depending on

the scheduling problem, many variations of this rule are described in the literature

(cf. Monch et al. 2005, Pfund et al. 2008). The priority index of operation v

belonging to order Ois is defined by the index

Iv(t) = (wis/p̄v) · exp(−(dis − FF ∑_{u∈Θv} p̄u − p̄v − t + (ris − t)+)+ / (κ p̄)),   (3.2)

where κ is a look-ahead parameter (cf. Vepsalainen and Morton 1987) and p̄ is

the average remaining processing time of the jobs waiting in front of the machine

group (cf. Monch and Zimmermann 2004). We estimate the flow factor FF based

on the best schedule generated by any of the previously described dispatching

rules. Operation v with the highest value of Iv(t) is selected for scheduling. Ties

are broken according to a discrete uniform distribution. Note that we only have

due dates for the jobs. Therefore, we compute the slack on the job level instead

of the operation level in the priority index. The expected waiting time on the job

level is taken into account by the flow factor.
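A few of these priority indices can be made concrete in code. The sketch below is illustrative only: it uses expected processing times, and for ATC it uses the textbook slack term with ready times; the exact slack computation in the thesis variant may differ in detail.

```python
import math

def local_due_date(d_order, ff, remaining_proc):
    """ODD local due date d_v = d_is - FF * sum of expected remaining times."""
    return d_order - ff * sum(remaining_proc)

def wmod_index(d_order, ff, remaining_proc, t, p_v, w=1.0):
    """(W)MOD index (smaller = more urgent); w = 1 yields the plain MOD rule."""
    return max(local_due_date(d_order, ff, remaining_proc), t + p_v) / w

def atc_index(w, p_v, d_order, ff, remaining_proc, r, t, kappa, p_bar):
    """ATC index (larger = more urgent), textbook form with ready times."""
    slack = max(0.0, local_due_date(d_order, ff, remaining_proc)
                - p_v - t + max(0.0, r - t))
    return (w / p_v) * math.exp(-slack / (kappa * p_bar))
```

With a due date of 20, FF = 1.5, and two remaining operations of expected length 2 each, the local due date is 14; at t = 13 with p̄v = 3, the MOD index becomes max(14, 16) = 16.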

It is well known that dispatching rules can have different performance depending on the

properties of the underlying scheduling problem (cf. Kemppainen 2005, Pinedo 2008).

Thus, it is reasonable to consider a scheduler which combines the described dispatching

rules and finds the best schedule provided by any of them in a multi-start manner.

We denote such a combined scheduler by CS. We refer to the first eight

dispatching rules as simple dispatching rules (SDR). The ATC rule does not belong to

this category since a search for the appropriate κ values is performed. Our computational

results confirm that applying CS can be beneficial in view of solution quality. At the

same time, different dispatching rules can be significantly faster than more advanced

search techniques and make it possible to find a solution with small computational effort. Such a

solution can be used later as a starting point for other search methods.
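The combined scheduler CS amounts to a small multi-start loop over the rules. A hedged sketch follows; the scheduler and evaluation callbacks are placeholders of ours, not interfaces from the thesis:

```python
def combined_scheduler(build_schedule, rules, evaluate):
    """Run list scheduling once per dispatching rule and keep the best schedule.

    build_schedule(rule) constructs a schedule using the given rule;
    evaluate(schedule) returns its TWT value (smaller is better).
    """
    best, best_value = None, float("inf")
    for rule in rules:
        schedule = build_schedule(rule)
        value = evaluate(schedule)
        if value < best_value:
            best, best_value = schedule, value
    return best
```

The winning schedule can then serve as the starting point for the LS described next.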

3.3.2 Local Search Approach

Another popular type of methods which are usually applied to tackle job shop

scheduling problems are the LS-based methods (cf. Dauzere-Peres and Paulli 1997,

Yang et al. 2000, Mati et al. 2011). We are interested in LS heuristics which are suitable


for solving scheduling problems with TWT. An efficient LS technique for scheduling job

shops with regular performance criteria is proposed by Mati et al. (2011). We refer to

this heuristic as MDL. The main idea of this heuristic is to search for better solutions

by swapping critical arcs in the solution graph G. Such swaps are called transitions.

These transitions guarantee that the newly generated solutions are always feasible, i.e.,

the solution graphs contain no cycles. By exploiting some properties of the DAG for

this problem, it is possible to evaluate the new solution efficiently. In particular, there

is no need to recalculate the longest paths in the graph after each transition

(cf. Kreipl 2000). As a consequence, the computing time needed by the algorithm can

be reduced.

The MDL heuristic, however, is not appropriate in case of FJS where operations can

be processed on one of the alternative machines. We explain some properties of the

corresponding neighborhood structure NSMDL later in this work. Another problem is

an efficient evaluation of the newly generated solutions. The properties of the solution

graphs for job shop scheduling problems do not necessarily hold in the case of FJS. To

evaluate the new solutions, it is possible to recalculate the longest paths L(o, tis) from

the start node o to each of the terminal nodes tis , i, s = 1, 2, ..., as it is suggested

in (Kreipl 2000, Essafi et al. 2008). However, this approach is very time-consuming. To

remedy the described problems, we propose a fast iterative LS heuristic which is able to

find high-quality solutions in a relatively short period of time.

Neighborhood structure

We propose an LS which is based on the neighborhood search concept and can be used

to improve an initial schedule. The main idea of the search technique is to repeatedly

move operations in the existing solution graph from one machine

to another, or changing the processing sequences of the operations on the machines in

order to find better solutions. In our work, we make use of the neighborhood structure

described by Dauzere-Peres and Paulli (1997). Let G be a given solution graph. As the

most general case, we consider operations s, t, j, k, and i which can be processed by the

same machine group, i.e., M(s) = M(t) = M(j) = M(k) = M(i). Let operations s, i,

and t be processed in the specified sequence on one of the machines of the machine group.

Operations j and k are processed on a possibly, but not necessarily, different machine of the

machine group. Formally, this can be written as m(s) = m(i) = m(t) and m(j) = m(k) in

the solution graph G. The corresponding illustration is given in Figure 3.3(a). We notice

that there are no schedule-based arcs between the operations which are processed on

different machines. At the same time, (s, i), (i, t), (j, k) ∈ AS . We try to move operation i

from its current machine to an alternative machine, or to change its processing sequence

in case of the same machine, and insert it between operations j and k. This implies that

the schedule-based arcs (s, i), (i, t), and (j, k) have to be removed from graph G. This

action also introduces arcs (s, t), (j, i) and (i, k) resulting in a new solution graph G′.


(a) Graph G before the transition. (b) Graph G′ after the transition.

Figure 3.3: Transitions by the neighborhood structure NSLS .

We refer to such a modification of the solution graph as a transition. The result of the

described transition is illustrated in Figure 3.3(b). We define a neighborhood structure

NSLS as a mapping

NSLS : SG → 2SG , (3.3)

where SG is the set of all feasible solutions represented by the corresponding solution

graphs. NSLS(G) is a neighborhood of graph G. We assume that for the considered

scheduling problem (3.1) the neighborhood NSLS(G) consists of all solution graphs

which can be obtained from G by performing exactly one transition. For the situation

described above and depicted in Figure 3.3, we can formally write

G′ ∈ NSLS(G). (3.4)

It is clear that performing random transitions over the solution graph can influence its

feasibility. Next, we define when a solution of the problem (3.1) is feasible.

Definition 3.1. A solution of the scheduling problem (3.1) is feasible if it can be

represented by a DAG.

By performing a transition, we transform one solution into another one. It is desired

that the new solution is feasible as well. Hence, we define when a transition is feasible.

Definition 3.2. A transition is feasible if a newly generated solution graph contains no

cycles.

These definitions are used later on when we establish different properties of the

considered neighborhood structures. We formulate sufficient conditions which ensure

the feasibility of a newly generated graph after performing a transition in Theorem 3.1.

Theorem 3.1. No cycle is created in the solution graph by moving operation i and

sequencing it between operations j and k, provided the following conditions hold


1. rj < min_{o∈fr(i)} {ro + po}, and

2. rk + pk > max_{o∈pr(i)} {ro}.

Here, the sets pr(i) and fr(i) contain all routing predecessors and successors of operation

i, respectively. In addition, we assume that j /∈ fr(i) and k /∈ pr(i).

Proof. Assume that moving operation i and sequencing it between j and k creates a

cycle in the solution graph G′. This implies that either there exists a path from i to j

in graph G or there exists a path from k to i in G before the transition. Let there exist

a path from i to j in G. This implies that there exists at least one operation i∗ ∈ fr(i) such that i∗ ≠ j and rj ≥ ri∗ + pi∗. The last inequality contradicts the first condition of

the theorem. In the case that there exists a path from k to i in G, there must exist i∗ ∈ pr(i) such that i∗ ≠ k and rk + pk ≤ ri∗, which is a contradiction to the second condition of

the theorem.
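The two sufficient conditions of Theorem 3.1 translate into a cheap feasibility test once the heads r_o are available. The following sketch is ours; treating an empty predecessor or successor set as trivially satisfied is our assumption for the boundary case:

```python
def transition_is_feasible(r, p, j, k, pred_i, succ_i):
    """Sufficient test for inserting operation i between j and k (Theorem 3.1).

    r, p: dicts of operation heads r_o and processing times p_o.
    pred_i, succ_i: routing predecessors pr(i) and successors fr(i) of i,
    with j not in fr(i) and k not in pr(i) assumed.
    """
    cond1 = (not succ_i) or r[j] < min(r[o] + p[o] for o in succ_i)
    cond2 = (not pred_i) or r[k] + p[k] > max(r[o] for o in pred_i)
    return cond1 and cond2
```

Because both conditions only look at heads and processing times, candidate transitions can be filtered without touching the graph itself.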

We note that Theorem 3.1 is a generalization (cf. Dauzere-Peres and Pavageau 2003)

of the corresponding result provided in (Dauzere-Peres and Paulli 1997) to the case of

non-linear assembly structures that are of great interest (cf. Ivens and Lambrecht 1996).

An important characteristic of any neighborhood structure is its connectivity. It defines

whether an optimal problem solution can be reached starting from any initial solution.

We define this property as follows.

Definition 3.3. A neighborhood structure NS is connected if

∀G0 ∈ SG ∃Gi ∈ NS(Gi−1), i = 1, 2, ..., n, n ∈ IN, n < +∞,

such that Gn represents an optimal solution.

We continue by establishing the connectivity property of NSLS . In Lemma 3.1, we

prove that it is possible to reassign the operations to the alternative machines from

the corresponding machine groups using the described transitions.

Lemma 3.1. There exists a feasible transition which allows assigning an operation i to

any of the alternative machines from M(i).

Proof. To prove this lemma, it is enough to show that there exists a transition which

satisfies both inequalities in Theorem 3.1. We consider some feasible solution graph G.

Let some arc (j, k) correspond to two operations j and k such that m(j) = m(k) and

k is processed immediately after j. We assume that for any such arc at least one of the

conditions is violated.


• Let rj ≥ min_{o∈fr(i)} {ro + po}. Then, in view of the inequalities

rk + pk > rk ≥ rj + pj > rj,

we state that

rk + pk > min_{o∈fr(i)} {ro + po} > min_{o∈fr(i)} {ro} ≥ max_{o∈pr(i)} {ro + po} > max_{o∈pr(i)} {ro}.

Thus, Condition 2 in Theorem 3.1 is satisfied if Condition 1 does not hold.

• Let rk + pk ≤ max_{o∈pr(i)} {ro}. We consider the following inequalities:

rj < rj + pj ≤ rk < rk + pk ≤ max_{o∈pr(i)} {ro} ≤ max_{o∈pr(i)} {ro + po} < min_{o∈fr(i)} {ro + po}.

This implies that Condition 1 of Theorem 3.1 is satisfied when Condition 2 is

violated.

At least one of the conditions in Theorem 3.1 always holds. Obviously, if j satisfies

Condition 1 then all operations preceding it on the same machine must satisfy this

condition as well. If operation k satisfies Condition 2 then all its successors on the same

machine must satisfy this condition. Assume j to be the last operation in the processing

sequence on the machine such that it satisfies Condition 1, i.e., rj ≤ min_{o∈fr(i)} {ro + po}

and rk ≥ min_{o∈fr(i)} {ro + po}, but j is not the last operation on the machine. If Condition 2 does not

hold for k, i.e., rk + pk ≤ max_{o∈pr(i)} {ro}, then

rk < rk + pk ≤ max_{o∈pr(i)} {ro} < min_{o∈fr(i)} {ro} ≤ rk.

The last inequalities imply that k must satisfy Condition 2, otherwise this would contradict

the assumption imposed on j. If j is the last operation processed on the machine and

satisfies Condition 1, then moving operation i directly after operation j does not create

cycles; Condition 2 is not considered in this case. Thus, if at least one operation j satisfies

Condition 1, then there exists a feasible transition which places i directly after j.

By analogy, a similar conclusion can be made for at least one operation k satisfying

Condition 2. If there is only one operation j on the machine and both conditions do not

hold, i.e., rj ≥ min_{o∈fr(i)} {ro + po} and rj + pj ≤ max_{o∈pr(i)} {ro}, then

rj < rj + pj ≤ max_{o∈pr(i)} {ro} < max_{o∈pr(i)} {ro + po} < min_{o∈fr(i)} {ro + po} ≤ rj.

The last chain of inequalities is a contradiction, which implies that at least one of the conditions must hold. Finally, moving

i to an empty machine is always a feasible transition since no arcs are added to the graph.

This proves the statement.


Again, we notice that Lemma 3.1 is a generalization of the corresponding result shown

by Dauzere-Peres and Paulli (1997).

Mati et al. (2011) propose a neighborhood structure NSMDL which is based on reversing

critical arcs in the solution graph. A very important result of that work is the connectivity

of NSMDL for any regular optimization criterion, including TWT. Lemma 3.2 establishes

the relation between NSMDL and NSLS.

Lemma 3.2. Neighborhood structure NSLS contains NSMDL, i.e., reversing a critical

arc is always a transition in NSLS.

Proof. Let operations i, j, and k be sequenced one after another for processing on the

same machine. We assume that arc (i, j) is critical. Reversing this arc is equivalent to

placing i immediately after j and before k. Since (i, j) is critical, it holds

rj = ri + pi ≤ min_{o∈fr(i)} {ro} < min_{o∈fr(i)} {ro + po}.

Thus, Condition 1 in Theorem 3.1 is satisfied. On the other hand,

rk + pk > rj = ri + pi > ri ≥ max_{o∈pr(i)} {ro + po} > max_{o∈pr(i)} {ro}.

This implies the second condition of Theorem 3.1. Since both of the conditions hold,

any transition by NSMDL is also a transition by NSLS .

Finally, we are ready to prove the main result regarding the connectivity of NSLS .

Theorem 3.2. Neighborhood structure NSLS is connected for any regular criterion.

Proof. In an optimal solution, each operation v is assigned to some machine m(v).

Lemma 3.1 guarantees that any operation v can always be moved to any of its

alternative machines from M(v) by a corresponding transition using NSLS. Consequently,

an optimal assignment of the operations to the machines can be achieved. For such an

assignment, an optimal sequencing of operations on their machines can be obtained

by performing transitions of NSMDL. Since NSLS contains NSMDL, both the optimal

assignment and the optimal sequencing of operations can be achieved after a finite

number of steps of type Gi ∈ NSLS(Gi−1), i = 1, 2, ..., n, n ∈ IN, n < +∞, for any initial

solution graph G0 ∈ SG and for any regular criterion.

Next, we apply NSLS based on its feasibility and connectivity properties.


DTO (dynamic topological ordering)

After performing a transition on some solution graph G, a new solution graph G′ ∈ NSLS(G) is created. The completion times of some operations in G′, including the

terminal nodes tis, i, s = 1, 2, ..., can change. As an example, consider the solution graph

depicted in Figure 3.4.

Figure 3.4: An example of the solution graph G before a transition.

We assume that the schedule-based arc (o21, o12) is reversed by a transition. The resulting

solution graph is depicted in Figure 3.5.

Figure 3.5: An example of the solution graph G′ after a transition.

The colored operations in Figure 3.5 are the ones which could change their completion

times after the transition. For instance, the completion time of O21 increases from 13

to 16. In order to evaluate the quality of the solution represented by graph G′, the

completion times of all terminal nodes tis have to be recalculated. One way to do this is

to recalculate the longest paths from the start node o to each one of the terminal nodes.

This procedure is quite time-consuming (cf. Kreipl 2000). Our aim is to develop a method

which allows for evaluating new solutions more efficiently based on the information

available before and after the move. The importance of such procedures for an efficient

implementation of neighborhood search schemes is pointed out by Katriel et al. (2005).

We propose an approach which makes use of the topological ordering of the solution

graph. First, we give a definition of the topological ordering.

Definition 3.4. An ordering of VG, i.e., a bijective mapping σ : V → {1, ..., |V |}, is

called topological if σ(u) < σ(v) for any arc (u, v) ∈ AC ∪AS .

For the sake of simplicity, we consider the following transition on the solution graph G.
Some arc (vi, vj) ∈ AS is removed from G, thus forming graph Ḡ. Afterwards, a new arc
(vi, vk) is introduced, resulting in graph G′. We assume that no cycles are created after
introducing arc (vi, vk) into Ḡ. We denote the topological orderings of G and G′ by σ
and σ′, respectively.

Lemma 3.3. The topological ordering σ̄ := σ is feasible for graph Ḡ obtained from G
after removing an arbitrary arc (vi, vj) ∈ AS.

Proof. Removing an arbitrary arc (vi, vj) ∈ AS does not affect the feasibility of σ since
σ(vl) < σ(vm) holds ∀(vl, vm) ∈ AS \ {(vi, vj)}. This proves that σ̄ := σ is feasible.

Two cases are possible after introducing an arc (vi, vk) into Ḡ and forming graph G′. In
the first case, σ(vi) < σ(vk) holds. Then it is clear that σ′ := σ is a feasible topological
ordering for G′. In the second case, if σ(vi) > σ(vk), then σ′ := σ is not feasible for G′.
We discuss this case in more detail. We observe that

∀vl, vm ∈ V with σ(vl) ≤ σ(vk), σ(vm) ≤ σ(vk), (vl, vm) ∈ AS

holds

σ(vl) < σ(vm).

In addition,

∀vl, vm ∈ V with σ(vl) ≥ σ(vi), σ(vm) ≥ σ(vi), (vl, vm) ∈ A′S

holds

σ(vl) < σ(vm).

This implies that only a part of σ consisting of all vm ∈ V with σ(vk) < σ(vm) < σ(vi)

is invalid for G′. An example of the affected area is depicted in Figure 3.6.

Pearce and Kelly (2006) describe an efficient DTO algorithm for computing a correct

topological ordering σ′ based on σ. The main idea is to reuse the correct parts of σ

and to update only the affected or invalid area (see Figure 3.6). We concentrate on

Figure 3.6: Affected area in σ.

the part of σ which has to be updated, i.e., on all nodes vm for which the inequalities

σ(vk) < σ(vm) < σ(vi) hold. We denote a path from u to v in graph Ḡ by (u ⇝ v). We
introduce a sequence of nodes

δik := {v ∈ V | σ(vk) < σ(v) < σ(vi), ∃(vk ⇝ v)} ∪ {vk}

such that ∀u, v ∈ δik the position of u precedes the position of v only if σ(u) < σ(v).
In other words, δik is the part of σ which contains vk and those nodes which are
reachable from vk in Ḡ and belong to the affected area. In Figure 3.6, such nodes are
vl, ..., vm.

Lemma 3.4. Moving δik immediately after vi creates a feasible topological ordering σ′

for graph G′.

Proof. We consider graph Ḡ and its feasible topological ordering σ̄. Graph G′ is
obtained from Ḡ by introducing an arc (vi, vk). Placing δik immediately after vi implies
that σ′(vi) < σ′(vk). Moreover, for any v which is not reachable from vk and belongs
to the affected area, there exists no arc (vk, v) ∈ A′S ∪ A′C. Thus, the sequence of nodes
corresponding to σ′ is by definition topologically ordered for graph G′.

In our example, the corrected topological ordering σ′ is illustrated in Figure 3.7.

Figure 3.7: Corrected topological ordering σ′ after DTO.

Lemma 3.4 describes a constructive way of obtaining σ′ from σ. For this purpose, δik

has to be found. Algorithm 3.1 determines δik based on σ.

Algorithm 3.1 Generation of δik.

1: Set δik ← {vk}
2: for j = σ(vk) + 1, ..., σ(vi) − 1 do
3:   Select v such that σ(v) = j
4:   Append v to δik only if ∃(u, v) ∈ AS ∪ AC with u ∈ δik
5: end for

Note that it is possible to select the node v in Step 3 of Algorithm 3.1 since σ is a
bijective mapping. After removing δik from σ and placing it immediately after vi, a

feasible topological ordering σ′ for the solution graph G′ is generated.
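A minimal Python sketch of this δik-based update (in the spirit of Algorithm 3.1 and Lemma 3.4), assuming the ordering is stored as a list `sigma` (position = list index) and the graph as a successor mapping `succ`; all names are illustrative and only the single-arc insertion discussed above is covered:

```python
def dto_update(sigma, succ, vi, vk):
    """Repair topological order sigma after inserting arc (vi, vk),
    assuming sigma was valid before the insertion."""
    pos = {v: idx for idx, v in enumerate(sigma)}
    if pos[vi] < pos[vk]:          # first case: sigma stays feasible
        return sigma
    # Collect delta_ik: vk plus affected-area nodes reachable from vk.
    delta = [vk]
    members = {vk}
    for j in range(pos[vk] + 1, pos[vi]):
        v = sigma[j]
        if any(v in succ.get(u, ()) for u in members):
            delta.append(v)
            members.add(v)
    # Move delta_ik immediately after vi (Lemma 3.4).
    rest = [v for v in sigma if v not in members]
    i_idx = rest.index(vi)
    return rest[:i_idx + 1] + delta + rest[i_idx + 1:]
```

Note that the membership test only inspects direct arcs from earlier members of δik; since the affected window is scanned in order of σ, this captures reachability from vk within that window.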

It is clear that the number of operations required by Algorithm 3.1 is

O((σ(vi) − σ(vk) − 1) · d−),

where

d− = max_{j = σ(vk)+1, ..., σ(vi)−1} {d−_v | σ(v) = j}.

The worst-case time complexity of this algorithm is

O(|V | · max_{v ∈ V} d−_v).

In practice, the value of d− is usually small. Thus, updating σ′ by applying DTO can be
significantly more efficient than generating it from scratch.

A similar approach can be applied in case of an arbitrary transition depicted in

Figure 3.3, since such a transition is equivalent to removing arcs (s, i), (i, t), and (j, k)

with a subsequent insertion of arcs (s, t), (j, i), and (i, k). Only the insertion of (j, i) and

(i, k) can potentially invalidate σ. If σ(j) < σ(i) < σ(k) then σ′ = σ. If σ(i) < σ(j) or

σ(k) < σ(i) then the corresponding affected area of σ is between the nodes i and j, or

between k and i, respectively.

Efficient evaluation

Based on the updated topological ordering σ′, it is possible to efficiently evaluate the

corresponding solution represented by graph G′. We denote the completion times of

any operation v in graphs G and G′ by Cv and C ′v, respectively. The heads of the

operation v before and after the transition are defined by rv and r′v, accordingly. It

appears that not all heads and completion times change after a transition is performed.

Before investigating further, a lemma is formulated.

Let Θ^G_V̄ denote the set of all nodes v ∈ V such that for every v ∈ Θ^G_V̄ there exists a
node u ∈ V̄ and a path (u ⇝ v) in graph G, where V̄ ⊆ V. In other words, Θ^G_V̄
represents the set of nodes which are reachable from at least one node of the set V̄ in
graph G.

Lemma 3.5. After any transition of the form G′ ∈ NSLS(G), it holds that Θ^G_{i,t,k} ⊆ Θ^G′_{i,t}.

Proof. It is obvious that Θ^G_{i} \ Θ^G_{t} ⊆ Θ^G′_{i}, Θ^G_{t} ⊆ Θ^G′_{t}, and Θ^G_{k} \ Θ^G_{i} ⊆ Θ^G′_{k}. Since

Θ^G′_{k} ⊆ Θ^G′_{i},

we have

Θ^G_{k} \ Θ^G_{i} ⊆ Θ^G′_{i}.

Due to

Θ^G_{i} \ Θ^G_{t} ⊆ Θ^G′_{i}

and

Θ^G_{t} ⊆ Θ^G′_{t},

we obtain

(Θ^G_{i} \ Θ^G_{t}) ∪ Θ^G_{t} ⊆ Θ^G′_{i} ∪ Θ^G′_{t},

i.e.,

Θ^G_{i,t} ⊆ Θ^G′_{i,t}.

Based on the last two statements, we conclude that

(Θ^G_{k} \ Θ^G_{i}) ∪ Θ^G_{i,t} ⊆ Θ^G′_{i} ∪ Θ^G′_{i,t},

i.e.,

Θ^G_{i,k,t} ⊆ Θ^G′_{i,t},

thus proving the lemma’s statement.

The following theorem identifies which operations are not affected by the
transition.

Theorem 3.3. For any operation v ∈ V, it holds that r′v = rv and C′v = Cv if

σ′(v) < min {σ′(i), σ′(t)}.

Proof. We notice that ∀v ∈ V with σ′(v) < min {σ′(i), σ′(t)} it holds that v ∉ Θ^G′_{i,t}, since
v ∈ Θ^G′_{i,t} would mean that σ′(v) > min {σ′(i), σ′(t)}. Based on Lemma 3.5, we conclude
that v ∉ Θ^G_{i,t,k}. Therefore, no critical path from o to v is affected by the transition.
This implies that r′v = rv. Since node i represents the only operation which can change
its processing time, we also conclude that C′v = Cv.

Theorem 3.3 describes a practical way of updating the completion times of the relevant
operations in graph G′, including the terminal nodes tis, after a transition. Only the
completion times of those operations which do not satisfy its condition have to be updated.

Algorithm 3.2 can be used to perform such an update. Note that the completion times

of the terminal nodes are evaluated exactly.

The worst-case time complexity of this algorithm is O(|V | · d−). Assuming that d− is
bounded from above, the time complexity is O(|V |). Thus, the algorithm allows

for efficient evaluation of the new solution represented by the graph G′.

Algorithm 3.2 Updating heads and completion times of the operations.

1: Set pos ← min {σ′(i), σ′(t)}
2: while pos < |V | do
3:   Select v such that σ′(v) = pos
4:   Set r′v ← max_{u ∈ P′(v)} {C′u}, where P′(v) is the set of immediate predecessors of v in G′
5:   Set C′v ← r′v + p′v, where p′v = pv ∀v ≠ i
6:   Set pos ← pos + 1
7: end while

Algorithm 3.2 can be further improved if only nodes from Θ^G′_{i,t} are considered. However,
this requires additional computational effort for identifying whether a node belongs
to Θ^G′_{i,t} or not.
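The update loop of Algorithm 3.2 can be sketched in Python as follows; `pred` plays the role of P′(v), and the dictionaries `r`, `C`, and `p` hold heads, completion times, and processing times (illustrative names):

```python
def update_completion_times(sigma, pred, p, r, C, start_pos):
    """Recompute heads r and completion times C from position
    start_pos onward, following the topological order sigma."""
    for pos in range(start_pos, len(sigma)):
        v = sigma[pos]
        # Head of v: latest completion time among immediate predecessors.
        r[v] = max((C[u] for u in pred.get(v, ())), default=0)
        C[v] = r[v] + p[v]
    return r, C
```

Nodes before `start_pos` keep their stored values, which is exactly what Theorem 3.3 justifies.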

Let CP(v) be the set of immediate predecessors of node v which are situated on the
critical path from o to v. If u ∈ CP(v) then Cu = Cv − pv. A small modification of
Algorithm 3.2 allows identifying CP(v) for any node v. We consider a terminal node tis
and CP(tis). For some fixed v ∈ CP(tis), we determine CP(v). This action is repeated
for the predecessor nodes until we reach the node o. Clearly, CP(o) = ∅. Thus, a critical
path to tis can be identified with time complexity O(|V |). Identifying critical
paths in the graph is of great practical interest since it is not possible to improve a
solution when performing transitions only for non-critical nodes. Indeed, if a node is
not critical then performing a transition for it cannot decrease the length of any critical
path in the graph. This can only lead to a solution of worse quality.
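The backtracking of one critical path via CP(v) can be sketched as follows; `pred`, `p`, and `C` are illustrative containers for immediate predecessors, processing times, and completion times:

```python
def critical_path(v, pred, p, C):
    """Backtrack one critical path ending at node v, using the
    characterization C_u = C_v - p_v for critical predecessors."""
    path = [v]
    while True:
        cps = [u for u in pred.get(v, ()) if C[u] == C[v] - p[v]]
        if not cps:
            break                   # reached a node with head 0 (or o)
        v = cps[0]                  # follow any one critical predecessor
        path.append(v)
    return list(reversed(path))
```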

We notice that the neighborhood structures presented by Garcia-Leon et al. (2015) also

rely on the critical operations in the disjunctive graph, while the moves are evaluated

based on the lower bound estimates of the completion times for the terminal nodes.

Iterative LS algorithm

We describe an iterative LS algorithm which is based on applying the described

neighborhood structure NSLS . The iterative process is similar to the heuristic proposed

by Mati et al. (2011). The solution procedure starts by generating an initial solution

graph G0. We notice that the solution can be a partial solution as well. This is important
during the reoptimization procedure in the SBH since not all machines or machine groups

are scheduled at once. The initial solution can be generated at a low computational

cost applying any of the described dispatching rules, such as CS, combined with list

scheduling described in Subsection 3.3.1. We denote the current solution found by the

search algorithm as G. The corresponding TWT value is denoted by TWTG. The best

found solution so far is denoted by G∗. Note that, due to Theorem 3.2, it is also possible to
consider other objective functions instead of TWT, such as the makespan Cmax. Algorithm 3.3

describes the LS algorithm considered in this work.

Algorithm 3.3 Iterative LS algorithm.

1: Set G ← G0, G∗ ← G0, TWTG ← TWTG0 , and TWTG∗ ← TWTG0
2: repeat
3:   Generate a new solution graph G′ ∈ NSLS(G)
4:   Evaluate the new solution G′ to compute TWTG′
5:   Move or not:
6:   if TWTG′ < TWTG then
7:     Set G∗ ← G′, TWTG∗ ← TWTG′ , G ← G′, TWTG ← TWTG′
8:   else
9:     Set G ← G′, TWTG ← TWTG′ with a given probability γ
10:   end if
11: until Stopping condition is met

The probability γ in Algorithm 3.3 is defined as

γ = exp {−(TWTG′ − TWTG) / T}. (3.5)

Such a definition of γ turns Algorithm 3.3 into an SA scheme with the cooling parameter

T (cf. Pinedo 2008). The value of the parameter T has to decrease as the number of

iterations of the algorithm increases. There are several methods of choosing T . One of

them is based on the number of iterations. Let mu and cu define the maximum number of

iterations for the algorithm and the number of already performed iterations, respectively.

Then we select T as

T = 1 − cu/mu. (3.6)

It is obvious that the more iterations are performed by the algorithm, the closer to zero
the value of T is. Another option for choosing T is based on the allowed computing time.
For instance, mu can denote the maximum number of time units for the algorithm and
cu the number of time units already passed since the algorithm's start.
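The acceptance test combining equations (3.5) and (3.6) can be sketched as follows; function and parameter names are illustrative:

```python
import math
import random

def accept(twt_new, twt_cur, cu, mu):
    """SA-style acceptance: improving moves are always taken, worse
    moves with probability gamma = exp(-(TWT' - TWT)/T), T = 1 - cu/mu."""
    if twt_new < twt_cur:
        return True
    T = 1.0 - cu / mu
    if T <= 0.0:                    # cooling finished: greedy behavior
        return False
    gamma = math.exp(-(twt_new - twt_cur) / T)
    return random.random() < gamma
```

Early in the search T is close to 1, so worse solutions are accepted frequently; as cu approaches mu, the probability goes to zero.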

At the beginning of the iterative process, there is a high probability that worse solutions

are accepted. However, as the search algorithm progresses, this probability decreases

and goes to zero at the end of the procedure. Such an approach allows the algorithm to

escape local optima more effectively. This might lead to a higher quality of the found

solutions.

The algorithm stops as soon as a stopping criterion is met. Several stopping criteria are
possible:

1. Stop after a predefined number of iterations.

2. Stop after a certain number of non-improving iterations.

3. Stop after a given amount of computing time.

Within this research, we stop the algorithm after a predefined number of iterations,
unless explicitly stated otherwise.

Selecting operations for transitions

Selecting operations for transitions by NSLS during the search procedure can have a

significant influence on the algorithm’s performance. As observed by Mati et al. (2011)

in the case of the NSMDL neighborhood structure, it is not possible to improve the

optimization criterion by inverting an arc which is only critical for an order not

contributing to the optimization criterion. We can extend this observation for the

case of transitions by NSLS . It is not possible to improve the optimization criterion

if a transition is performed for an operation which is only critical for an order not

contributing to the criterion. Indeed, performing a transition on such an operation can
neither decrease the tardiness of an early order nor the tardiness of any other order.

Hence, we select only operations of tardy orders for transitions. Such a strategy also

decreases the size of the neighborhood (cf. Kreipl 2000).

Preferring critical operations for transitions can improve the performance of the

search algorithm significantly. However, identifying such operations can still be a

computationally costly process since the critical paths can change after each successful

transition. To reduce the amount of required computations, we generate a set of critical

nodes Vcr ⊆ V each time a predefined number of iterations is performed. Otherwise,

this set of nodes is left unchanged. Such operations are treated as pseudo-critical during

the following iterations of the search procedure. We update the set Vcr after a certain

number of iterations denoted by iter∗.

Diversification

Although the probability γ defined by equation (3.5) already provides a built-in
mechanism for escaping local optima, this effect can be strengthened by using
some additional techniques. Mati et al. (2011) describe a diversification phase in their

search algorithm which is appropriate for this purpose. During the diversification phase,

transitions on randomly selected operations belonging to the orders contributing to

the optimization criterion are performed. We denote the maximum number of such
transitions by λ. Both improving and non-improving transitions are considered. If a
better solution is found, the diversification terminates. The diversification process starts
as soon as the solution has not been improved during the last η iterations. The values
selected for λ and η are defined in Section 3.4.3.

3.3.3 Decomposition Techniques

It is well known that job shop scheduling problems are NP-hard (cf. Lenstra et al. 1977,
Pinedo 2008). In order to be able to solve such problems in a reasonable amount of time,
various heuristics have been developed. Among others, different decomposition approaches
have gained a lot of attention (cf. Pinedo and Singer 1999, Pinedo 2008, Zhang and Wu 2010a).

One of the most commonly used decomposition heuristics is the SBH proposed by

Adams et al. (1988). This procedure and its modifications are applied for different

configurations of job shops, including FJS, and also for different optimization criteria

(cf. Ivens and Lambrecht 1996, Pinedo and Singer 1999, Mason et al. 2002, as well as

Chen et al. 2006, and Bulbul 2011). In the following, we consider a modification of the SBH
which can be applied to solve FJS scheduling problems with the TWT objective. In
addition, this heuristic can be used for benchmarking against other solution approaches.

Basic structure of SBH

We consider the set of all available machine groups, which represent the production
resources:

M = {M1, ..., M|M|}.

Let MS denote the set of machine groups which are already scheduled. This means that a
scheduling decision for each operation requiring any machine from MS is already made.

Algorithm 3.4 describes the basic structure of the SBH.

Algorithm 3.4 Basic structure of SBH.

1: Set MS ← ∅
2: repeat
3:   Select a critical machine group MC ∈ M \ MS
4:   Solve the SSP for MC
5:   Introduce the corresponding solution for MC into the solution graph G
6:   (Optional) Reschedule operations of the machine groups from MS
7:   Set MS ← MS ∪ {MC}
8: until MS = M

The main idea of the SBH is to decompose the scheduling problem based on the

machine groups and to schedule them one by one. The most critical machine groups (the

bottlenecks) are scheduled first. In the following, we give a more detailed description
of the different steps of the algorithm.
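The loop of Algorithm 3.4 can be sketched in Python as follows; `shifting_bottleneck` and its arguments `solve_ssp`, `insert_solution`, and `reschedule` are illustrative placeholders for the subproblem solver, the graph update of line 5, and the optional reoptimization of line 6:

```python
def shifting_bottleneck(groups, criticality, solve_ssp, insert_solution,
                        reschedule=None):
    """Basic SBH loop: schedule machine groups one by one,
    most critical (bottleneck) group first."""
    scheduled = set()
    while scheduled != set(groups):
        mc = max((m for m in groups if m not in scheduled), key=criticality)
        insert_solution(mc, solve_ssp(mc))   # reflect SSP solution in G
        if reschedule is not None:
            reschedule(scheduled)            # optional reoptimization step
        scheduled.add(mc)
    return scheduled
```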

Subproblem criticality

The concept of critical machines or machine groups is central to the SBH. Different

approaches for defining and estimating the criticality cr(Mi) of some SSP for machine

group Mi during the algorithm can be found in the relevant literature, such as the
works of Aytug et al. (2003) and Monch and Zimmermann (2007). In the conventional

SBH, subproblem criticality is measured dynamically, i.e., the criticality for any fixed

subproblem can change depending on the current iteration of the algorithm. The most

common approach to measure the criticality is to solve the corresponding SSP. After

solving the subproblems for any Mi ∈ M \ MS and measuring their criticality values
cr(Mi), one can determine the most critical machine group Mcr with

cr(Mcr) = max_{Mi ∈ M \ MS} {cr(Mi)}.

The schedule corresponding to Mcr has to be reflected in the solution graph G. According

to Yang et al. (2000) and Aytug et al. (2003), the criticality of different machine groups can
also be defined statically. In this case, the values of cr(Mi), ∀Mi ∈ M, are defined in

advance and do not change at different iterations of the SBH. This is beneficial, since

there is no need to solve a set of SSPs for every machine group from M \ MS in

every iteration of the algorithm. Only one SSP has to be solved for Mcr. According

to Aytug et al. (2003), such an approach can reduce the solution time significantly with

a negligible loss in solution quality. Demirkol et al. (1997) discover that the bottleneck

selection criterion has no significant effect on the solution quality as long as a sensible

criticality measure is used. Following Aytug et al. (2003) and Yang et al. (2000), we

consider a static criticality measure given by

cr(Mi) = (Σ_{v ∈ O(Mi)} pv) / |Mi|, (3.7)

where O(Mi) is the set of operations to be processed on the |Mi| machines of the machine
group Mi. This criterion is reasonable in view of the expectation that the machine group
with the highest expected workload of its machines causes the largest waiting times in
the production system. Thus, such a machine group can be considered as a bottleneck.
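A sketch of static bottleneck selection based on equation (3.7); the input encoding (a mapping from group name to its operation processing times and machine count) is illustrative:

```python
def criticality(proc_times, n_machines):
    """Static criticality of a machine group per equation (3.7):
    total processing time divided by the number of machines."""
    return sum(proc_times) / n_machines

def bottleneck(groups):
    """Pick the group with maximal criticality.
    groups: name -> (list of processing times, number of machines)."""
    return max(groups, key=lambda m: criticality(*groups[m]))
```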

This approach significantly reduces the computing time, especially when more time-consuming
subproblem solution techniques are applied. Instead of |M |(|M | + 1)/2
subproblems, only |M | subproblems have to be solved if the reoptimization option is not
used. The loss in solution quality, if any, is modest, as can be seen from the computational
results of Monch and Zimmermann (2007).

Reoptimization

Performing reoptimization within the SBH is an important procedure which allows for

improving the quality of the found solutions significantly (cf. Demirkol et al. 1997).

If this step is omitted then the performance of SBH can degrade considerably (cf.

Bulbul 2011), and the SBH procedure can even be outperformed by dispatching rules

(cf. Demirkol et al. 1997). Basically, there are two main approaches to perform the

reoptimization step within the SBH. The conventional reoptimization procedure of the

SBH is described by Algorithm 3.5 (cf. Pinedo 2008).

Algorithm 3.5 Reoptimization procedure in SBH.

1: for all Mi ∈ MS \ {Mcr} do
2:   Remove the selection A^Mi_S corresponding to Mi from G
3:   Formulate and solve a new SSP for Mi
4:   Introduce the new selection A^Mi_S into G
5: end for

The reoptimization procedure continues until no further improvement of the solution
quality occurs. Usually, the number of reoptimization cycles is limited in order to avoid

high computing times (cf. Demirkol et al. 1997).

According to Balas and Vazacopoulos (1998), the reoptimization step can be considered
as an LS procedure. Applying LS in the reoptimization step of the SBH can be beneficial

since the conventional reoptimization is decomposition-based and the machine groups

are usually rescheduled in some fixed order. Moreover, the conventional reoptimization

can be computationally intensive, involving many SSPs (cf. Demirkol et al. 1997). A

variation of SBH, where the conventional reoptimization is replaced by a TS algorithm, is

presented by Bulbul (2011). In our study, we apply the iterative LS procedure described

in Subsection 3.3.2. The TWT of the partial schedule corresponding to the already

scheduled machine groups is considered. These machine groups are rescheduled
essentially simultaneously since the LS scheme operates on all scheduled operations, which can
belong to any scheduled machine group.

Subproblem solution methods

The performance of the SBH can be significantly influenced by the quality of its

subproblem solutions (cf. Demirkol et al. 1997). It is also important to formulate the

subproblems appropriately depending on the optimization criterion.

In our study, we consider two types of machine environments:

• identical machines Pm and

• unrelated machines Rm.

These machine environments are interesting not only from a practical but also from a

theoretical point of view. This will be confirmed later by our computational results.

An appropriate local optimization criterion has to be considered while formulating

each SSP (cf. Pinedo and Singer 1999, Chen et al. 2006, Pinedo 2008). Since the global

optimization criterion is the TWT, we refer to the objective function

TWT′ = Σ_{i,s} wis · max_{v ∈ O(Mcur)} {T^is_v} (3.8)

proposed by Pinedo and Singer (1999). Mcur denotes the machine group for which the

current SSP is formulated. The value of T^is_v is defined as

T^is_v = max {Cv − d^is_v, 0}

and represents the tardiness of operation v in view of the terminal node tis. By d^is_v we
denote the local due date of operation v in view of the terminal node tis. T^is_v has the
following meaning: if Cv exceeds d^is_v then the corresponding order Ois will be delayed
by at least T^is_v time units after introducing the subproblem's solution into the solution

graph G. According to Equation (3.8), the resulting value of TWT ′ is an estimate of the

increase of the objective function, provided the machine group Mcur is considered as a

bottleneck and is scheduled next. Depending on the machine environment, the SSPs
are formulated as

Pm | rv, d^is_v, prec | TWT′

and

Rm | rv, d^is_v, prec | TWT′,

where prec denotes the delayed precedence constraints (cf. Pinedo 2008).

To solve the stated SSPs, we apply the modified ATC dispatching rule with the priority
index

Iv(t) = Σ_{i,s} (wis / pv) exp {−(d^is_v − pv + (rv − t)+ − t)+ / (κ p̄)}

proposed by Pinedo and Singer (1999).
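A sketch of this priority index for a single operation, assuming the Pinedo-Singer form of the slack term (d − p + (r − t)+ − t)+ and treating κ and the average processing time p̄ as given constants; all parameter names are illustrative:

```python
import math

def atc_index(t, p, r, terms, kappa, p_bar):
    """Modified ATC priority index of operation v at time t.
    terms: iterable of (w_is, d_is_v) pairs, one per terminal node
    the operation contributes to."""
    total = 0.0
    for w, d in terms:
        slack = max(d - p + max(r - t, 0.0) - t, 0.0)
        total += (w / p) * math.exp(-slack / (kappa * p_bar))
    return total
```

Operations with tight local due dates (small slack) receive an index close to w/p, while operations with generous due dates are discounted exponentially.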

Note that in the case of unrelated machines the quality of the obtained solutions can
deteriorate since the exact processing times of the operations are not known in advance.

This can lead to underestimated or overestimated local due dates and consequently, to

inappropriate scheduling decisions. Therefore, the obtained solutions are improved by

applying more advanced search techniques.

Let TWT ∗ denote the TWT of all orders in the partial schedule produced in the current

step of the SBH. We notice that TWT′ represents a lower bound on the possible increase
of TWT∗ after scheduling the current machine group Mcur. During the improvement

phase of the obtained solutions, we consider the following scheduling problems:

Pm | rv, prec | TWT∗

and

Rm | rv, prec | TWT∗.

Applying TWT ∗ instead of TWT ′ is beneficial since there is no need to estimate the

local due dates, and the value of TWT ∗ allows for better estimation of the TWT.

We consider two methods to solve the stated subproblems. The first method applies
the described iterative LS algorithm. We call the resulting SBH variant
SBH-LS. The second approach, abbreviated SBH-VNS, is based on the VNS scheme

for Pm|rj , sjk, prec|TWT proposed by Driessel and Monch (2011) where sjk refers to

sequence-dependent setup times. We adopt this approach to solve the subproblems

during the SBH.

By posv we denote the position of operation v on its current machine m(v). We consider

the following neighborhood structures for the VNS approach:

• N^T_k - move the selected operation v to a position within the range

[max {posv − r, 0}, min {posv + r, nmax}]

on a randomly chosen machine from the machine group M(v). The quantity nmax
is the maximum number of operations on the machine. Such a transfer is repeated
k times.

• N^S_k - swap the selected operation v and a randomly selected operation at a position
within the range

[max {posv − r, 0}, min {posv + r, nmax}]

on a randomly selected machine from M(v). Repeat the procedure k times.

Only feasible moves with respect to the precedence constraints are considered. We also

make use of combinations of the neighborhood structures:

• N^S_k ∘ N^T_k - apply N^T_k first and then apply N^S_k;

• N^T_k ∘ N^S_k - apply N^S_k first and then apply N^T_k.

These neighborhood structures have to be understood in the sense of the composition
of mappings.

Our preliminary computational results indicate that large values of the parameter r

might have a negative impact on the efficiency of the search procedure. The parameter

k is selected according to the distribution k ∼ DU [1, kmax]. The values selected for r

and kmax are provided in Section 3.4.3. We implement a VNS variant which
is referred to as skewed VNS (cf. Hansen and Mladenovic 2001). In this case, the
VNS algorithm is able to accept slightly worse solutions with some small probability γ,
defined in the same way as in Algorithm 3.3, in order to avoid getting trapped in local optima.

Algorithm 3.6 describes the search procedure.

The number of iterations or the amount of computing time can be used as a

stopping criterion for the algorithm. Several different neighborhood structures are
considered. In contrast to Driessel and Monch (2011), we do not consider a fixed

Algorithm 3.6 Skewed VNS for SSP solution.

1: Find an initial SSP solution s
2: Set k ← 1, x ← s
3: repeat
4:   Shake. Generate a new solution x′ ∈ Nk(x)
5:   Local Search. Apply the LS to x′ resulting in a new solution x′′
6:   Improvement or not. If x′′ is better than s then set s ← x′′
7:   Move or not.
8:   if x′′ is better than s, or the move is allowed then
9:     Set x ← x′′, k ← 1
10:   else
11:     Set k ← k mod kmax + 1
12:   end if
13: until Stopping condition is met

sequence of applying the neighborhood structures. We define the neighborhood structure

Nk as

Nk =

  N^S_k,           with probability p1;
  N^T_k,           with probability p2;
  N^S_k ∘ N^T_k,   with probability p3;
  N^T_k ∘ N^S_k,   with probability p4.
                                          (3.9)

The probabilities in Equation (3.9) are defined in Section 3.4.3.
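The random choice of Nk in equation (3.9) can be sketched as follows; the string labels are illustrative stand-ins for the four neighborhood structures:

```python
import random

def pick_neighborhood(p1, p2, p3, p4, rng=random):
    """Draw one of the four neighborhood structures of equation (3.9)
    according to the given probabilities (assumed to sum to 1)."""
    u = rng.random()
    if u < p1:
        return 'NS'
    if u < p1 + p2:
        return 'NT'
    if u < p1 + p2 + p3:
        return 'NS_NT'      # apply NT first, then NS (composition)
    return 'NT_NS'          # apply NS first, then NT
```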

To ensure feasibility of the generated subproblem solutions during the iterations of
Algorithm 3.6, we move some operation v only if none of its direct predecessors
(successors) is sequenced after (before) v on the target machine. The same condition
must hold for swapping operations. We use the following more formal description of

these rules (cf. Monch et al. 2007). For the currently considered machine group Mi and

for any v ∈ O(Mi), we denote by MP (v) ⊆ O(Mi) the set of direct predecessor nodes of

v which can be processed by any machine from machine group Mi. Two nodes belonging

to the same subproblem are in a direct precedence relation if each path connecting the

related nodes passes only nodes that belong to other subproblems. When scheduling

operation v, no cycles are created in the disjunctive graph if all operations of MP (v) are

scheduled before v. If at least one operation of MP (v) is not yet scheduled then we say

that v is unavailable. Otherwise, v is available. In order to determine the sets MP (v)

for any v ∈ O(Mi), we apply breadth-first search (BFS). The algorithm starts from a

node v with MP (v) := ∅ and considers all of its direct predecessor nodes u ∈ P (v).

If u ∈ O(Mi) then MP (v) := MP (v) ∪ {u} and P (u) is not considered further. If

u /∈ O(Mi) then all nodes of P (u) are queued for further processing. The BFS algorithm

stops if there are no more nodes to consider.
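The BFS computation of MP(v) can be sketched as follows; `pred` maps a node to its immediate predecessors and `group_ops` stands for O(Mi) (illustrative names):

```python
from collections import deque

def machine_predecessors(v, pred, group_ops):
    """Compute MP(v): the nearest predecessors of v belonging to the
    same machine group, found by BFS over predecessor arcs. Expansion
    stops at the first group member on each path."""
    mp, queue, seen = set(), deque(pred.get(v, ())), set()
    while queue:
        u = queue.popleft()
        if u in seen:
            continue
        seen.add(u)
        if u in group_ops:
            mp.add(u)                      # do not expand P(u) further
        else:
            queue.extend(pred.get(u, ()))  # keep searching upstream
    return mp
```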

The iterative LS technique described in Algorithm 3.3 is applied during the VNS

algorithm for solving the subproblems.

3.4 Computational Experiments

3.4.1 Design of Experiments

A variety of benchmark problem instances plays an important role in comparing different

solution approaches for the stated problems. In our computational study, we consider

several sets of problem instances which are described in the related literature. To the

best of our knowledge, only a few papers deal with FJS scheduling and tardiness-related

objectives (cf. Brandimarte 1993, Yang et al. 2000, Scrich et al. 2004). Only a small

number of relevant problem instances introduced by Brandimarte (1993) are publicly

available. This motivates us to generate additional large-size problem instances used in

this research. We summarize the main characteristics of the considered problem instances

in Table 3.1.

Table 3.1: Main characteristics of the problem instances for FJS.

Factor \ Level         Set 1           Set 2        Set 3        Set 4
Machines               5-15            4-15         5-10         25-30
Machine groups         1               1            1            6, 10
Environment            Rm              Rm           Rm           Rm, Pm
Flexibility level      1.7-2.95        1.43-4.1     1.13-5.02    2.5, 5
Order weights          wis ∼ DU[1, 3]  Sets 2-4: wis = 4 (prob. 0.2), 2 (prob. 0.6), 1 (prob. 0.2)
Due date tightness g   -               Sets 2-4: 0.25, 0.5, 0.75
Orders                 10-30           10-20        10-20        20
Operations per order   5-15            5-15         5-25         80-120
Problem instances      5               30           54           120

The due dates for the problem instances from Set 2, Set 3, and Set 4 are generated

using a flow factor FF . The due date of the order Ois is given by

dis = ⌊g FF p̄is⌋ , (3.10)

where p̄is is the average raw processing time (RPT) of Ois and g ∈ (0, 1] is a due date

tightness factor. Appropriate FF ≥ 1 values are determined by FIFO scheduling. We

can see from expression (3.10) that the maximum waiting time allowed for Ois to meet

its due date is ⌊g FF p̄is⌋ − p̄is. The weights are generated according to the scheme used

by Pinedo and Singer (1999). The weight setting is shown in Table 3.1.
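The due date generation of expression (3.10) together with the Pinedo-Singer weight scheme can be sketched as follows. The function names are ours; this is an illustration, not the generator used for the published instances:

```python
import math
import random

def due_date(avg_rpt, g, flow_factor):
    """d_is = floor(g * FF * avg_rpt), cf. expression (3.10)."""
    return math.floor(g * flow_factor * avg_rpt)

def sample_weight(rng):
    """Weight scheme of Pinedo and Singer (1999):
    4 with probability 0.2, 2 with probability 0.6, 1 with probability 0.2."""
    u = rng.random()
    if u < 0.2:
        return 4
    if u < 0.8:
        return 2
    return 1
```

For example, an order with an average RPT of 40, g = 0.5, and FF = 2 receives the due date ⌊0.5 · 2 · 40⌋ = 40.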

Furthermore, we give a more detailed overview of the different sets of problem instances.


Set 1

This set of problem instances is proposed by Brandimarte (1993) for FJm||TWT .

It consists of five instances. The corresponding TWT values of the solutions are

documented. The smallest problem instance contains five machines and ten jobs. The

largest one contains 15 machines and 30 jobs. This corresponds to 50 and 354 operations

in the resulting solution graph G (see Section 3.1). The main characteristics of these

problem instances are summarized in Table 3.1. The generation of due dates and weights

for the problem instances is described in the original paper. We refer to this set of

instances as Set 1.

Set 2

Another set of publicly available problem instances is described by Brandimarte (1993)

as well. These problem instances are denoted by Mk01, . . . , Mk10 and are originally

generated for FJm||Cmax. We adapt these instances to the TWT objective by

additionally generating a due date and a weight for each job. These values are generated

according to the scheme presented in Table 3.1. Based on the original problem instances,

we obtain 30 instances for FJm||TWT. When generating the due dates, we use different

due date tightness factors, namely g = 0.25 for tight due dates, g = 0.5 for moderate

due dates, and g = 0.75 for loose due dates. The number of machines depending on the

problem instance varies between 4 and 15. The number of jobs is between 10 and 20. The

total number of operations varies between 55 and 240. For comparison, a 10×10 problem

instance for job shop scheduling described in Pinedo and Singer (1999) corresponds to

100 operations. We notice that the machines within the instances of Set 2 are not

divided into machine groups.

Set 3

The original set of problem instances for makespan optimization in FJSs consisting

of 18 instances and referred to as 01a, 02a, . . . , 18a is presented in the work of

Dauzère-Pérès and Paulli (1997). Based on this set, we generate 54 problem instances

for TWT optimization. We apply the same principles for generating the job weights and

due dates as for the problem instances of Set 2. Depending on the problem instance, the

number of machines varies between 15 and 25, whereas the number of jobs is between 10

and 20. Each job can contain from 15 to 25 operations. The smallest problem instances

result in 196 operations, and the largest ones correspond to a graph with 387 nodes.

A more detailed generation scheme is summarized in Table 3.1. We notice that the

machines are not explicitly divided into machine groups with disjoint functionality.


Set 4

In addition to the already described problem sets, we generate a set of large-scale problem

instances to assess the performance of the proposed scheduling algorithms. A detailed

generation scheme of the problem instances is described in Table 3.1. Moreover, we

describe the parameter settings which are considered.

As described in the problem setting (see Section 3.1), we consider a set of machine groups

M = {M1, . . . , M|M|}.

Unlike the cases of Set 1, Set 2, and Set 3, it is assumed that these machine groups

have disjoint functionality. In other words, it is possible to identify machine groups with

unique sets of operations, O(Mi) ∩ O(Mj) = ∅ for all i, j with i ≠ j.

Following the ideas presented by Scrich et al. (2004), we consider two levels of flexibility

in the production system. Under a flexibility level we understand the average number of

processing alternatives per operation. In our study, a high flexibility level corresponds to

five machines per machine group. In this case, the number of machines in each machine

group is selected as a random integer number uniformly distributed between four and

six, i.e., |Mi| ∼ DU [4, 6], ∀Mi ∈ M . The low flexibility level presumes 2.5 alternative

machines per operation on average, i.e., |Mi| ∼ DU[2, 3], ∀Mi ∈ M. For the case of high

flexibility we generate six machine groups and for the case of low flexibility the number

of machine groups is equal to 10. Such an approach helps to balance the total number of

machines across all problem instances.

A set of 20 randomly generated orders has to be scheduled for each problem instance.

The number of operations per job is discretely uniformly distributed between 80 and

120 so that the expected number of operations per order is 100. A randomly selected

machine group is assigned to each operation. For any operation o ∈ O(Mi) and any

machine mij ∈ Mi, the processing time pjo of operation o on machine mij is chosen as

pjo ∼ DU [1, 30], i.e., unrelated machines are considered. In the case of identical machines

Pm, we have pjo = pko = po ∼ DU[1, 30], ∀mij, mik ∈ Mi, ∀Mi ∈ M.

To summarize, we generate a set of problem instances using twelve factor combinations

by considering two types of machine groups (Rm or Pm), two levels of flexibility (2.5 or

5), and three levels of order due date tightness (g = 0.25, g = 0.5, g = 0.75). For each

parameter combination, we randomly generate ten independent replications. In total, we

obtain 120 problem instances. The expected number of operations is on average around

2000. This makes the generated instances considerably larger than any instance from

any other described problem set. All the instances with identical machines are collected

in Set 4a, while the instances with unrelated machines belong to Set 4b.
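The generation scheme for Set 4 described above can be sketched as follows. This is a simplified illustration under the stated distributional assumptions; the data layout and function name are ours, not those of the generator used for the published instances:

```python
import random

def generate_instance(rng, high_flexibility, identical_machines):
    """Sketch of a Set 4-style instance: machine groups, orders, processing times."""
    n_groups = 6 if high_flexibility else 10          # balances total machine count
    lo, hi = (4, 6) if high_flexibility else (2, 3)   # |Mi| ~ DU[4,6] or DU[2,3]
    groups = [rng.randint(lo, hi) for _ in range(n_groups)]
    orders = []
    for _ in range(20):                               # 20 orders per instance
        n_ops = rng.randint(80, 120)                  # operations per order ~ DU[80,120]
        ops = []
        for _ in range(n_ops):
            gi = rng.randrange(n_groups)              # random machine group per operation
            if identical_machines:                    # Pm: one time for all machines
                times = [rng.randint(1, 30)] * groups[gi]
            else:                                     # Rm: unrelated machines
                times = [rng.randint(1, 30) for _ in range(groups[gi])]
            ops.append((gi, times))
        orders.append(ops)
    return groups, orders
```

With 20 orders of 100 operations on average, one call yields roughly the 2000 operations per instance stated above.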


Short summary on the considered problem instances

An overview of the essential characteristics of the considered problem instances is given in

Table 3.1. The total number of problem instances is 209. Note that only Set 4 contains

machine groups having disjoint functionality. As a result, the variations of the SBH

procedure are not applicable to problem instances from Set 1, Set 2, and Set 3 since

the SSPs cannot be formulated uniquely. For this reason, we treat all resources as one

group of parallel machines with eligibility restrictions. Such an approach enables us to apply

the VNS which we have originally developed for solving SSPs. An extension of the VNS

scheme to take into account machine eligibility restrictions is straightforward. However,

we have to check whether the operations that are moved or swapped lead to a cycle in

the resulting graph or not.

3.4.2 Measure of Relative Improvement

The considered heuristic algorithms usually find solutions which are not necessarily

optimal. Since the optimal solutions are likely not known in advance, other methods

have to be used to assess the results. Let H and Href denote two solution methods, and

let Φ^π_H and Φ^π_Href be the objective values corresponding to the solutions found by H and

Href for the problem instance π ∈ Π, respectively. Here, Π denotes the considered set

of problem instances. We define the relative improvement of H over Href for instance

π ∈ Π as

δ_{H,Href}(π) = 1 − Φ^π_H / Φ^π_Href . (3.11)

The improvement measure of H over Href over the set of problem instances Π is defined

as

δ = (1/|Π|) Σ_{π∈Π} δ_{H,Href}(π), (3.12)

where |Π| denotes the number of problem instances in this set. The value of δ_{H,Href}(π) is

an estimate of how much the solution quality for the problem instance π can be improved

if H is applied instead of Href . The measure (3.12) represents the relative improvement

over the complete set of problem instances Π in the case Href is replaced by H.

It is important to notice that these measures capture the relative performance of two

solution methods. This is beneficial since the objective values for two different problem

instances can have different magnitudes. The presented performance measure prevents

large-scale objective values from dominating the moderate- or small-scale ones.
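The measures (3.11) and (3.12) translate directly into code. The following sketch assumes strictly positive reference objective values; a zero reference TWT would need special handling:

```python
def relative_improvement(phi_h, phi_ref):
    """delta_{H,Href}(pi) = 1 - Phi^pi_H / Phi^pi_Href, cf. (3.11).
    Assumes phi_ref > 0."""
    return 1.0 - phi_h / phi_ref

def mean_improvement(objectives_h, objectives_ref):
    """delta = (1/|Pi|) * sum over pi of delta_{H,Href}(pi), cf. (3.12)."""
    deltas = [relative_improvement(h, r)
              for h, r in zip(objectives_h, objectives_ref)]
    return sum(deltas) / len(deltas)
```

For instance, halving the TWT on one instance and reducing it by a quarter on another gives a mean improvement of 0.375.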


3.4.3 Computational Results

In our computational study, we assess the performance of different scheduling methods

based on the described sets of problem instances. In practice, scheduling is often

performed based on dispatching rules. Therefore, it is reasonable to benchmark the

more advanced scheduling methods against these heuristics. We notice that the relevant

TWT values for problem instances from Set 1 are reported in (Brandimarte 1993). We

compare the performance of our methods with these values.

All algorithms are implemented using the C++ programming language, and the

computational experiments are performed on a personal computer with an Intel Core i5

2.66 GHz CPU and 4 GB of RAM.

When a heuristic contains stochastic ingredients, we perform five replications with

different seeds to obtain statistically reasonable results. The corresponding TWT values

of the different runs are averaged.

For Set 1, Set 2, and Set 3, we analyze the performance of the following heuristics:

• ATC - computes a schedule using an appropriate value of the look-ahead parameter

κ. To determine κ, we consider a grid defined as 0.1k, k = 1, ..., 70. The flow factor

FF is determined based on the best schedule found by SDR.

• CS - provides the best schedule found by SDR and ATC.

• LS(itermax) - is the iterative LS procedure which uses the schedule provided by

CS as its starting point. The LS itself performs itermax iterations. We consider

two variations of this approach: LS(20000) and LS(200000) which are abbreviated

as LS(20k) and LS(200k), respectively.

• VNSG - the modified version of the VNS approach used for solving the SSPs. Since

this implementation is more global, we abbreviate it as VNSG. The number of LS

iterations in each iteration of VNSG is 3000.
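For reference, the ATC priority index underlying the first of these heuristics, in its standard form I(t) = (w/p) · exp(−max(d − p − t, 0)/(κ p̄)), and the κ grid search can be sketched as follows. This is an illustration of the standard rule, not the implementation evaluated in this chapter:

```python
import math

def atc_index(w, p, d, t, kappa, p_bar):
    """Standard ATC priority index:
    I(t) = (w / p) * exp(-max(d - p - t, 0) / (kappa * p_bar)),
    where p_bar is an average processing time of the waiting operations."""
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (kappa * p_bar))

def best_kappa(twt_of_schedule):
    """Grid search over kappa = 0.1k, k = 1, ..., 70, as described above.
    twt_of_schedule(kappa) -> TWT of the schedule built with this kappa."""
    grid = [0.1 * k for k in range(1, 71)]
    return min(grid, key=twt_of_schedule)
```

The operation with the largest index is scheduled next; the grid search simply keeps the κ whose resulting schedule has the smallest TWT.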

Based on some preliminary experimentation, we discovered that itermax > 200000

does not lead to significant performance improvements of LS(itermax). The preliminary

experiments also reveal that the total number of 100 VNSG iterations is sufficient to

receive high-quality solutions.

We find that updating the set Vcr every iter* = 100 iterations is a compromise between

the solution quality and the computing time needed by LS.

Following Mati et al. (2011), we select the number of transitions in the diversification

phase of LS according to the distribution λ ∼ DU [5, 10]. The diversification phase

finishes as soon as a better solution is found. This process starts if the current solution

is not improved over the last η = 2000 iterations of the search algorithm.


Based on the computational results, we set r = 2 and kmax = 5 when solving the

SSPs within the SBH procedure. The probabilities p1 = 0.5, p2 = 0.4, p3 = 0.05, and

p4 = 0.05 are selected based on some preliminary experimentation. We notice that

the neighborhood structures N^S_k and N^T_k are the most important for finding high-quality

solutions.

The performance analysis of the SBH procedure is given only for Set 4 since several

SSPs arise only in this situation.

To compare the performance of different solution methods, we use the relative

improvement measure described in Subsection 3.4.2.

Computational results for Set 1

Set 1 consists of five problem instances for FJS scheduling for which the corresponding

TWT values are reported in the original article. The resulting TWT values found by

the methods applied to Set 1 are presented in Table 3.2.

Table 3.2: TWT values for Set 1.

Problem    tab1   ATC    CS     LS(20k)  LS(200k)  VNSG
WT1         137    159    158        84        63    60
WT2         368   1072   1072       545       282   279
WT3        1666    812    812       313       140   137
WT4         307    234    112       112       112   112
WT5        1209    288    215       166       166   166
Average     737    513    473       244       152   150

The best reported results are found by a TS denoted as tab1. It is interesting to note

that the results obtained by the ATC rule are on the average better than the original

ones. This can be explained by a possibly different implementation of the dispatching

rule and a slightly different selection of the parameters κ and FF . The computations

clearly show that the known results in the column tab1 can be significantly improved.

This is caused by the hierarchical approach which determines the routing of the jobs

prior to their sequencing on the machines (cf. Brandimarte 1993). Only after fixing the

initial routing, the results are refined by TS. Our heuristics tackle both the routing and

the machine sequencing problems simultaneously. The heuristics LS(200k) and VNSG

clearly outperform the remaining approaches. We also notice that VNSG is able to find

slightly better results than the LS(200k) scheme. However, the difference between the

results of the heuristics is minor. The average computing time per problem instance

for the LS(200k) approach is around 30 seconds, whereas the VNSG algorithm requires

more time to find a good solution, namely around 44 seconds per problem instance. The


larger computational burden of VNSG is the result of applying the LS scheme in every

iteration of the algorithm.

Computational results for Set 2

There are no previously reported TWT values for the problem instances from Set 2.

Therefore, we assess the performance of our heuristics against the best SDR-based list

scheduling approach.

The results of the computational experiments for Set 2 depending on the tightness of

the due dates are summarized in Figure 3.8.

Figure 3.8: Improvements δ depending on the due date tightness g for Set 2.

We clearly see that the advanced scheduling approaches can outperform the SDR

significantly at different levels of the due date tightness factor g. The performance of

the iterative methods considerably increases as the due dates become looser. This effect

can be explained by the decreasing TWT values and the more advanced search abilities

of these methods, while due date-oriented dispatching rules tend to make poor decisions

at the beginning of the scheduling process in the case of loose due dates. We notice that

ATC and CS perform significantly better than SDR, but not as well as LS and VNSG.

The results confirm that CS usually performs slightly better than ATC, especially if

the due dates are loose. A reason for this is the exponential term in the ATC priority

index which can decrease very fast as the due dates increase. This can lead to wrong

prioritization of the operations, especially at the beginning of the scheduling process.

Therefore, some other dispatching rules perform slightly better than ATC.

The heuristics LS(200k) and VNSG outperform the other methods. At the same time,

VNSG seems to be able to find slightly better solutions than LS(200k). This is the result

of its ability to escape local optima more effectively. The overall results for Set 2 are

provided in Table 3.3.


Table 3.3: Overall improvements δ for Set 2.

Method          ATC    CS     LS(20k)  LS(200k)  VNSG
Improvement δ   0.14   0.17   0.31     0.41      0.43

As indicated in Table 3.3, the heuristic LS(20k) performs considerably better than any

dispatching rule, but worse than LS(200k) and VNSG. The computing time needed by

LS(20k) to determine a solution is on average 1.3 seconds per problem instance, making

it a trade-off between computing time and solution quality. The average computing time

required by LS(200k) and VNSG is around 15 and 30 seconds, respectively.

Computational results for Set 3

Similar to the case of Set 2, we present computational results for the problem instances

from Set 3. These results are given in Figure 3.9.

Figure 3.9: Improvements δ depending on the due date tightness g for Set 3.

The overall performance values are summarized in Table 3.4.

Table 3.4: Overall improvements δ for Set 3.

Method          ATC    CS     LS(20k)  LS(200k)  VNSG
Improvement δ   0.24   0.24   0.30     0.33      0.33

The computational results in Table 3.4 are similar to the ones for Set 2. The simple

dispatching rules are significantly outperformed by ATC, CS, and the iterative heuristics.

Heuristics LS(200k) and VNSG are again able to find considerably better solutions, and

LS(20k) is a compromise between the required computing time and the solution quality.

If the due dates are loose then LS(200k) seems to slightly outperform VNSG. But the

overall results for both heuristics are very similar. We limit the computing time for the


heuristics to 30 seconds per problem instance. However, LS(200k) needs on average only

25 seconds to perform 200000 iterations. The time required by LS(20k) is around two

seconds per problem instance.

Summary on the results for Set 1, Set 2, and Set 3

The computational results obtained after executing the algorithms for all problem

instances from Set 1, Set 2, and Set 3 are interesting from several points of view. The

original problem instances are taken from different sources and thus, they have different

structural properties such as different machine environments and different routings. This

leads to a more complete picture of how the proposed heuristics perform. These results

also provide an estimate of the computing effort needed to solve the SSPs in the SBH

procedure. This is due to the fact that the size of the problem instances in these sets is

comparable with the size of the subproblems arising when solving the instances from Set 4 using

the SBH. We also point out that the obtained improvements conform to the results

for the TT performance measure presented by Scrich et al. (2004).

Computational results for Set 4

In the final step, we benchmark the proposed heuristic algorithms based on the large-

scale problem instances. We recall that the machines can be split into different machine

groups with disjoint functionality. Each machine belongs to exactly one machine group.

Hence, the SBH can be applied. We analyze the influence of different factors on the

performance of the proposed solution methods.

The following modifications of the SBH are considered in our computational study:

• SBH-LS - a variation of SBH where the SSPs are solved using LS(20k). During the

reoptimization phase the heuristic LS(50k) is applied. In addition, the computing

time for each SSP is limited to 30 seconds.

• SBH-VNS - a modification of the SBH where each subproblem is solved by the

VNS algorithm described in Subsection 3.3.3. LS(50k) is used for reoptimization.

The computing time limit for each SSP is again 30 seconds.

• SBH-LS-LI - a simplification of SBH-LS where only one reoptimization pass is

performed after the SBH is completed.

We start our analysis by examining the behavior of the scheduling algorithms for

problem instances from Set 4a. Figure 3.10 shows the average improvements over the

best SDR depending on the flexibility level in the production system.

The described heuristics perform significantly better than the various

dispatching rules, provided the flexibility level is low. If the flexibility level is high


Figure 3.10: Improvements δ for Set 4a depending on the flexibility level.

then the improvement potential seems to be limited. This effect can be explained quite

intuitively. At a high level of processing flexibility, it is easier to make operation-to-machine assignments and to subsequently sequence the operations. Assuming that

there are enough resources provided in the production system and taking into account

the fact that the machines within the machine groups are identical, the competition for

scarce resources between the operations belonging to different orders or jobs is not very

high. Many operations can be started on the parallel machines almost simultaneously.

Thus, the decision which operation to select first for processing on each machine

becomes less important. This also explains why the scheduling decisions made based

on the SDR lead to fairly high-quality solutions and leave less improvement potential

for the advanced algorithms. On the other hand, a lower flexibility level causes many

operations to compete for the scarce resources. This increases the importance of high-

quality scheduling decisions.

The relative performance of the algorithms can vary depending on the due date tightness

of the orders. Table 3.5 shows explicitly the performance δ of the algorithms at the

flexibility level 2.5. The results are presented depending on the due date tightness g.

The overall performance for the different levels of g, regardless of the flexibility level, is

represented in Table 3.6.

The relative performance of the algorithms improves as the due dates become looser. A

possible explanation of such behavior is given above. Regardless of the tightness factor g,

the proposed algorithms are able to find high-quality solutions. The results corresponding

to the flexibility level 2.5 are particularly interesting from a practical point of view since

in real life, machines are often expensive. Therefore, a setup where the operations strongly

compete for limited resources is quite common.

Our computational results show that the algorithms SBH-LS and SBH-VNS are able to

find solutions of a better quality, compared to the other algorithms. However, the results


Table 3.5: Improvements δ depending on g for Set 4a.

Due date tightness     g = 0.25              g = 0.50              g = 0.75
Approach            FL=2.5  #BS   all  #BS   FL=2.5  #BS   all  #BS   FL=2.5  #BS   all  #BS
ATC                 0.08    -    0.05  -     0.12    -    0.07  -     0.23    -    0.13  -
CS                  0.08    -    0.05  -     0.12    -    0.08  -     0.23    -    0.14  -
LS(20k)             0.10    -    0.06  -     0.15    -    0.09  -     0.27    -    0.16  -
LS(200k)            0.11    1    0.07  4     0.16    0    0.10  3     0.29    2    0.18  4
VNSG                0.11    2    0.07  3     0.16    1    0.10  3     0.30    1    0.18  3
SBH-LS-LI           0.10    -    0.06  -     0.15    -    0.09  -     0.25    -    0.16  -
SBH-LS              0.12    2    0.07  2     0.17    2    0.11  3     0.30    4    0.19  4
SBH-VNS             0.12    5    0.07  11    0.18    7    0.11  11    0.31    3    0.20  9

Table 3.6: Overall improvements δ for Set 4.

Set                 4a                     4b
Approach            FL=2.5  #BS   all  #BS   FL=2.5  #BS   all  #BS
ATC                 0.14    -    0.08  -     0.27    -    0.21  -
CS                  0.14    -    0.09  -     0.27    -    0.21  -
LS(20k)             0.17    -    0.10  -     0.30    -    0.24  -
LS(200k)            0.19    3    0.11  11    0.31    7    0.25  23
VNSG                0.19    4    0.12  9     0.33    9    0.26  11
SBH-LS-LI           0.17    -    0.10  -     0.26    -    0.19  -
SBH-LS              0.20    8    0.12  9     0.28    8    0.21  18
SBH-VNS             0.20    15   0.13  31    0.28    6    0.20  8

are close to the ones of LS(200k). A practically important indicator is the computing

time required to find a high-quality solution. Table 3.7 presents these values for each of

the algorithms.

Table 3.7: Average computing time per problem instance for Set 4a.

Method     LS(20k)  LS(200k)  SBH-LS-LI  SBH-LS  SBH-VNS
Time (s)   11       110       110        400     650

The time required by LS increases nearly linearly with the number of iterations. The

most time-consuming scheduling procedures are SBH-LS and SBH-VNS requiring 400


and 650 seconds of computing time per problem instance, respectively. Such a fairly

large computational burden is mainly caused by the need to formulate and find high-quality solutions of the SSPs (cf. Demirkol et al. 1997), and to perform reoptimization

in each step of the SBH procedure. We notice that the expected sizes of the SSPs for

Set 4 are comparable with the problem sizes of the instances from Set 2 and Set 3.

Despite being the most time-consuming method in our experiments, the SBH procedure

is able to find solutions of the highest quality.

The heuristic SBH-LS-LI requires considerably less computing time than SBH-LS or

SBH-VNS. But it is slightly outperformed by LS(200k). This confirms the importance

of the reoptimization step in SBH-type algorithms.

Summarizing the results for the problem instances from Set 4a, we conclude that

LS(200k) is a fast algorithm able to find high-quality solutions at a relatively low

computing cost. VNSG requires more computing time to obtain slightly better results.

However, it is less time-consuming than the SBH-based heuristics. If more time is

available for the scheduling decisions then SBH-VNS or SBH-LS are preferable.

Next, we present the results for Set 4b for the two flexibility levels in Figure 3.11.

Figure 3.11: Improvements δ for Set 4b depending on the flexibility level.

As in the case of identical machines, the improvement potential is clearly larger when

the flexibility level is lower. It is interesting to notice that the improvement rates are

considerably larger when the machines in the machine groups are not identical, even at

a high flexibility level. Note that for the Rm environment, the processing time depends

on the machine and the job. Hence, an appropriate assignment of jobs to machines is

crucial. As a result, scheduling decisions are more important than in the case of Pm.

One additional interesting insight is that the different SBH variants are clearly

outperformed by LS and VNSG. This result might seem to be surprising. But as already

mentioned above, the processing time of each operation can differ depending on the


underlying machine. As long as a machine group is not scheduled within the SBH,

the exact processing times of the corresponding operations are not known in advance.

The expected processing times are used for the calculations instead. Thus, the SSPs

are formulated based on the over- or underestimated values. At the beginning of the

scheduling process the subproblems are affected most. Therefore, the solutions of the

subproblems (especially for the bottleneck machine groups) are far from those which

correspond to an optimal or a nearly optimal schedule. This issue cannot be fixed until

all machine groups are scheduled. Only then can the processing times be estimated more

accurately. This effect leads to the SBH procedures being outperformed by the iterative

search-based heuristics.

The improvements for tight and moderate due dates are provided in Table 3.8. Again, we

report the results for the flexibility level of 2.5 in addition to the results for all problem

instances.

Table 3.8: Improvements δ for Set 4b depending on g.

Due date tightness     g = 0.25              g = 0.50
Approach            FL=2.5  #BS   all  #BS   FL=2.5  #BS   all  #BS
ATC                 0.20    -    0.14  -     0.34    -    0.27  -
CS                  0.20    -    0.14  -     0.34    -    0.27  -
LS(20k)             0.22    -    0.17  -     0.38    -    0.32  -
LS(200k)            0.23    5    0.17  14    0.39    2    0.33  9
VNSG                0.24    3    0.17  4     0.43    5    0.34  6
SBH-LS-LI           0.19    -    0.12  -     0.33    -    0.26  -
SBH-LS              0.20    1    0.13  1     0.36    2    0.29  3
SBH-VNS             0.20    1    0.13  1     0.35    1    0.27  2

We see that problem instances with moderate due dates lead to larger improvements.

Table 3.6 depicts the overall performance of the heuristics for Set 4b instances compared

to the results provided by the SDR. LS(200k) and VNSG outperform the remaining

heuristics. The performance of LS(20k) is similar, while it is computationally less

expensive. The SBH-LS performs slightly better than SBH-VNS. Again, this behavior

is caused by the poor release and due date information used to identify the subproblems.

We continue by presenting results for loose due dates, i.e., g = 0.75, in Figure 3.12. On

the one hand, the large improvements can be explained by the small TWT values. On the

other hand, the performance of due date-oriented dispatching rules, including the ATC

rule, is rather low in this situation. Figure 3.12 indicates that the SBH-type algorithms

outperform the two LS variants and the VNSG. This behavior is caused by the low


Figure 3.12: Improvements δ for Set 4b and g = 0.75.

quality of the initial solutions for LS(20k), LS(200k), and VNSG determined by the CS

approach. The SBH does not require any initial solution.

Finally, we look at the impact of the additional computing effort on the behavior of the

best performing algorithms. For this purpose, we consider the heuristics LS, VNSG, and

SBH-LS with computing time limits of 110, 400, and 650 seconds per problem instance.

The corresponding improvements over SDR are reported in Table 3.9. In addition, the

number of best solutions, abbreviated by #BS, found by a specific heuristic, and the

amount of computing time are shown.

Table 3.9: Performance of LS, VNSG, and SBH-LS depending on the computing time.

Heuristic         LS            VNSG          SBH-LS
Computing time    δ     #BS     δ     #BS     δ     #BS
110 s             0.21  53      0.21  23      0.20  44
400 s             0.23  55      0.22  12      0.24  54
650 s             0.23  35      0.23  18      0.26  68

The LS and VNSG heuristics outperform SBH-LS if a small amount of computing time

is available. VNSG is not able to perform 100 iterations within 110 seconds of computing

time. Consequently, SBH-LS performs slightly worse than LS and VNSG. However, the

increase of computing time allows the SBH to perform better than LS. This behavior is

caused by the ability of the SBH to compute partial schedules that are later improved by

LS in the reoptimization step. The LS heuristic outperforms VNSG when the computing

time increases. This can be explained by the fact that LS tries to improve the already

found solutions, whereas VNSG is more oriented towards exploring new regions of the

search space. Thus, VNSG is able to determine solutions that might be significantly


different from the incumbent. This is beneficial in the case of problem instances of small or

moderate size. For large-size instances, the LS as part of the VNSG scheme is not always

able to improve the solution that is a result of the shaking step. This leads to solutions of

lower quality. Increasing the number of LS iterations within VNSG might help to avoid

this issue. All of the problem instances for Set 1-4 and the corresponding best TWT

values are publicly available at http://p2schedgen.fernuni-hagen.de/index.php?id=174.

Summary of the computational experiments

Next, we summarize our main findings from the computational experiments to obtain

guidelines how to apply the various heuristics:

1. The number of machines available for each operation strongly influences the TWT

improvement obtained when the more advanced heuristics are compared with list

scheduling. A large flexibility, ensured by many alternative machines, reduces the

advantage of more sophisticated heuristics. The improvement in terms of the TWT

is smaller if identical machines are used.

2. For identical machines, the SBH-type heuristics outperform the remaining solution

approaches. When the machine environment consists of unrelated machines the LS-

based approaches perform better with the exception of loose due dates where the

SBH-based approaches perform best.

3. The SBH hybridized with the LS approaches is able to outperform the LS-

based approaches, provided enough computing time is available. However, if the

computing time is a concern then the iterative LS has to be chosen.

Based on the solution approaches described in this chapter, we continue investigating

different solution methods for the IPPS problem in Chapter 4.


Chapter 4

Heuristics for Integrated Process

Planning and Scheduling

In Chapter 3, we consider heuristics for a special case of the IPPS problem (2.4) where

the process plans are already selected and fixed for each product. Thus, a predefined

set of operations with the precedence constraints between them is scheduled on the

machines with the aim to minimize the TWT for the orders. In this chapter, we develop

heuristics for the IPPS problem which consider the process planning level as well. We

describe a decomposition technique to decouple the selection of process plans from

machine scheduling. After selecting a process plan for each product, the heuristics

from Chapter 3 are applied to evaluate the set of the process plans by calculating

the corresponding TWT values. We are interested to find a set of process plans that

leads to the smallest TWT value. This chapter is organized as follows. First, we briefly

recall the statement of the IPPS problem and discuss the related work about different

solution approaches for the problem in Section 4.1. The iterative decomposition scheme

for IPPS is presented in Section 4.2. We develop a VNS scheme for the IPPS problem

in Subsection 4.3.1. A memetic algorithm from the literature used for benchmarking

purposes is described in Subsection 4.3.2. Finally, we present several sets of problem

instances and the corresponding computational results for the considered heuristics in

Section 4.4.

4.1 Problem and Related Work

In this section, we consider different heuristics for the IPPS problem (2.4) stated in

Subsection 2.4.1. We briefly recall the problem statement. The modeled manufacturing

resources consist of groups of parallel machines, i.e., machine groups with identical as well

as unrelated machines are considered. The machine groups have disjoint functionality.

We model a manufacturing process for a set of different products Pi. There can be


several orders Ois for each product Pi. The products can have alternative manufacturing

processes described by the corresponding BOMs Biz. Each BOM consists of a set of

subparts. The subparts can have alternative routes. For each product, an appropriate

process plan has to be selected and the resulting operations have to be scheduled on

the machines with the aim to minimize the TWT for the orders. Note that if all of the

process plans are selected and fixed then the problem (2.4) reduces to the FJS scheduling

problem (3.1). The appropriate heuristics for FJS scheduling are described in Chapter 3.

We continue by discussing the related literature with respect to the three classes of

approaches for IPPS (see Subsection 2.3.1). We mainly focus on NLPP, because the

approach proposed in the present chapter has mostly NLPP features. A hierarchical

approach is proposed by Brandimarte and Calderini (1995) where the upper level selects

process plans, and the lower level deals with job shop scheduling. On the process

planning level, the process plans are represented by chains of macro-operations. Each

macro-operation refers to one of the alternative sequences of operations. Operation

cost and makespan are considered as objectives. Kim and Egbelu (1999) consider an

IPPS problem with makespan objective. Heuristics are proposed that combine branch-and-bound techniques with a MIP approach or dispatching rules for scheduling.

Weintraub et al. (1999) use TS to select process plans for a large-scale job shop with the

maximum lateness criterion. Instead of solving the resulting job shop scheduling problem

for each move, a lower bound is used. An SA approach for IPPS on the scheduling level

is discussed by Li and McMahon (2007). Neighborhood structures are considered which

change the process plans. Population-based metaheuristics are often proposed for IPPS

problems. Kim et al. (2003) design an evolutionary algorithm for IPPS with makespan

and mean flow time criterion. Park and Choi (2006) propose a GA which performs

process planning and scheduling simultaneously. A job shop environment with makespan

criterion is used. A particle swarm algorithm for IPPS is proposed by Guo et al. (2009)

for several objectives. Lian et al. (2012) describe an imperialist competitive algorithm

for IPPS with makespan criterion. A typical CLPP approach is given by the IPPS

system for holonic manufacturing proposed by Sugimura et al. (2007). A GA for Cmax

and mean tardiness is described by Phanden et al. (2013). The proposed system consists

of several modules, among them a module for process plan selection, for scheduling, and

for process plan modification. Simulation is used to determine the performance of the

schedules. The simulation models are generated using information on the current shop

floor status. Because of this feature, the approach contains characteristics of both NLPP

and CLPP.

Next, we discuss a few examples for DPP. Most of the recent approaches which belong

to this class are based on MAS because software agents can be implemented in a

distributed manner. Huang et al. (1995) propose a progressive approach for IPPS. A GA

for makespan minimization is proposed by Shao et al. (2009). This approach hybridizes

DPP with NLPP. Leung et al. (2010) describe an agent-based ACO scheme for IPPS

which contains features of both NLPP and DPP. Another agent-based implementation


that uses a GA for solving the integrated problem is proposed by Li et al. (2010b).

Because population-based approaches tend to be time-consuming, we apply a VNS-

based approach to select BOMs and routes for each product. It is well known that

VNS or iterated LS approaches tend to outperform population-based approaches or

neighborhood search approaches like SA or TS (cf. Voß and Woodruff 2006). The

proposed approach requires that an FJS scheduling instance is solved to evaluate a

move. Therefore, we use the efficient heuristics proposed in Chapter 3 for the TWT

measure.

4.2 Iterative Scheme for IPPS

Following Brandimarte and Calderini (1995), the IPPS problem can be decomposed into

two subproblems:

1. Determine process plans by selecting BOMs and routes for the products.

2. Schedule the operations of the subparts for the chosen process plans.

According to this approach, the process planner receives feedback from the scheduler

which in its turn can analyze feedback from the machines. Our overall IPPS scheme is

depicted in Figure 4.1.

The scheme connects a process planner, a scheduler, and the shop floor. The planner sends process plans to the scheduler and receives an assessment of these plans in return; the scheduler sends the final schedule to the shop floor for execution and can use the status of the shop floor as feedback.

Process Planner:

- selects routes and BOMs

- analyzes feedback from the scheduler

- sends process plans to the scheduler

- terminates the process planning cycle and informs the scheduler about the termination

Scheduler:

- determines schedules based on the proposed process plan, i.e., a set of BOMs and routes from the process planner

- calculates the objective function value of the schedule

- sends the result of assessing the current process plan to the process planner

- sends the final schedule to the shop floor after the process planning process is terminated

Figure 4.1: Iterative decomposition scheme for IPPS.


Note that this scheme does not give any description of the shop floor itself since we

are only interested in the integrated approach in this chapter. We will investigate the

extended model taking into account a dynamic production environment in Chapter 5.

The process planner is able to select and compare process plans based on the feedback

from the scheduler. We are interested in selecting a process plan which leads to the

smallest TWT value for the given orders.

The procedure starts by selecting an initial process plan. The quality of this plan is

assessed by the scheduler. Based on the TWT value, the planner then iteratively selects

new process plans which are evaluated by the scheduler. Each combination of a process

plan and a corresponding schedule can be found by this method. The decomposition of

the problem into two subproblems allows for applying sophisticated algorithms on both

the process planning and the scheduling level.

4.3 Algorithms for Process Planning

4.3.1 VNS Scheme

VNS is an LS-based metaheuristic which enriches a simple LS method to enable it to

escape from local optima. The LS is restarted from a randomly chosen neighbor of the

incumbent solution whenever it gets trapped in a local optimum. This restarting step

is called shaking. It is performed with different neighborhood structures. The approach

used in this work is based on skewed reduced VNS (SRVNS) described in Algorithm 4.1

according to (Hansen and Mladenovic 2001).

Algorithm 4.1 SRVNS Scheme

1: Initialization: Select neighborhood structures Nk, k = 1, ..., kmax, find an initial solution x and its objective value f(x). Initialize x∗ ← x. Choose a stopping condition and a parameter value λ. Set k ← 1.
2: repeat
3:   Shaking: Choose randomly x′ ∈ Nk(x).
4:   Improvement or not:
5:   if f(x′) < f(x∗) then
6:     Set x∗ ← x′.
7:   end if
8:   Move or not:
9:   if f(x′) − λρ(x, x∗) < f(x) then
10:    Set x ← x′, and k ← 1.
11:  else
12:    k ← k mod kmax + 1. Go to Step 2.
13:  end if
14: until Stopping condition is met.
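The loop of the SRVNS scheme can be sketched as follows. This is a minimal illustration, not the thesis implementation: the objective `f`, the neighborhood samplers, and the distance function `rho` are generic placeholders supplied by the caller.

```python
import random

def srvns(f, neighborhoods, rho, x0, lam=0.1, max_iter=500):
    """Skewed reduced VNS: shaking only, no inner local search."""
    x = x_best = x0
    k = 0
    for _ in range(max_iter):
        x_new = neighborhoods[k](x)                 # shaking in N_k
        if f(x_new) < f(x_best):                    # improvement or not
            x_best = x_new
        if f(x_new) - lam * rho(x, x_best) < f(x):  # skewed "move or not"
            x, k = x_new, 0
        else:
            k = (k + 1) % len(neighborhoods)        # next neighborhood
    return x_best
```

The skewed acceptance term `lam * rho(x, x_best)` lets the search drift through slightly worse solutions, which is what distinguishes SRVNS from plain reduced VNS.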


In Step 8 of the algorithm, we allow moves that do not necessarily improve the incumbent

solution, where ρ is a distance function for solutions and λ > 0 a scaling parameter. Note

that we avoid using LS inside VNS because this leads to a time-consuming solution of

many FJS scheduling instances.

Neighborhood structures

First, we introduce the neighborhood structures. Each product Pi, i = 1, ..., r might have

alternative BOMs. They are denoted by Biz (see Chapter 2). The number of subparts

in BOM Biz is denoted by |Biz|. Let Rizjl be the l-th route of the j-th subpart of BOM

Biz. The quantity |Rizjl| denotes the number of operations which have to be performed

according to this route to produce the corresponding subpart. The process plans PP = {PP1, ..., PPr} form the search space on the process planning level. We refer to PP as a global process

plan. Next, we introduce the following parametrized neighborhood structures:

• N1(k) - select randomly k different products Pi1 , ..., Pik . For each of the selected

products Pi with BOM Biz, select randomly a BOM Biz∗ with z ≠ z∗, if this is

possible. For the j-th subpart belonging to Biz∗ , select the same route as for the

j-th subpart in Biz. If this is not possible then select a route randomly.

• N2(k) - select randomly k different products Pi1 , ..., Pik . For each of the selected

products Pi with BOM Biz, select randomly s ∼ DU [1, |Biz|] subparts within this

BOM. For each of the selected subparts, select randomly an alternative route.

• N3(k) - select randomly k different products Pi1, ..., Pik from the set of all products P1, ..., Pr according to a discrete distribution with probabilities pi := (Σs wisTis) / (Σj,s wjsTjs), where the tardiness values are taken from the best solution already found. Note that Σj,s wjsTjs = 0 is not of interest because we then already know an optimal solution. For each selected product Pi with BOM Biz, select an alternative BOM Biz∗, z ≠ z∗, if this is possible, according to a discrete distribution with probabilities piz∗ := 1 − RPTiz∗ / (Σz RPTiz). Here, RPTiz is an estimate of the RPT required to produce product Pi according to BOM Biz. The routes are selected as for the neighborhood structure N1.

The neighborhood structures N1 and N3 explore the solution space by changing the

BOMs of the products, whereas N2 changes only the routes. Hence, it is not guaranteed

that an optimal solution can be reached by a finite number of moves, i.e., that these

neighborhood structures are connected (cf. Subsection 3.3.2). Therefore, we consider

combined neighborhood structures for a given value of k, for example, N2N1 or N2N3,

in the sense of a composition of mappings. The neighborhood structure N3 is similar to

N1. However, it takes into account information about the scheduling process when BOMs


are chosen. When orders Ois of a certain product Pi cause a large weighted tardiness

value then it is likely that changing the BOM and the routes of this product can lead to

an improved process plan. In addition, BOMs are ranked with respect to their RPT. It is

likely that a BOM with a small RPT value causes small waiting times of the operations.

In addition to the basic neighborhood structures N1, N2, N3 and the two combined

neighborhood structures, we consider the combinations N1N3, N3N1, N2N1N3, N2N3N1,

N2N1 • N3 and N2 • N1 • N3 in our computational experiments, where the notation Ns • Nt means that if we have z ≤ p for a given p ∈ [0, 1] then Nt is chosen, otherwise Ns. Here, the quantity z is a realisation of a random variable Z uniformly distributed over (0, 1), i.e., Z ∼ U [0, 1].
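As an illustration, the route-changing move N2(k) might be sketched as below. The data model (a dict mapping each product to its BOMs, and each BOM to lists of alternative routes per subpart) is an assumption made only for this example.

```python
import random

def n2_move(plan, products, k):
    """N2(k): for k randomly chosen products, keep the selected BOM but
    re-pick routes for s ~ DU[1, |B_iz|] of its subparts."""
    # deep-enough copy so the incumbent plan is left untouched
    new_plan = {p: (bom, dict(routes)) for p, (bom, routes) in plan.items()}
    for p in random.sample(sorted(products), k):
        bom, routes = new_plan[p]
        subparts = products[p][bom]           # subpart -> list of route ids
        s = random.randint(1, len(subparts))  # s ~ DU[1, |B_iz|]
        for sp in random.sample(sorted(subparts), s):
            routes[sp] = random.choice(subparts[sp])
    return new_plan
```

N1 and N3 would differ only in that they first re-pick the BOM before the routes.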

Initial process plan

Next, we explain how we tailor the SRVNS scheme for problem (2.4). For computing

an initial solution, an initial process plan PP^0 = {PP^0_1, ..., PP^0_r} is determined either

randomly or the corresponding BOMs are selected according to RPT ranking using

the probabilities piz. The corresponding routes are selected randomly with respect to

a discrete uniform distribution. The resulting FJS scheduling problem is solved using

the iterative LS heuristic described in Chapter 3. In the remainder of this chapter, we

use the notation LS(ω) to indicate that ω iterations are performed by the heuristic. We

abbreviate this heuristic that finds an initial solution by Hinit.

Parallel implementation

We parallelize the shaking step as proposed by Hansen and Mladenovic (2003). This

helps to eliminate the disadvantage of having no LS within the SRVNS approach.

We accept a worse solution compared to the incumbent solution similar to threshold

accepting by selecting ρ(x, x∗) := TWT(x∗) − (1/λ) TWT(x), i.e., the difference of the TWT

value of the best solution x∗ found so far and the TWT value of the incumbent solution

x multiplied with a scaling parameter used as threshold. The tailored parallel SRVNS

is abbreviated by TPSRVNS. The incumbent process plan is denoted by PP and the

best one found so far by PP ∗. We summarize the TPSRVNS scheme as Algorithm 4.2.

The quantity nexpl used in the shaking step determines the number of elements in the

neighborhood considered in parallel. Parameter β ≥ 1 defines to which extent worse

solutions are accepted. When β = 1 is chosen, then TPSRVNS accepts only better

solutions, while large values of β lead to a situation where a large portion of the new

solutions are accepted. We select the parameter as:

β := 1 + α exp{1 − itermax / (itermax − i)}.  (4.1)


Algorithm 4.2 TPSRVNS Scheme.

1: Initialization: Select neighborhood structures Nk, k = 1, ..., kmax. Determine an initial solution using Hinit and the corresponding objective function value TWT^0. Initialize PP∗ ← PP^0, PP ← PP^0, TWT∗ ← TWT^0. Set k ← 1.
2: repeat
3:   Shaking: Generate randomly nexpl process plans PPi ∈ Nk(PP), such that PPi ≠ PPj, i ≠ j. For each of the process plans PPi, i = 1, ..., nexpl, formulate and solve the corresponding FJS scheduling problem. The resulting objective function values are TWTi, i = 1, ..., nexpl. Choose the process plan PP′ which leads to the smallest value of TWT.
4:   Improvement or not:
5:   if TWT′ < TWT∗ then
6:     Set TWT∗ ← TWT′, PP∗ ← PP′.
7:   end if
8:   Move or not:
9:   if TWT′ < β TWT∗ then
10:    Set PP ← PP′, TWT ← TWT′, and k ← 1.
11:  else
12:    k ← k mod kmax + 1. Go to Step 2.
13:  end if
14: until itermax iterations are reached.

The parameters i and itermax in Equation (4.1) are the current and the maximum

number of iterations, respectively. The parameter α defines the acceptance rate of worse

solutions at the beginning of the algorithm. If i = itermax then we obtain β = 1. The

acceptance rate decreases until itermax iterations are performed. Therefore, it is unlikely

that worse solutions are accepted at the end of the search.

Step 3 of the TPSRVNS scheme requires that each newly selected process plan is assessed

by solving the related FJS scheduling instance. Therefore, we have to solve efficiently

large-scale FJS scheduling problems. We know from Chapter 3 that different heuristics

for large-scale FJS scheduling problems might require a different amount of computing

time. While the SBH-type procedures require a large amount of computing time, the

proposed LS heuristic requires only modest computing times to compute high-quality

solutions. Hence, the LS approach is used for assessing the quality of the process plans

in TPSRVNS.

Scheduling strategies

Although the LS approach is fast, performing a large number of iterations in TPSRVNS

might still lead to computing times that are unpractical. Because of the large search

space of TPSRVNS in case of many products, often a large number of iterations have

to be performed before a high-quality process plan is found. This requires that several

hundreds of FJS problems have to be solved. But there is no need to find high-quality

schedules for low-quality process plans. It is likely that such process plans occur at the


beginning of the search process. To deal with this problem, we introduce the notion of

scheduling strategies. A scheduling strategy consists of rules that help to decide how

well the FJS problem has to be solved at the current stage of the search process. As

soon as TPSRVNS reaches a promising region of the search space, we have to determine

high-quality schedules because otherwise high-quality process plans are not considered.

Our scheduling strategies are based on the LS approach. The LS approach starts from

an initial schedule which has the lowest TWT value provided by list scheduling using

SDR, and the global ATC dispatching rule with appropriate settings of the look-ahead parameter κ (cf. Mönch and Zimmermann 2004). The different strategies are

abbreviated by Si. They are shown in Table 4.1. The notation SDR + LS(20) indicates

that SDR is used to compute an initial solution for the LS.

Table 4.1: Scheduling strategies.

Strategy Scheduling approach

S1 [0–100]: SDR

S2 [0–100]: CS

S3 [0–100]: LS(20)

S4 [0–100]: LS(200)

S5 [0–100]: LS(2000)

S6 [0–100]: LS(20,000)

S7 [0–50): SDR, [50–100]: CS

S8 [0–50): SDR, [50–80): CS, [80–100]: LS(20,000)

S9 [0–50): SDR, [50–80): LS(20), [80–100]: LS(20,000)

S10 [0–80): CS, [80–95): LS(20), [95–100]: LS(20,000)

S11 [0–90): SDR, [90–100): LS(20), [100–100]: LS(20,000)

S12 [0–90): CS, [90–100): LS(20), [100–100]: LS(20,000)

S13 [0–90): SDR + LS(20), [90–100): LS(2000), [100–100]: LS(20,000)

S∗14 [0–90): SDR + LS(20), [90–100): LS(2000), [100–100]: LS(20,000)

We use the expression [l − u) : H to indicate that the scheduling heuristic H is applied

after l and before u percent of the itermax iterations in TPSRVNS. [l − u] : H means

that H is applied for u percent as well. S14 is similar to S13, but in specific iterations

the search is restarted from the best already found process plan PP ∗. Therefore, S14 is

marked with the symbol *.
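The interval notation of Table 4.1 suggests a simple lookup from search progress to the scheduling heuristic to apply. The sketch below encodes strategy S9 under this reading; the heuristic names are labels only, and the list-of-tuples representation is an assumption for illustration.

```python
def strategy_heuristic(strategy, i, iter_max):
    """Return the scheduling heuristic for iteration i, where a strategy is
    given as [(lower_pct, upper_pct, inclusive_upper, name), ...]."""
    pct = 100.0 * i / iter_max
    for lo, hi, inclusive, name in strategy:
        if lo <= pct < hi or (inclusive and pct == hi):
            return name
    raise ValueError("progress not covered by strategy")

# S9: [0-50): SDR, [50-80): LS(20), [80-100]: LS(20,000)
S9 = [(0, 50, False, "SDR"),
      (50, 80, False, "LS(20)"),
      (80, 100, True, "LS(20000)")]
```

Cheap heuristics are used while the process plans are still poor, and the expensive LS(20,000) only near the end of the search.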

4.3.2 Memetic Algorithm

Because we are interested in comparing the TPSRVNS scheme with an existing state-of-the-art algorithm, we extend the GA proposed by Park and Choi (2006). The problem

considered in Park and Choi (2006) is a special case of problem (2.4) because each


product has a unique BOM which consists of a single subpart. GAs combined with LS,

i.e., memetic algorithms, often outperform conventional GAs (cf. Hasan et al. 2009).

Therefore, we propose a memetic algorithm for problem (2.4). It is summarized in

Algorithm 4.3.

Algorithm 4.3 GA for IPPS.

1: Initialization: Generate an initial population of chromosomes Ci, i = 1, ..., nga, where nga is the size of the population. Determine Ci either by using Hinit or randomly. Calculate the corresponding objective function values TWT_Ci, i = 1, ..., nga. Select a number of iterations iterga for the GA. Define the crossover probability pcr and the mutation probability pmut.
2: repeat
3:   for i from 1 to ⌈nga/2⌉ do
4:     Selection: Select parent chromosomes Cx and Cy based on the roulette wheel method.
5:     Crossover: With probability pcr, apply the standard two-point crossover for Cx and Cy by introducing a part of Cy into Cx in order to obtain a child chromosome C∗. Set C∗ ← C′∗ where TWT_C′∗ = min{TWT_C | C ∈ N^p_k(C∗)}. Set TWT_C∗ ← TWT_C′∗. The chromosomes are repaired if they become infeasible.
6:     Swap the roles of the parents to obtain a child chromosome C∗∗.
7:     Add C∗ and C∗∗ to the population.
8:   end for
9:   Mutation:
10:  With probability pmut, select a chromosome C from the population. Set C ← C′ ∈ N^p_k(C) \ {C}. Set TWT_C ← TWT_C′.
11:  New population:
12:  Select nga chromosomes with the smallest TWT values to form a new population.
13: until iterga iterations are reached.

For the sake of completeness, we recall the ingredients of the GA proposed by

Park and Choi (2006). A vector whose entries represent the operations of orders is used.

Its length is given by the number of operations assigned to the orders. A gene at position

i is a 3-tuple

gi := (ai, bi, ci),

where ai specifies the order which contains this operation, bi is the selected route for

the order and ci is the machine assigned to the operation. The sequence of the genes

specifies the processing sequence of the corresponding operations, i.e., if ci = cj for two

operations and i < j then the operation represented by position i is processed before

the operation represented by j on the machine encoded by ci. The same holds if ai = aj

and i < j, i.e., if the operations represented by positions i and j belong to the same

order then the operation associated with position j is processed only after the operation


associated with position i, otherwise the technological precedence constraints might be

violated.
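Under this convention, a chromosome can be decoded into per-machine processing sequences by a single left-to-right pass. The sketch below is a simplified reading of the representation, not the thesis code.

```python
from collections import defaultdict

def decode(chromosome):
    """Split genes (order, route, machine) into per-machine sequences;
    gene position determines the processing order on each machine."""
    machine_seq = defaultdict(list)
    for order, route, machine in chromosome:
        machine_seq[machine].append(order)
    return dict(machine_seq)
```

Since genes are read in order, operations of the same order also appear in positional order, which is exactly the precedence convention described above.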

A standard two-point crossover is applied in the algorithm (cf. Park and Choi 2006,

Sobeyko and Monch 2015). It can happen that after the crossover the operations of the

same order correspond to different alternative routes, bi ≠ bj while ai = aj for indices

i and j. If required, the child chromosome is repaired to maintain its feasibility. Two

mutation operators based on neighborhood search are considered (cf. Park et al. 2003).

The permutation-based neighborhood N^p_k(c) of a given chromosome c consists of all chromosomes which can be obtained from c after exchanging the positions of k different randomly selected genes. The remaining genes maintain their positions. The first mutation operator considers all chromosomes which can be obtained from N^p_k(c) \ {c}. This operator is applied as the general mutation strategy. The second mutation operator considers all chromosomes which can be obtained from N^p_k(c). This operator is applied immediately after the crossover. It is used to determine the chromosome from N^p_k(c) which represents the solution with the smallest TWT value.
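Sampling a single element of the permutation-based neighborhood N^p_k can be sketched as follows, assuming a chromosome is simply a list of genes; the function name is illustrative.

```python
import random

def sample_npk(chromosome, k):
    """Sample a neighbor from N^p_k: pick k distinct gene positions
    and apply a random permutation among them; all other genes stay put."""
    positions = random.sample(range(len(chromosome)), k)
    shuffled = positions[:]
    random.shuffle(shuffled)
    neighbor = list(chromosome)
    for src, dst in zip(positions, shuffled):
        neighbor[dst] = chromosome[src]
    return neighbor
```

Enumerating the whole neighborhood (as the second mutation operator does for small k) would iterate over all permutations of the selected positions instead of drawing one at random.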

The roulette wheel method is used as selection operator, while a steady-state GA

with overlapping populations is applied. Although the Cmax objective is considered

by Park and Choi (2006), we apply this algorithm to the TWT objective function.

We improve the obtained schedule by applying the LS approach to each chromosome

immediately before calculating the corresponding objective function value. We refer to

this modification of Algorithm 4.3 as GALS.

4.4 Computational Experiments

4.4.1 Reference Heuristics

We also consider two simple heuristics called RND-PP and RPT-PP which are similar

to Hinit. The heuristics determine nref different process plans. For each of these process

plans, scheduling using the LS approach is performed. The obtained objective values are

averaged. RND-PP randomly selects the process plans, while the process plan selection

in RPT-PP is based on RPT ranking as described in Subsection 4.3.1.

4.4.2 Design of Experiments

We present several sets of problem instances for IPPS which we refer to as Set 1, Set 2,

Set 3, and Set 4 with the main characteristics summarized in Table 4.2.


Table 4.2: Main characteristics of the problem instances for IPPS.

Factor \ Level             Set 1     Set 2                    Set 3                             Set 4
Problem size               /s/–/m/   /m/, /l/                 /s/, /m/, /l/                     /l/
Machines                   3–12      10–20, 20–40             3–9, 5–25, 36–60                  36–60
Machine groups             1–8       5, 10                    3, 5, 12                          12
Machines per group         1–6       ∼DU[2,4]                 ∼DU[1,3], ∼DU[1,5], ∼DU[3,5]      ∼DU[3,5]
Ready time range a         0         0                        0                                 {0.0, 0.15, 0.3}
Due date tightness b       –         {0.25, 0.5, 0.75}        {0.3,0.5,0.7}, {0.3,0.5,0.7},     0.4
                                                              {0.2,0.4,0.6}
Order weights wis          wis ≡ 1   wis = 4 with p = 0.2, 2 with p = 0.6, 1 with p = 0.2
Number of products         –         5                        2, 5, 10                          10
BOMs per product           1         1                        2, ∼DU[2,6], ∼DU[6,8]             ∼DU[6,8]
Subparts per BOM           1         1                        ∼DU[2,4], ∼DU[4,6], ∼DU[10,12]    ∼DU[10,12]
Routes per subpart         1–6       ∼DU[3,7]                 ∼DU[1,2], ∼DU[1,3], ∼DU[1,4]      ∼DU[1,4]
Operations per subpart     2–6       ∼DU[18,22], ∼DU[80,120]  ∼DU[2,4], ∼DU[6,8], ∼DU[8,12]     ∼DU[8,12]
Operation processing time  –         ∼DU[1,10]                ∼DU[1,10]                         ∼DU[1,10]
Orders per product         –         3, 5                     1–2, 3, 1–4                       1–4
Total number of orders     5–9       15, 25                   3, 15, 25                         25
Nodes in solution graph    –         300, 2500                50, 400, 3000                     3000
  for FJS
Number of instances        5         120                      60, 30, 30                        20


First, we describe the generation process of the BOMs and routes for the products.

We select randomly sets of subparts to generate product Pi. Based on these sets, the

corresponding BOMs Biz with a convergent structure are generated as follows. Each

BOM is specified by a corresponding DAG. We start by selecting randomly a subpart

which becomes the root node of the DAG. We continue selecting randomly a subpart

v which is included in the graph corresponding to the already generated portion of the

BOM Biz. A randomly selected subpart is added to Biz to form a new node u with arc

(u, v). This process continues until all the |Biz| subparts are covered. In the next step,

we generate routes for each subpart. We randomly select a chain of operations. The

same operation can be repeated several times in this chain. Then we randomly generate

the desired number of permutations of these operations to obtain the set of alternative

routes for the subpart.
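The generation of a convergent BOM can be sketched as follows: starting from a root subpart, each further subpart is attached by a single arc to a node of the already generated portion, which yields an in-tree DAG. The (root, arcs) representation is an assumption for illustration.

```python
import random

def generate_bom(subparts):
    """Build a convergent BOM as (root, arcs): each added subpart u gets
    exactly one arc (u, v) to a node v already in the partial BOM."""
    pool = list(subparts)
    random.shuffle(pool)
    root, nodes, arcs = pool[0], [pool[0]], []
    for u in pool[1:]:
        v = random.choice(nodes)  # attach to the already generated portion
        arcs.append((u, v))
        nodes.append(u)
    return root, arcs
```

Because every node except the root has exactly one outgoing arc, the result is always a tree directed towards the root, i.e., a convergent structure.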

The ready times ris of the orders are generated according to

ris ∼ DU [0, ⌈a · pmax⌉],  (4.2)

where a ∈ [0, 1] determines how widespread the ready times are, and pmax is an estimate

of the total RPT required to finish all orders of a problem instance. The due dates are

generated according to

dis = ris + ⌊b · FF · pis⌋,  (4.3)

where pis is the expected RPT of order Ois. The quantity b ∈ (0, 1] influences the due

date tightness, while the parameter FF is the estimated flow factor, i.e., the ratio of

the difference of the completion time and the ready time and the RPT. The weights

of the orders are generated according to the scheme shown in Table 4.2. Each machine

is assigned to exactly one machine group. A single machine group is assigned to each

operation. The number of machines per machine group is again specified in Table 4.2.
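Equations (4.2) and (4.3) translate directly into the following sketch; DU denotes a discrete uniform draw, and the function names are illustrative.

```python
import math
import random

def ready_time(a, p_max):
    """r_is ~ DU[0, ceil(a * p_max)]  (Equation 4.2)."""
    return random.randint(0, math.ceil(a * p_max))

def due_date(r, b, flow_factor, p_expected):
    """d_is = r_is + floor(b * FF * p_is)  (Equation 4.3)."""
    return r + math.floor(b * flow_factor * p_expected)
```

With a = 0 all orders are ready at time zero, as in Sets 1–3, while larger a values spread the ready times over the horizon; smaller b values make the due dates tighter.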

We give a general description for each of the considered problem sets. We start by Set 1

taken from the literature.

Set 1

This set consists of five problem instances described in different papers. Three problem

instances for makespan minimization are proposed by Park and Choi (2006). In addition,

we consider two instances for makespan suggested by Lee and DiCesare (1994) and

Lee et al. (2002), respectively. We refer to this set as Set 1. The characteristics of the

Set 1 instances are summarized in Table 4.2.

Next, we generate randomly several sets of problem instances for problem (2.4).


Set 2

The first generated set of instances is structurally similar to the instances from

Park and Choi (2006). Each product has only one BOM which consists of one subpart.

Each subpart might have multiple alternative routes. This instance set is called Set 2.

We solve the instances of Set 2 for both the makespan and the TWT criterion.

Set 3

The next generated set consists of 120 problem instances. It includes 60 small-size

instances which can be solved using the MIP approach, 30 instances with a moderate size,

while the remaining 30 instances are large. We denote these instances by Set 3/small,

Set 3/moderate, and Set 3/large, respectively.

Set 4

To discover the impact of different ready times on the performance of the heuristics, we

generate a set of instances abbreviated by Set 4. This set consists of large-size problem

instances. Two levels of the ready time range are considered.

We abbreviate the small-size instances by /s/, the moderate-size instances by /m/, and

the large-size instances by /l/, respectively.

As in the case of FJS, we use the relative improvements to compare the performance of

different solution methods. This approach is described in detail in Subsection 3.4.2.
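The exact definition of the relative improvements δ is given in Subsection 3.4.2. As a rough illustration only, assuming the common definition of a relative improvement with respect to a reference heuristic (the function name and the zero-reference handling below are our own choices, not taken from the thesis):

```cpp
#include <cassert>

// Relative improvement of a heuristic objective value over a reference
// value (e.g., the TWT obtained by RND-PP or RPT-PP). A positive delta
// means the heuristic improves on the reference; delta = 0.26 reads as
// "26% better". This is only a sketch of the usual definition.
double relativeImprovement(double referenceValue, double heuristicValue) {
    if (referenceValue == 0.0) {
        return 0.0;  // reference schedule already has zero TWT; no improvement measurable
    }
    return (referenceValue - heuristicValue) / referenceValue;
}
```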

4.4.3 Implementation Issues and Parameter Settings

The proposed heuristics are coded in the C++ programming language, while the MIP

model (2.5)–(2.17) is implemented using CPLEX 12.1.0. The GA-type algorithms are

implemented using the GALib framework (cf. Wall 1996). The TPSRVNS is parallelized

based on the threading functionality of the Qt 4.8.5 library. Within each VNS iteration,

nexpl threads are started simultaneously. The computational experiments are conducted

on a PC with a quad core Intel Core i7 3.40 GHz processor and 16 GB of RAM.

For the list scheduling approach based on the global ATC rule, the values of the

parameter κ are selected from the grid 0.01 + 0.25k; k = 0, ..., 20. The global ATC rule

requires estimating the waiting times of the operations. Therefore, flow factor values

FF are determined based on the schedule with the smallest TWT value found by the

dispatching rules of SDR (see Subsection 3.3.1) after selecting the BOMs and routes

randomly.
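The following sketch shows the general shape of an ATC-type priority index with a flow-factor-based waiting time estimate. It is illustrative only, not the exact global ATC rule of the thesis; all parameter names are assumptions.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch of an ATC-type priority index (not the exact global ATC rule of
// the thesis). For an operation of order j at time t:
//   - w:       order weight
//   - p:       processing time of the operation
//   - d:       order due date
//   - remProc: remaining processing time after this operation
//   - ff:      estimated flow factor; ff * remProc approximates the
//              remaining lead time including waiting (the thesis derives
//              FF from an SDR schedule with randomly chosen BOMs/routes)
//   - kappa:   scaling parameter, chosen from the grid 0.01 + 0.25k
//   - pAvg:    average processing time on the machine group
// Larger index values mean higher priority.
double atcIndex(double w, double p, double d, double remProc,
                double ff, double kappa, double pAvg, double t) {
    double slack = d - p - ff * remProc - t;  // estimated slack of the operation
    return (w / p) * std::exp(-std::max(slack, 0.0) / (kappa * pAvg));
}
```

An already-late operation has non-positive slack and therefore receives the maximum priority w/p, while the priority of operations with large slack decays exponentially, controlled by κ.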


We perform experiments on a limited set of problem instances to choose the following

parameters based on trial and error. The acceptance rate in expression (4.1) is selected

as α = 0.15. The value of kmax is the number of different products. We apply

Nk := N(k), k = 1, ..., kmax where N is one of the eleven neighborhood structures

described in Subsection 4.3.1. We use 1, 2 and 4 as values for nexpl. We consider 50, 100,

250 and 500 as values for itermax. The settings N2N1, itermax = 500 and nexpl = 1 are

used as default values. When considering N2N1 •N3(k), we set p = 0.5, while p = 1/3 is

used in the case of N2 •N1 •N3(k).

For the GAs described in Subsection 4.3.2, we select the crossover probability as

pcr = 0.8. The mutation probability is pmut = 0.2. The population size and the number

of iterations for the GAs are defined as nga = 50 and iterga = 200, respectively. We apply

the heuristic LS(500) in each iteration of the GALS and LS(20,000) in its last iteration.

For the neighborhood structure Npk of the memetic algorithm we use the parameter

k = 3 as suggested by Park et al. (2003).

4.4.4 Computational Results

Computational results for Set 1

The goal of the experiments for the Set 1 instances is to compare the obtained results with

the ones from the literature. The corresponding Cmax values are shown in Table 4.3.

Table 4.3: Cmax results for Set 1.

Instance   Reported   GA     GALS   TPSRVNS
   1         360      346    316     310
   2          23       24.8   23.2    23.3
   3          88       87     87      87
   4          62       56.4   56      56
   5          61       58     58      58

In most cases, the heuristics, including the GA by Park and Choi (2006), find slightly better

solutions than the ones reported in the literature. The GALS outperforms the GA, which

is a consequence of applying LS. TPSRVNS provides the same Cmax values as GALS

except for instance 1 where it outperforms GALS. This instance has the largest size

among the Set 1 instances. The computing time required by any of the heuristics is

three seconds for the first instance, two seconds for the second instance and at most

one second for any of the remaining instances. The proposed heuristics are able to find

solutions comparable to the ones reported in the literature. However, the algorithms for

problem (2.4) are not tailored to the Cmax measure. It seems possible to replace

the LS approach within the TPSRVNS algorithms by scheduling heuristics designed for

the Cmax objective, such as those described by Gonzalez et al. (2015).


Computational results for Set 2

The experiments for Set 2 aim to find the best performing heuristics with respect to

solution quality and computing time. Table 4.4 shows the performance of the heuristics

for the Cmax and the TWT objective relative to the results for the RND-PP heuristic.

The RND-PP heuristic is used since only a single BOM per product is available for the

problem instances of Set 2.

Table 4.4: Improvements δ for Set 2.

            Moderate-size instances     Large-size instances
Criterion   GA    GALS   TPSRVNS       GA    GALS   TPSRVNS
Cmax        0.03  0.06   0.07          0.05  0.06   0.08
TWT         0.11  0.17   0.26          0.11  0.15   0.23

The GALS and the TPSRVNS perform well for moderate- and large-size problem

instances. However, TPSRVNS slightly outperforms GALS, while GALS leads to smaller

Cmax values than the GA approach. When the TWT is considered, the difference in the

performance of the algorithms is considerably larger. Again, the TPSRVNS approach

outperforms the GALS scheme. At the same time, the difference between the results of

GA and GALS is slightly smaller than the difference between TPSRVNS and GALS.

The average computing time per moderate-size instance required by the GA, the GALS

and the TPSRVNS is around 20s, 130s, and 21s, respectively. The computing times

are around 540s, 2050s, and 200s for the large-size instances of Set 2. The larger

computational burden of the GAs is caused by the population that has to be maintained. The

impact of different order due date ranges on the performance of TPSRVNS on Set 2 is

presented in Table 4.5. The algorithm performs better when the due date range increases.

However, wide due date ranges might lead to small TWT values, which imply larger relative

improvements.

Table 4.5: Performance of TPSRVNS for Set 2 depending on the due date tightness.

       Moderate-size instances      Large-size instances
       Tight  Moderate  Wide        Tight  Moderate  Wide
TWT    0.08   0.16      0.55        0.06   0.13      0.51

The results in Table 4.4 and Table 4.5 demonstrate that the TPSRVNS approach is

able to find high-quality solutions, especially for the TWT measure. This is caused

by the different ways in which the described heuristics work. We point out that the GA-

type approaches compute new process plans and schedules simultaneously. Thus, the

scheduling procedure is less dependent on the selection of the process plans. The

TPSRVNS determines new schedules based on the selected process plans, while in the

GA case the schedules are computed in advance without any preliminary knowledge


of which operations have to be scheduled and what the precedence constraints between

the operations are. If the schedule is infeasible due to the crossover, then it has to be

repaired. In the TPSRVNS case, however, the set of operations as well as the precedence

constraints between them are known in advance, and this knowledge is exploited in

the scheduling procedure. This leads to high-quality schedules. Hence, the corresponding

process plans can be assessed in a better way. In the case of the GA-type algorithms,

the quality of the obtained schedules is often low. Therefore, even appropriate process

plans are not considered anymore. This observation is supported by the fact that the

GALS outperforms the GA, since LS is performed only on the scheduling level. The LS

scheme helps to improve solutions. Based on the results for Set 2, we conclude that

the TPSRVNS approach outperforms the GA-type algorithms. Therefore, the remaining

computational experiments for Sets 3 and 4 are carried out only for TPSRVNS.

Computational results for Set 3

We start by studying the performance of TPSRVNS for the 60 small-size instances of Set

3 when the scheduling strategies from Table 4.1 are applied. Using the MIP approach,

51 instances of Set 3/small are solved to optimality. The average computing time for

these instances is around 50,000s per instance. A maximum computing time

of 300,000s per instance is allowed for the remaining nine instances. The average MIP

gap is 56% for these instances. We show the TPSRVNS solution quality compared to

CPLEX, expressed as 1 − δ, in Figure 4.2 and Figure 4.3.

Figure 4.2: Solution quality of TPSRVNS for Set 3/small and different strategies.

The TPSRVNS scheme is considered with itermax = 250 and itermax = 500, respectively. In both situations,

the neighborhood structures N2N1 for nexpl = 1 are used. The setting itermax = 500

often outperforms itermax = 250. However, the computing times are doubled when

itermax = 500 is used. The performance of the TPSRVNS scheme depends on the selected scheduling strategy.

Figure 4.3: Computing times of TPSRVNS for Set 3/small and different strategies.

The best performing strategy is S6 because LS(20,000)

is able to find high-quality schedules within each iteration. The solutions obtained by

S6 are on average only 1% worse than the optimal solutions found by CPLEX. The

computing time per problem instance is around 80s for itermax = 250, while it is 160s

for itermax = 500. The strategy with the smallest computational effort is S1. It requires

only around 2s for itermax = 250. However, the solution quality is poor since it applies

only the SDR approach for scheduling. We are interested in a trade-off between solution

quality and computing effort. We select the strategy S3 to look at the performance

of the neighborhood structures since it quickly provides high-quality solutions.

Figure 4.4: Performance of the neighborhood structures for Set 3/moderate and S3.

Note that the strategies S11, S12, S13, and S14 only slightly outperform the S3 strategy for the

instances of Set 3/small. However, S3 has the advantage that it is purely based on

LS(20). Therefore, the effect of the neighborhood structures is not affected by the chosen


solution technique in the individual iterations. The performance of the different neighborhood

structures relative to the solutions obtained by RPT-PP is shown for instances of Set

3/moderate and strategy S3 in Figure 4.4. We see that N2 leads to low-quality solutions.

This is expected because N2 does not consider BOMs, but only the subpart routes.

However, the neighborhood structures containing N2, such as N2N1, N2N3, N2N1 •N3,

and N2 •N1 •N3, perform best.

Selecting scheduling strategies

Next, we determine scheduling strategies for Set 3/moderate that lead to a trade-off

between solution quality and computing time. The results are presented in Figure 4.5

and Figure 4.6. We exclude S6 from Figure 4.5 due to its huge computational burden (see Figure 4.3).

Figure 4.5: Performance of different scheduling strategies for Set 3/moderate.

Figure 4.6: Computing times of different scheduling strategies for Set 3/moderate.

The neighborhood structures N2N1 and N2 • N1 • N3 are considered

for itermax = 250 and itermax = 500, respectively. We observe that the strategies which

rely only on SDR lead to poor solution quality, although the computing effort is low.

The best performing strategies require a large amount of computing time. Therefore,

the strategies S4, S10, S12 and S13 are chosen for further consideration. In the next

step, we look for favourable combinations of neighborhood structures and scheduling

strategies. For this purpose, we consider the already preselected neighborhood structures

and scheduling strategies. The computational results obtained for itermax = 250 are

shown in Figure 4.7 and Figure 4.8. The strategies S12 and S13 are the most interesting.

Figure 4.7: Impact of neighborhood structures and scheduling strategies on improvements (itermax = 250).

Figure 4.8: Impact of neighborhood structures and scheduling strategies on computing times (itermax = 250).

S12 outperforms S4 and S10, but requires less computing time than S4 and only slightly


more than S10. Though performing worse, S13 requires significantly less computing time

than the remaining strategies. The best performing neighborhood structures for S12 and

S13 are N2N3 and N2 •N1 •N3. These neighborhood structures and scheduling strategies

are used to solve the instances of Set 3/large for itermax = 500. The corresponding

computational results are shown in Figure 4.9 and in Figure 4.10.

Figure 4.9: Impact of neighborhood structures and scheduling strategies on improvements for Set 3/large (itermax = 500).

Figure 4.10: Impact of neighborhood structures and scheduling strategies on computing times for Set 3/large (itermax = 500).

The strategy S12 outperforms S13 with respect to solution quality. However, strategy S13 leads to a trade-off between solution quality and computing time. The solutions found by applying S13

are on average around 10% worse than those by S12 when compared to the RPT-PP

heuristic. In addition, the results show that N2N3 and N2 • N1 • N3 lead to a similar

solution quality for itermax = 500. However, N2 • N1 • N3 performs slightly better if

itermax is smaller.


Impact of parallelization

We study the impact of parallelization on the TPSRVNS performance for the scheduling

strategy S12 and the neighborhood structure N2 •N1 •N3 relative to RPT-PP. The results

depending on itermax and nexpl are depicted in Figure 4.11 and Figure 4.12.

Figure 4.11: Impact of parallelization on the improvements by TPSRVNS.

Figure 4.12: Impact of parallelization on the computing times by TPSRVNS.

We see in Figure 4.12 that the computing time increases only slightly if nexpl

becomes larger, provided that at least nexpl computing kernels are used. Solutions with

smaller TWT values are obtained by increasing nexpl if the available computing time

is fixed, while larger nexpl values allow computing solutions of the same quality

as in the case of nexpl = 1 within a smaller amount of computing time.
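The parallel exploration per VNS iteration can be sketched as follows. This is an illustrative reimplementation with std::thread rather than the Qt 4.8.5 threading actually used; the evaluate callback is a placeholder for applying a random move from the current neighborhood structure and scheduling the resulting process plan.

```cpp
#include <cassert>
#include <limits>
#include <mutex>
#include <random>
#include <thread>
#include <utility>
#include <vector>

// Illustrative sketch of one parallelized VNS exploration step: nExpl
// threads each explore the current neighborhood independently, and the
// best neighbor found overall (smallest objective value, e.g., TWT) is
// kept. The types and the evaluate function are assumptions, not the
// thesis implementation.
template <typename Solution, typename Evaluate>
std::pair<Solution, double> exploreInParallel(const Solution& current,
                                              int nExpl,
                                              Evaluate evaluate) {
    std::mutex bestMutex;
    Solution best = current;
    double bestValue = std::numeric_limits<double>::infinity();

    std::vector<std::thread> threads;
    for (int i = 0; i < nExpl; ++i) {
        threads.emplace_back([&, i] {
            std::mt19937 rng(i);          // per-thread random stream
            Solution candidate = current; // private copy for this thread
            double value = evaluate(candidate, rng);
            std::lock_guard<std::mutex> lock(bestMutex);
            if (value < bestValue) {      // keep the best candidate found
                bestValue = value;
                best = candidate;
            }
        });
    }
    for (auto& t : threads) t.join();
    return {best, bestValue};
}
```

With at least nExpl cores, the threads run concurrently, which matches the observation above that the wall-clock time grows only slightly with nexpl.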


Table 4.6 depicts the impact of different order due date ranges on the improvements of

TPSRVNS relative to RPT-PP for itermax = 250 and S12.

Table 4.6: Improvements δ depending on the due date setting for Set 3/large.

                  Due date tightness
                Tight (b = 0.2)   Moderate (b = 0.4)   Wide (b = 0.6)
N2N3                 0.13              0.36                0.77
N2 • N1 • N3         0.12              0.35                0.77

TPSRVNS performs slightly better on the Set 3 instances than on the Set 2 instances. This is caused by the larger

room for optimization, where the routes as well as the BOMs are involved.

Computational results for Set 4

Finally, we study the impact of ready times on the performance of the TPSRVNS scheme.

For this purpose, we solve the Set 4 instances. The corresponding results for S12 and

itermax = 500 are shown in Table 4.7 for two different neighborhood structures.

Table 4.7: Improvements δ depending on the ready time setting for Set 4.

                  Ready time range
                Tight (a = 0.0)   Moderate (a = 0.15)   Wide (a = 0.3)
N2N3                 0.38              0.45                0.48
N2 • N1 • N3         0.39              0.49                0.49

We see that wider ready time ranges lead to larger improvements. The neighborhood

structure N2 •N1 •N3 often performs slightly better than N2N3. This is especially true

for a = 0.15. These findings are important because we intend to assess the proposed

heuristics in a rolling horizon setting.

The problem instances and computational results presented in this chapter can be found

at http://p2schedgen.fernuni-hagen.de/index.php?id=174.


Chapter 5

Simulation-based Assessment of

the Integrated Heuristic in a

Dynamic Manufacturing

Environment

Until now, IPPS problems have been considered in a static manufacturing environment.

However, the shop floor status can change rapidly over time in real-life applica-

tions. Recent research such as Monch and Rose (2004), Monch and Drießel (2005), and

Sourirajan and Uzsoy (2007) suggests that the performance of different heuristic solution

approaches has to be assessed in dynamic and stochastic manufacturing environments.

Such an approach makes it possible to discover the potential of the developed methods in practice.

In this chapter, a simulation framework is presented which is then used to assess the

performance of the heuristics presented in Chapter 3 and Chapter 4. This part of

the work is organized as follows. First, we briefly describe the problem of interest in

Section 5.1 and give an overview of the related work in Section 5.2. Next, we consider the

rolling horizon approach used within our simulation study in Section 5.3. In Section 5.4,

we give an overview of the applied simulation framework. The results of our simulation

study are presented in Section 5.5.

5.1 Problem Statement

In real-life complex manufacturing systems, the following issues can occur:

• The set of customer orders is usually not entirely known in advance in dynamic

manufacturing environments. This is, for example, the case in the corrugated board

and plastic film industries where some orders can be canceled and new rush orders



can arrive within a couple of hours. In many cases this can cause the production

schedule to become infeasible, i.e., it cannot be executed. Trying to "repair" the

schedule using simple methods might result in a low-quality schedule.

• As described by Pinedo (2008), most real-life scheduling problems are NP-hard

(see Garey and Johnson 1979). Hence, trying to solve the problem at once will,

most likely, lead either to unacceptably long computing times or to scheduling

solutions of low quality.

• Real-life manufacturing environments are also characterized by dynamic and often

stochastic effects occurring during the production processes. Most commonly, the

processing times of jobs on the machines can vary within some particular range

and/or follow a certain probability distribution. The quality of schedules that do not

take this into account can deteriorate very quickly. Stochastic effects in the form of

machine breakdowns have to be tackled as soon as possible to avoid and/or reduce

the negative impact on the production.

To overcome these issues, appropriate solution methods can be applied. Moreover, it is

desirable to have at least an estimated performance assessment of such solution methods

before implementing them in real manufacturing systems. It is highly desirable to have

such estimates over a sufficiently long period of time. This is mainly required due to the

fact that producing goods is often expensive. Therefore, applying improper production

control methods might lead to high (implicit) losses.

Models of manufacturing which do not recognize uncertainty can be expected to result

in inferior process planning and scheduling decisions compared to models that explicitly

account for uncertainty (cf. Mula et al. 2006). This explains why simulation-based

studies within dynamic and stochastic production environments gain a lot of attention.

They make it possible to assess the performance of the entire production system by creating a

model which reflects the important characteristics of the manufacturing facility, and

investigating its behavior under different experimental conditions. It is also possible to

simulate the behavior of the model over a longer period of time while only a fraction

of the corresponding real time is required. In such a case, the real production is not

impacted. Simulation can be used to perform "what-if" analysis when investigating the

potential impact on the manufacturing process.

In view of the IPPS problem (2.4), we are interested in answering the following questions:

• Does the IPPS have a positive impact on the performance in a dynamic

manufacturing environment?

• How well do the developed solution approaches perform in such environments over

longer periods of time?

• Is it required to apply advanced scheduling to reach an appropriate performance

of the system? Are dispatching rules sufficient for this purpose?


We try to answer these questions based on a simulation study in Section 5.5. Note that

due to the complexity of the IPPS problem, the primary focus of this part of the work

is a dynamic simulation environment; stochastic effects, namely

stochastic processing times and machine breakdowns, are not considered.

5.2 Related Work

Next, we give an overview of the related work. Weintraub et al. (1999) present a

method to minimize manufacturing cost while satisfying due dates. Alternative process

plans for jobs are considered. The authors develop a simulation-based scheduling

algorithm for lateness minimization. A TS-based algorithm is applied to identify better

process plans. Lee and Kim (2001) propose simulation-based GAs for IPPS. Makespan

and maximum lateness are the considered objective functions. A simulation model

is applied to compute the performance measure values corresponding to the selected

process plans. The resulting values are then used to derive fitness values for the

GA that searches for better process plans. Monch et al. (2003) propose a simulation

framework for the performance assessment of shop-floor control systems, discussing the

concept and design criteria of such a framework. The basic idea of the concept is to

use a simulation model emulating the shop floor, sending information to the control

system and receiving information back from it. The framework has a modular structure

and the control system contains a data layer encapsulating the current shop floor status

and the control information. Monch (2007) suggests using a simulation test-bed and a

software architecture that allows plugging in arbitrary production control software.

Different application areas, the advantages, and also the limitations of the suggested

approach are discussed. Sims (1997) suggests using simulation approaches for IPPS, i.e.,

the integrated problem is solved by means of "what-if" analysis based on simulation.

Park et al. (2008) apply simulation-based planning and scheduling to anticipate future

schedule changes by simulating different "what-if" scenarios considering real-time shop

floor data. Zhang et al. (2015) study a simulation-based GA for remanufacturing process

planning and scheduling. In this approach, the solution quality is evaluated using Monte

Carlo simulation, where the production schedule is generated following the specified

process routes.

Sethi and Sorger (1991) develop a framework for the common business practice of rolling

decision making. They consider a stochastic dynamic optimization problem in which

the future uncertainty can be partly resolved at some cost. Time-based decomposition

methods reducing large optimization problems to a number of subproblems of smaller

size are studied by Kreipl and Pinedo (2004). Mason et al. (2004) consider different

rescheduling strategies when minimizing the TWT in dynamic environments with

possible machine breakdowns and introduction of high-priority jobs. Their experiments

reveal that rescheduling of the entire shop floor might lead to significant performance


improvements. However, this approach is time-consuming. Wu and Ierapetritou (2007)

apply a rolling horizon strategy for a simultaneous solution of production planning

and scheduling problems under uncertainty. The production plan for the current stage

of the rolling approach is passed to the scheduling problem. Then the scheduling

problem is solved using a continuous time formulation. Ovacik and Uzsoy (1995) present

a family of rolling horizon heuristics for minimizing maximum lateness on parallel

identical machines in the presence of sequence-dependent setup times and dynamic

job arrivals. This work shows that such methods significantly outperform dispatching

rules combined with LS. Their performance advantage is particularly notable in

highly congested manufacturing environments. Monch and Drießel (2005) perform a

simulation study of a hierarchical approach to minimize TWT in complex job shops.

Monch et al. (2007) conduct simulation experiments in a dynamic job shop to assess the

performance of a modified SBH. It turns out that using near-optimal subproblem

solution procedures leads to improved results compared to dispatching-based methods.

Drießel and Monch (2012) use a rolling horizon approach for integrated scheduling and

material handling in complex job shops. Simulation experiments in a dynamic job

shop environment are conducted. The suggested solution approach outperforms common

dispatching rules in many situations.

Wong et al. (2006) propose an agent-based negotiation model for IPPS in a job shop.

The job agents and the machine agents negotiate in order to select a process plan

and to compute the corresponding schedule. This negotiation process is based on the

information from a representation of the problem in the form of an AND/OR graph. The

proposed approach is assessed using a simulation model. Leung et al. (2010) present an

ACO algorithm for IPPS. This algorithm is incorporated into a MAS where individual

ants are implemented in the form of software agents. Cmax is considered as the optimization

criterion. A simulation study is performed to assess the performance of the proposed

method. Phanden et al. (2013) describe a GA for Cmax and mean tardiness. The

proposed system consists of several modules, among them a module for process plan

selection, for scheduling, and for process plan modification. Simulation is used to

determine the performance of the schedules. The simulation models are generated

using information on the current shop floor status. Petronijevic et al. (2016) develop

an agent-based methodology for makespan optimization and perform a simulation study

of the proposed approach in an environment with dynamic job arrivals and machine

breakdowns. The authors conclude that the system’s adaptability increases due to

changing of process plans and schedules when properly reacting on different disturbances.

The number of simulation studies for the IPPS problem in an FJS manufacturing

environment seems to be rather limited. To the best of our knowledge, there are no

such studies that rely on a rolling decision making approach and apply the TWT as

performance measure. Thus, we investigate the problem (2.4) in a simulation study

applying a rolling horizon setting.


5.3 Rolling Horizon Setting

A rolling horizon approach is commonly used to make decisions in dynamic and

stochastic environments (cf. Sethi and Sorger 1991). It is a time-based decomposition

method which is primarily used to tackle the issues stated in Section 5.1. The rolling

horizon approach involves making the most immediate decisions for a certain time

horizon in the future based on the current state of the system and on input data available

within the considered horizon up to a certain point of time (cf. Pinedo 2008). Since this

method can result in optimization problems of a smaller size, it becomes attractive

for many industrial planning and scheduling problems (cf. Li and Ierapetritou 2010). In

real life, customer demand becomes less certain over time. Customer orders are usually

known or can be predicted with a high level of certainty only for a limited period of

time in the future. Thus, the rolling horizon approach can be applied to perform process

planning and scheduling for this period. The following periods can be considered later

on when new information about the orders is available.

We describe a rolling horizon approach for the simulation-based assessment of the

heuristics presented in Chapter 3 and Chapter 4 to solve the IPPS problem. Since it

is a time-based decomposition scheme, we consider a set of time points

tk = k · t∆,     (5.1)

where k = 0, 1, .... By t∆ we denote the time interval between two subsequent process

planning and scheduling cycles. At each point of time tk, we solve the IPPS problem for

all orders which arrive up to a certain point of time in the future and taking into account

the state of the base system. This point of time is defined as tk + th, where th represents

a planning horizon. Orders Ois with ris ≥ tk + th are not considered. We define th as

th = t∆ + tah,     (5.2)

where tah is the additional planning horizon. We summarize the basic rolling horizon

approach in Algorithm 5.1.

Algorithm 5.1 Rolling Horizon Approach

1: Initialization: Set k ← 0, tk ← 0, th ← t∆ + tah.
2: repeat
3:    Consider the current state of the production system.
4:    Define orders Ois such that ris < tk + th.
5:    Solve the IPPS problem using the input data from Step 3 and Step 4.
6:    Set tk+1 ← tk + t∆.
7:    Try to implement the obtained schedule for the period [tk, tk+1].
8:    Set k ← k + 1.
9: until Stopping condition is met.
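A minimal C++ sketch of this rolling horizon loop is shown below. The Order type and the solveAndImplement callback are placeholders (the actual IPPS solver and schedule execution are not shown); only the time decomposition follows Algorithm 5.1.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the rolling horizon loop of Algorithm 5.1. tDelta is the
// interval between two planning cycles, tAh the additional (overlap)
// horizon, so the planning horizon is th = tDelta + tAh (Eq. (5.2)).
// The Order type and the callback are illustrative assumptions.
struct Order {
    double readyTime;  // r_is
};

void rollingHorizon(const std::vector<Order>& allOrders,
                    double tDelta, double tAh, int numPeriods,
                    const std::function<void(const std::vector<Order>&,
                                             double, double)>& solveAndImplement) {
    double tk = 0.0;
    double th = tDelta + tAh;                 // planning horizon
    for (int k = 0; k < numPeriods; ++k) {    // stand-in for the stopping condition
        std::vector<Order> visible;           // orders with r_is < tk + th
        for (const Order& o : allOrders) {
            if (o.readyTime < tk + th) visible.push_back(o);
        }
        // Solve the IPPS problem for the visible orders and try to
        // implement the schedule for the period [tk, tk + tDelta).
        solveAndImplement(visible, tk, tk + tDelta);
        tk += tDelta;                         // tk+1 = tk + tDelta (Eq. (5.1))
    }
}
```

With tAh = 0 the periods do not overlap; with tAh > 0 each planning problem also sees orders arriving shortly after the period being implemented.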


According to Algorithm 5.1, the planning horizon "rolls" over the periods [tk, tk+1],

k = 0, 1, .... If the additional horizon tah = 0, then the planning periods do not overlap.

In case tah > 0, a time decomposition with overlapping is considered. This time-based

decomposition scheme is depicted in Figure 5.1.

Figure 5.1: Rolling horizon time decomposition for IPPS. (The figure shows the time points t0, t1, t2, t3, each period of length t∆ extended by the additional horizon tah, and the interaction between the process planning & scheduling level and the base system.)

We consider a dynamic manufacturing environment with order arrivals over time. We

point out that the process planning and scheduling procedure can also be triggered when

one or more new orders arrive.

5.4 Simulation Framework

For a simulation-based assessment of the integrated heuristic we use a simulation

framework based to a large extent on the one proposed by Monch et al. (2003) and

summarized in Figure 5.2.

Figure 5.2: Simulation framework. The production control level consists of the process planning & scheduling module, which encapsulates the logic for IPPS and is implemented as a server able to run on any node in the network, and of a data layer with a data model, i.e., data structures and rules describing the execution level, with a client implementation in the form of a DLL. The execution level consists of the simulator, the simulation software AutoSched AP 9.3, running model(s) based on the static problem instances for IPPS. The two levels communicate via callbacks and XML data over TCP/IP.


The simulation framework consists of two main parts:

1. A production control system which includes process planning and scheduling, and

is responsible for computing schedules that have to be executed on the shop floor.

2. Simulation software which mimics the behavior of the shop floor, including

dynamic order arrivals.

Next, we give a more detailed description for each part of the framework.

5.4.1 Simulation Module

We start by describing the simulation environment. The purpose of the simulation

module is to emulate the behavior of a real manufacturing environment and to enable the

performance assessment of the control software. The simulation model has to describe

the important characteristics of the corresponding base system and base process, such

as:

• the manufacturing capacities, the availability of machines;

• possible ways of manufacturing the products (BOMs, routings), possible operation-

to-machine assignment rules, processing times on the machines;

• the release schedule of the orders, i.e., how they arrive over time.

The simulator is able to receive information representing the most recent decisions

from the control system. The machines have to follow the computed schedules during

the simulation process. The current state of the base system has to be considered

each time the control system creates a new schedule. For this purpose, the simulation

software is able to communicate the shop floor status to the controller. This information

includes the status of the orders at each machine as well as the information on the

orders/subparts/operations which are already completed. If a machine is busy at some

moment of time then an estimate is required of when this machine will become available again. In fact, when using a data layer, also known as a blackboard (see below), the status

changes of the base system can be communicated to the control system immediately after

they occur.

Another requirement on the simulator is its ability to collect statistics such as the real

completion times of the orders, or the machine loads. Based on the collected data, it is

possible to assess the overall performance of the modeled manufacturing facility, and of

the control module with the applied process planning and scheduling algorithms.

In some papers, simulation is used for decision making, thus being a part of the control

system (cf. Phanden et al. 2013, Zhang et al. 2015). We point out that in our study, the

developed heuristics are applied during the simulation process, i.e., the simulation relies

on the process plans and schedules provided by the control module.


5.4.2 Control Module

The purpose of the control module is to support the process of making decisions in the

production environment. It consists of two parts: a data layer and a planning module.

Data layer

The data layer, sometimes also called a blackboard (cf. Mönch et al. 2003), is a part of

the control module which represents an interface between the simulation software, or

the real base system, and the control module. It contains data structures and rules used

in the production system. The data layer reflects the current state of the base system at

any point of time. For example, it describes the customer orders which already arrived.

It also contains the execution status of each operation and tracks the progress on each

order and the subparts required to produce this order. It is also possible to reflect the

status of the machines in case they start/finish processing of the operations.

Capturing the current state of the manufacturing system in real time requires a fast

communication mechanism between the data layer and the base system itself, i.e., the

simulation software. Therefore, we use the approach of callback functions: as soon as

there is any change on the execution level, this information is immediately fed back to

the data layer.
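This callback mechanism can be sketched as follows (a simplified, hypothetical interface; the actual implementation hooks into AutoSched AP's extension points):

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of the callback mechanism: the data layer registers
// handlers, and the simulator invokes notify() as soon as anything changes
// on the execution level, so the status change is fed back immediately.
class DataLayer {
public:
    using StatusCallback =
        std::function<void(const std::string& machine, const std::string& event)>;

    void onStatusChange(StatusCallback cb) { callbacks_.push_back(std::move(cb)); }

    // Called by the simulator, e.g., from an extension point of the
    // simulation software, whenever a machine starts or finishes an operation.
    void notify(const std::string& machine, const std::string& event) {
        for (auto& cb : callbacks_) cb(machine, event);
    }

private:
    std::vector<StatusCallback> callbacks_;
};
```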

The business rules within the data layer determine the behavior of the base system

in different situations. An example is the selection of jobs at the machines. Usually,

there are several orders waiting for their processing in front of the machines. It has to

be decided which order to select for processing next. This can be done based on the

sequence of the orders in the computed schedule. We point out that the data layer also

contains objects that describe the schedule for execution.

In our simulation model, orders can arrive dynamically. The new order arrivals have

to be taken into account to avoid inferior performance of the system. Therefore, the

data layer contains rules for triggering the process planning and scheduling module in

view of the new circumstances. In case of the time-based decomposition described in

Section 5.3, process planning and scheduling is triggered at certain predefined points of

time tk. Note that it is also possible to trigger the process planning and scheduling cycle

only after a certain number of new orders arrive in the system.
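Both triggering rules can be combined in a small sketch (names and thresholds are illustrative assumptions; the actual rules reside in the data layer):

```cpp
// Sketch of the two triggering rules: replan either at the predefined
// points t_k = k * tDelta, or as soon as a given number of newly arrived
// orders has accumulated since the last planning cycle.
class ReplanTrigger {
public:
    ReplanTrigger(double tDelta, int orderThreshold)
        : tDelta_(tDelta), threshold_(orderThreshold) {}

    void onOrderArrival() { ++newOrders_; }

    // Returns true if a new process planning and scheduling cycle is due.
    bool shouldReplan(double now) {
        if (now >= nextTk_ || newOrders_ >= threshold_) {
            while (nextTk_ <= now) nextTk_ += tDelta_;  // advance to next t_k
            newOrders_ = 0;
            return true;
        }
        return false;
    }

private:
    double tDelta_;
    double nextTk_ = 0.0;
    int threshold_;
    int newOrders_ = 0;
};
```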

Process planning and scheduling module

This module has to be able to compute high-quality production schedules based on the

current status of the base system. Depending on the industry, not only the quality of

the solutions but also the efficiency of the algorithms implemented within this module


can be important. The required computing time to find an appropriate solution has to

be comparable to the time duration which an operation or a subpart spends on the

machines. If the process planning and scheduling cycle requires too much time then the

production can be impacted and many local decisions have to be taken "ad hoc" while

there is no recent schedule available. The latter can cause inferior performance of the

manufacturing system and can have a negative impact on future decisions. Therefore, the

process planning and scheduling module has to be able to compute schedules efficiently

and feed them back to the base system as soon as possible. In dynamic manufacturing

environments, the efficiency of the production control module is expected to have a

positive impact on the quality of the generated schedules as well. This is related to

the possible state changes of the base system. When a new schedule is computed, the

production control module captures the most recent state of the system. The less time

the solution process needs to find a high-quality schedule, the less likely it is that the result becomes outdated.

The process planning and scheduling module depicted in Figure 5.2 is designed to

implement the decomposition approach for the IPPS problem as described in Chapter 4.

An overview of the applied solution architecture is given in Subsection 5.4.3. This

architecture allows the planning and scheduling algorithms to be replaced without a

need to stop the system. Hence, this makes the framework of the process planning and

scheduling module more generic.

The process planning and scheduling module is able to receive the current state of

the base system as well as some input parameters impacting the solution process. The

computed process plan and the corresponding schedule are immediately fed back to the

data layer.

Interaction of the data layer with the process planning and scheduling module

An important aspect is the communication between the modules within the production

control system. As depicted in Figure 5.2, the planning module and the data layer are

able to exchange information in both directions. Unlike the communication between the

simulator and the data layer, the communication between the planner and the data layer

is implemented based on the networking technology TCP/IP (cf. Fall and Stevens 2011).

Information in form of XML messages (see Bray et al. 1997) flows between the two

modules. We implement a client-server architecture for the communication process.

Note that this architecture is suitable for multi-agent implementations as described

by Ivanovic et al. (2017).

The process planning and scheduling module is implemented in form of a server

application. It permanently listens for new incoming messages and as soon as new data

arrive, they are parsed and the IPPS problem is solved. The computed schedule is

converted into the XML format and sent back to the data layer.


The data layer contains a client module which exchanges messages with the process

planning and scheduling module. As soon as it is decided to trigger the process planning

and scheduling cycle, the relevant data are collected and converted into an XML message.

Next, the client module tries to establish connection with the process planning and

scheduling server and transmits the message. We point out that the communication

between the client and the server is synchronous, i.e., the connection is kept open and

the client waits until the server sends back a new schedule. As soon as this happens,

the connection closes, the received XML message is parsed and the schedule becomes

available to the base system.

Such a network-based, decoupled architecture offers the following benefits:

1. The planning module and the data layer can be maintained independently.

2. It is possible to implement the planning module and the data layer even using

different technologies.

3. It is easy to implement alternative process planning and scheduling approaches without the shop floor ever noticing this on the technical level.

4. The planning module and the data layer can be executed on different nodes in the

network, thus enabling the common practical approach of using dedicated servers

for planning activities.

Next, we present the software architecture of the described module for IPPS.

5.4.3 Architecture of the Process Planning and Scheduling Module

The decomposition approach for solving the IPPS problem presented in Section 4.2 has

the advantage that there is no tight coupling between the process planning and the

scheduling modules. This makes it possible to develop appropriate algorithms for process planning and for production scheduling independently of each other in a plug-in-like manner.

The idea of this approach is to

• make the implemented solution algorithms easily interchangeable, even at run time;

• significantly improve maintainability by logical and technical decoupling of the

implemented algorithms while using common data structures for information

exchange;

• allow easy creation of new process planning and scheduling modules and use them

as building blocks in the final IPPS solution.


Figure 5.3: Solution architecture of the decomposed IPPS approach. (The class Solver is the general interface for all algorithms. On the process planning side, the class Planner covers algorithms such as VNS, GA, or SA; on the scheduling side, the class Scheduler covers algorithms such as VNS, LS, or SBH. Both sides exchange data through common structures.)

To achieve these goals, we develop a solution architecture depicted in Figure 5.3.

Based on the object-oriented implementation approach, we use the class Solver as an

interface describing the generic functionality any object encapsulating process planning

or scheduling algorithms has to provide. We distinguish between solvers which are used for process planning and inherit from the Solver's child class Planner, and the ones used for scheduling purposes and inherit from the Solver's child class Scheduler. A

significant advantage of generic interface classes is that the (server) application (see

Subsection 5.4.2) developed for IPPS does not have to be aware of the algorithms which it

tries to execute, except how to call them and what the input/output data are. Although

the architecture depicted in Figure 5.3 is designed for the decomposed process planning

and scheduling, it does not limit the production control module regarding the solution

approach for the integrated problem. From the technical point of view, it is possible

to implement a module tackling process planning and scheduling simultaneously, for

example by applying the memetic algorithm described in Subsection 4.3.2. The same

generic interface class Solver can be used.
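A minimal sketch of this class hierarchy might look as follows (the single solve() entry point and the data structures are illustrative assumptions; the actual classes exchange the common data structures of the core DLL):

```cpp
#include <memory>

// Sketch of the generic interface hierarchy from Figure 5.3.
struct ProblemData { /* orders, BOMs, routes, machine states, ... */ };
struct Solution    { double twt = 0.0; /* process plans, schedule, ... */ };

// General interface for all algorithms: the server application only needs
// to know how to call a solver and what its input/output data are.
class Solver {
public:
    virtual ~Solver() = default;
    virtual Solution solve(const ProblemData& data) = 0;
};

class Planner : public Solver {};    // process planning algorithms (VNS/GA/SA ...)
class Scheduler : public Solver {};  // scheduling algorithms (VNS/LS/SBH ...)

// A trivial placeholder scheduler showing how any algorithm can be invoked
// through the common interface without knowledge of its internals.
class DummyScheduler : public Scheduler {
public:
    Solution solve(const ProblemData&) override { return Solution{42.0}; }
};
```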

Each algorithm or group of algorithms exists in form of a DLL which provides convenient

interface functions for triggering the relevant solution method implementation. Such an

approach makes the process planning and scheduling algorithms interchangeable. Thus,

new planning approaches can be implemented according to the "plug & play" principle.

Since the process planning and scheduling functionality exists in form of DLLs, it can be loaded on demand by the (server) application. It is not required to stop the application

if new plug-ins have to be installed or the existing ones have to be updated.

The classes Planner and Scheduler rely on the data structures which are commonly

used to exchange data between the objects. These data structures are implemented in

a core DLL which is the only library required to be used during the run time for the

newly developed modules.


Next, we present computational results for the simulation-based performance assessment

of the integrated heuristic within the described simulation framework using the rolling

horizon setting.

5.5 Computational Experiments

5.5.1 Design of Simulation Experiments

Simulation software

There is a variety of different software products which can be used to simulate real-

life shop floors. Among others, we select AutoSched AP 9.3 which is also considered in

(Mönch et al. 2003). It is commonly used to model and simulate production processes

in semiconductor manufacturing facilities. We adapt it for the considered integrated

problem.

Approach to generate simulation models

Simulation models for assessing the performance of the integrated heuristic can be

generated based on the static problem instances described in Chapter 4. These problem

instances already provide such data as

• the description of manufacturing resources;

• the structure of products, i.e., BOMs and routes;

• the structure of demand, i.e., the customer orders.

For the corresponding simulation model, we specify in addition the following information:

• a simulation horizon Tsim;

• a number of repeats for each order nrep;

• a deterministic order interarrival time tia.

This information is added to the model when we convert a static problem instance

into the AutoSched AP’s input data format. The parameter settings are described in

Subsection 5.5.2.

Preserving the selected BOMs and routes

In practical applications, it is important that the selected process plan does not

significantly change within at least a certain time horizon. This requirement is to a large

extent explained by the preliminary work preparation process in the plants. Different


materials used during the manufacturing process, such as inks or chemicals, as well as

different tools, such as printing stereos or dies, have to be prepared and/or delivered

from the stock before the actual manufacturing begins. Hence, we assume that once a

process plan is selected for a certain order, it does not change anymore. For the sake

of completeness, we also consider scenarios where BOMs and routes can change during

every process planning and scheduling cycle.

Initialization issues for the IPPS module

Each time the IPPS procedure is triggered, the state of the base system is communicated

to the control module. It is possible that some machines are busy at this moment. At

the same time, the subparts of certain orders might be processed as well. We point out

that not all subparts start or finish their processing simultaneously. Certain subparts of

the orders can be already finished, while some other subparts can be in progress, or still

waiting for processing. Therefore, we define the following rules to initialize the process

planning and scheduling module.

• If no subparts of the order started their processing then the order can undergo

process planning and scheduling as depicted in Figure 5.4, provided the BOM and

routes of this order are allowed to be changed. Note that this is always the case

for the new orders which are not planned yet.

Figure 5.4: The change of the entire process plan is allowed.

• If at least one subpart of the order started processing and it is allowed to change

the subpart routes then the remaining subparts can receive new alternative routes

as depicted in Figure 5.5. Furthermore, the order can be (re)scheduled.

Figure 5.5: Only the change of subpart routing is allowed.

• If the machine performs a task then it is estimated when this machine will become

available. This case is depicted in Figure 5.6. The expected availability time t̃ is used during the process planning and scheduling cycle.


Figure 5.6: Estimated time t̃ when the machine M will become available.
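The availability estimate can be sketched as follows (an illustrative helper; the actual estimation inside the control module may be more involved):

```cpp
#include <algorithm>

// Sketch of the availability estimate for a busy machine: if the current
// operation started at startTime and has (estimated) processing time
// procTime, the machine is expected to become available at
// startTime + procTime; the estimate is clamped so that it never lies
// before the current time "now".
double estimateAvailability(double now, double startTime, double procTime) {
    return std::max(now, startTime + procTime);
}
```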

Simulation scenarios

We consider different simulation scenarios to answer the questions stated in Section 5.1.

Under a simulation scenario we understand a set of parameters which are expected to

impact the simulation results. Investigating the influence of these parameters and/or

their combinations is of interest in our research. We denote by Zfjsi simulation scenarios

for FJS scheduling, whereas for IPPS the notation Zippsi is used.

First, we investigate the simulation scenarios related to FJS scheduling. Our aim

is to discover the behavior of the proposed scheduling approaches and to find out

which simulation parameters are appropriate to perform simulation for the integrated

problem. We summarize the main characteristics of the considered simulation scenarios

for FJS scheduling in Table 5.1.

Table 5.1: Simulation scenarios for FJS.

Parameters          Zfjs1    Zfjs2    Zfjs3
Scheduling method   FIFO     SDR      LS(200k)
Consider tah        no       no       no

The values for t∆ and the other simulation-related parameters are defined in Subsection 5.5.2. We recall that the scheduling cycle is performed periodically every t∆ time units. The process plans for these simulation

scenarios are generated using the simple heuristic RPT-PP described in Subsection 4.4.1.

Scenarios Zfjs1 and Zfjs2 use FIFO and SDR to compute schedules, respectively. Scenario

Zfjs3 relies on the heuristic LS(200k) described in Chapter 3. We point out that the

considered simulation scenarios can be used as a reference when investigating the benefits

of applying the integrated heuristic. Next, we introduce simulation scenarios Zippsi for the integrated problem. Their parameters are summarized in Table 5.2.

Table 5.2: Simulation scenarios for IPPS.

Parameters             Zipps1   Zipps2   Zipps3
Scheduling strategy    S12+     S12+     S12+
Consider tah           no       no       yes
Preserve BOMs/routes   yes      no       yes

These scenarios

are selected based on preliminary experimentation with smaller simulation models. The IPPS cycle is performed periodically every t∆ time units. Scenario Zipps3 considers an additional planning horizon tah. The parameters S12+, t∆, tah, and others are described

in Subsection 5.5.2. Scenario Zipps2 is used to model a situation where BOMs and routes

can change.

5.5.2 Implementation Issues and Parameter Settings

All algorithms as well as the complete functionality are developed using the C++

programming language and follow the paradigm of object-oriented programming. The

simulation experiments are performed on a computer with an Intel Core i7-3630QM

processor and 20GB of main memory operated by Windows 7 Professional system.

The data layer is implemented as an extension module for AutoSched AP in form of

a DLL library. It communicates with the simulation software by the means of callback

functions. The client used for communication with the planning server is implemented

in form of a DLL as well. It conveniently provides interface functions which can be

called by the data layer to trigger the IPPS process. The client is based on the TCP/IP

technology available in the Qt 4.8.5 software library for C++ development.

The planner module is implemented as a standalone TCP/IP server application. The

implementation of the planning algorithms follows the architectural design described

in Subsection 5.4.3. The TCP/IP server technology is provided by the software

development library Qt 5.6. All DLLs encapsulating the heuristics are compiled using

this setup as well. We point out that the planning module is developed in a cross-platform manner, which makes it easy to port to other modern platforms such as Linux, Mac OS, or BSD.

We consider a simulation model which has in total five products and three orders per product at the beginning of the simulation. Thus, the average number of production steps at any moment of the simulation is around 500. There are nrep = 50 orders considered for each product during the simulation. In total, the number of orders reaches 750.

Based on trial and error, we select the simulation horizon sufficient to guarantee that all

orders are finished during the simulation. We choose Tsim = 15 simulated days, assuming that one model time unit equals 1 minute of real time. The simulation does

not use any shift calendars, i.e., the machines are available and operable at any time.

The model contains fifteen machines divided into five machine groups: two groups each

containing four machines, two groups each containing three machines, and one machine

group consisting of one machine.

At the beginning of the simulation, each order Ois has a release time ris selected

according to the discrete distribution ris ∼ DU [0, 10, 20, 30] time units.
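Sampling such a release time can be sketched as follows (the helper name is illustrative):

```cpp
#include <array>
#include <random>

// Sketch of sampling the initial release times r_is ~ DU[0, 10, 20, 30]:
// one of the four values is drawn with equal probability.
double sampleReleaseTime(std::mt19937& gen) {
    static const std::array<double, 4> values = {0.0, 10.0, 20.0, 30.0};
    std::uniform_int_distribution<std::size_t> idx(0, values.size() - 1);
    return values[idx(gen)];
}
```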


The interarrival times of the orders are constant and selected as tia = 180 time units. This parameter is chosen using a trial-and-error approach to avoid significant resource

overloads during the simulation in the considered model and to keep the flow factor

between approximately 1.5 and 2.

Following the computational results presented in Section 4.4, we consider the neighbor-

hood structure N2 •N1 •N3 on the process planning level. The relevant parameters can

be found in Subsection 4.4.3. We select itermax = 100 and nexpl = 4 for the search

algorithm. Our preliminary experiments reveal that the described rolling approach

is sensitive to the quality of the computed schedules. Therefore, we use a modified

scheduling strategy S12+ obtained from S12 (see Subsection 4.3.1) by increasing the

number of LS iterations:

S12+ = [0–90): LS(10k), [90–100): LS(20k), [100–100]: LS(200k).
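One possible reading of this notation, expressed as a sketch (the interpretation of the intervals as percentages of the overall search progress, with LS(200k) applied once at the very end, is an assumption):

```cpp
// Sketch of the staged strategy S12+: during the first 90% of the search,
// schedules are evaluated with LS(10k); during the last 10%, with LS(20k);
// the final candidate is polished with LS(200k).
int lsIterations(double progressPct) {
    if (progressPct < 90.0)  return 10000;
    if (progressPct < 100.0) return 20000;
    return 200000;  // final polish with LS(200k)
}
```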

When a machine out of the considered machine group becomes available, the next order

scheduled on this machine is selected for processing. Note that each operation for the

considered machine is scheduled at a certain point of time. We sort such operations

ascending based on their start times and consider the resulting sequence. The orders are

processed according to this sequence.
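This dispatching rule can be sketched as follows (structure names are illustrative):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch of the dispatching rule: the operations scheduled on a machine are
// sorted ascending by their planned start times, and waiting orders are
// then processed in that sequence.
struct ScheduledOp {
    std::string orderId;
    double start;  // planned start time in the computed schedule
};

std::vector<std::string> processingSequence(std::vector<ScheduledOp> ops) {
    std::sort(ops.begin(), ops.end(),
              [](const ScheduledOp& a, const ScheduledOp& b) {
                  return a.start < b.start;
              });
    std::vector<std::string> seq;
    for (const auto& op : ops) seq.push_back(op.orderId);
    return seq;
}
```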

In the rolling approach, the interval t∆ equals 120 time units. The additional horizon

duration tah is selected as tah = 60 time units.

For each simulation scenario, we perform five independent simulation runs since the

integrated heuristics have a stochastic nature. Note that in order to diminish this effect

of randomness in practical applications, the IPPS cycle for the considered period could be

repeated several times with the same input data, but different random number generator

seeds. The schedule with the best objective function value could be selected for execution.

After each simulation run, we consider the following performance measures:

• TWT - the total weighted tardiness of all orders;

• TT - the total tardiness of all orders;

• Avg. T - the average tardiness per order;

• Avg. T* - the average tardiness per tardy order;

• Avg. CT - the average cycle time per order;

• OT (%) - the percentage of on-time orders.

We calculate the cycle time of an order Ois as CTis = Cis − ris. Note that TWT is the

optimization criterion of interest. The results are averaged over the performed simulation

runs.
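These measures can be computed per run as in the following sketch (structure names are illustrative; an order with zero tardiness counts as on time):

```cpp
#include <algorithm>
#include <vector>

// Sketch of the per-run performance measures, assuming the completion time C,
// due date d, weight w, and release time r are known for every finished order.
struct OrderResult { double C, d, w, r; };

struct Measures { double TWT, TT, avgT, avgTstar, avgCT, onTimePct; };

Measures evaluate(const std::vector<OrderResult>& orders) {
    Measures m{0, 0, 0, 0, 0, 0};
    int tardy = 0;
    for (const auto& o : orders) {
        double T = std::max(0.0, o.C - o.d);  // tardiness of the order
        m.TWT   += o.w * T;                   // total weighted tardiness
        m.TT    += T;                         // total tardiness
        m.avgCT += o.C - o.r;                 // cycle time CT_is = C_is - r_is
        if (T > 0.0) ++tardy;
    }
    m.avgT = m.TT / orders.size();
    m.avgTstar = tardy > 0 ? m.TT / tardy : 0.0;  // avg. tardiness per tardy order
    m.avgCT /= orders.size();
    m.onTimePct = 100.0 * (orders.size() - tardy) / orders.size();
    return m;
}
```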


5.5.3 Computational Results

Simulation results for FJS

We start by presenting the results for simulation scenarios from Table 5.1 related to

FJS scheduling. We summarize these results in Table 5.4 in form of absolute values for

different performance measures described in Subsection 5.5.2. We recall that TWT is

the optimization criterion of interest.

Table 5.4: Simulation results for FJS scheduling.

Scenario   TWT      TT      Avg. T   Avg. T*   Avg. CT   OT (%)
Zfjs1      135452   60631   81       86        248       6
Zfjs2      11743    5389    7        17        170       58
Zfjs3      8936     8401    11       52        168       78

Table 5.4 indicates that the scheduling approaches based on FIFO and SDR still leave

a lot of room for improvement. For instance, scenario Zfjs1 is outperformed by the other

scenarios and does not appear to be a suitable option in case the solution quality is

important. Scenario Zfjs2 appears to perform well, especially with regard to TT, Avg.

T, and Avg. T*. However, scenario Zfjs3 is a much better choice as long as TWT is

considered. Besides the schedules with the smallest TWT values, Zfjs3 also provides the

smallest tardiness levels per (tardy) order as well as the highest level of on-time orders.

But such a solution quality improvement is more costly in terms of computing time. The

amount of time required to perform one simulation run using Zfjs2 is approximately 15

minutes, whereas one run of Zfjs3 requires around 1.5 hours.

Since Zfjs3 appears to deliver the best result in terms of TWT, we use it as a reference

when investigating the performance of the IPPS heuristic.

Simulation results for IPPS

Next, we investigate the benefits of applying the integrated heuristic compared to

the FJS scheduling approaches with simple selection of process plans. We present the

computational results for the simulation scenarios Zippsi summarized in Table 5.5.

Table 5.5: Improvements δ relative to Zfjs3 for IPPS.

Scenario   TWT    TT     Avg. T   Avg. T*   Avg. CT   OT (%)
Zipps1     0.45   0.48   0.48     0.40      0.03      0.03
Zipps2     0.52   0.51   0.51     0.43      0.03      0.04
Zipps3     0.30   0.30   0.30     0.30      -0.01     -0.01


We use Zfjs3 as a reference scenario and calculate the relative improvements δ as

described in Subsection 3.4.2.
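Assuming δ has the usual form (reference − value)/reference, which is consistent with the reported improvement of around 45% in TWT, the computation is:

```cpp
// Sketch of the assumed relative improvement measure: a positive delta
// means the scenario improves on the reference scenario Zfjs3.
double relativeImprovement(double referenceValue, double scenarioValue) {
    return (referenceValue - scenarioValue) / referenceValue;
}
```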

The results in Table 5.5 indicate that applying the integrated heuristic that relies on the

advanced search on the process planning level leads to significant performance increase

in a dynamic manufacturing environment with tardiness-based performance measures

such as TWT. Using scenario Zipps1 , the TWT values can be improved by around 45%.

The improvement of the TT is even larger and reaches the level of around 48%. Allowing

the heuristic to change BOMs and routes during every IPPS cycle, i.e., using scenario

Zipps2 brings additional benefits in terms of TWT, TT, Avg. T, and Avg. T*. At the

same time, there is also a minor increase in the number of on-time orders.

Surprisingly, extending the length of the planning horizon by tah = 60 time units leads to

a noticeable performance deterioration for Zipps3 when compared to Zipps1 . This behavior

can be explained by the different states of the orders. In the dynamic manufacturing

environment, there are orders which are already started and cannot change their BOMs

and routes, and there are new orders which do not have a process plan, yet. The

integrated heuristic tries to select best possible process plans for the orders. However, the

process plan assessment is performed based on the scheduling results of the corresponding

FJS problem. During this process, not only the new orders are taken into account but

the old ones as well. Apparently, the old orders can have a significant impact on the

resulting TWT. At the same time, the increased planning horizon leads to a larger

amount of operations which have to be scheduled at once. This can also decrease the

scheduling quality. As a result, high-quality process plans for the new orders can be

rejected due to the impact of the old orders. One possible remedy might be to separate

the old and the new orders by scheduling the former ones upfront. Afterwards, the IPPS

heuristic might be applied to the new orders where the FJS scheduling procedure appends

them to the previously generated schedule for the old orders. Another possibility might

be to consider only the operations within the planning horizon and to ignore the ones

exceeding the horizon’s limits. Such approaches increase the implementation complexity

of the production control system and are not considered in this work.

Because of the IPPS procedure in the scenarios Zippsi , the required computing time for

one simulation run is larger than in the case of the FJS-based scenarios. One simulation

run requires around seven hours, whereas one planning cycle of the integrated heuristic

has a duration between two and three minutes.


Chapter 6

Implementation Aspects in Form of a Multi-Agent System

A lot of manufacturing companies implement centralized production control systems.

However, the new challenges that the companies face on the market, as well as recent advances in information system technologies, suggest that a new paradigm has to be used for production in future manufacturing systems. In this chapter, we give an

overview of some approaches in the context of Industry 4.0. We are mainly interested in

describing how the algorithms developed in the previous chapters can be incorporated

into an agent-based system. Such an implementation is motivated in Section 6.1. First,

we consider the main market challenges for the companies in Subsection 6.1.1 and

discuss the related work in Subsection 6.1.2. Then we give an overview of HMS and MAS, the relation between them, and the PROSA reference architecture in Subsections 6.1.3–6.1.6. In Section 6.2, we present a set of software agents which can be used

to integrate the process planning and scheduling algorithms into a MAS. Different properties of the agents are described, as well as their behavior and the communication

between the agents. Finally, we briefly describe a MAS framework which can be used

for implementation in Subsection 6.2.8.

6.1 Motivation for an Agent-based Approach in Process

Planning and Scheduling

6.1.1 Challenges

Future manufacturing systems require a new generation of information technology and

a new paradigm of production. Nowadays, the market is subject to various disturbances, such as customer demand variations. The companies have to react to such


circumstances fast. A manufacturing company which aims to be successful on the market has to face many challenges:

• permanent introduction of new, more complex products;

• shortening the production cycle;

• reduction of time-to-market;

• dynamic reaction on demand changes;

• reduction of investment;

• individualization of products;

• flexibility in production;

• decentralization of decision making;

• further increase of mechanization and automation;

• digitalization and networking in the manufacturing environments.

Many of these challenges were already described over a decade ago, for example by

McFarlane and Bussmann (2000) and are still valid, whereas Lasi et al. (2014) describe

some new challenges which arise in view of recent advances in information systems. One of the most promising manufacturing paradigms aiming to help companies meet these requirements is holonic manufacturing (cf. Deen 2003). Within

this paradigm, production control systems can be implemented in form of MAS. In this

chapter, we present our vision of how the described IPPS heuristics can be implemented

in an agent-based manner. Note that the actual implementation is beyond the scope of

this research.

6.1.2 Related Work

Leung et al. (2010) propose an agent-based ACO for integrating process planning

and job shop scheduling. The search-based algorithm is incorporated into a MAS

platform: every ant is implemented as a software agent. Such an approach allows for a

computational distribution of the algorithm. Petronijevic et al. (2016) propose a multi-

agent methodology for IPPS. They consider part/job agents that represent single parts

which have to be manufactured; machine agents which share the information about

their state and the time required to perform different operations; an optimization agent

communicates with the machine agents and the job agents by informing them which

job has to be processed on which machine and at which time point. It coordinates

communication between the other agents. The optimization is based on the best

processing times communicated by the machine agents during the bidding process. It can

perform optimization as soon as new orders arrive. The agents are able to adapt to the

new circumstances occurring during the manufacturing process. Only linear process plans

are considered. The smallest processing time is used as a criterion for process selection.


The operations can be processed on alternative machines. Van Brussel et al. (1998)

propose a general reference architecture for modeling MAS in production environments.

The main types of agents are described:

• product,

• order, and

• resource agents.

Those are the decision-making agents. An additional type of staff agents is foreseen to support the decision-making process. Mönch and Stehli (2006) propose the framework

ManufAg for implementation of MAS for complex manufacturing. Different types of

agent hierarchies are discussed. As an example, the authors introduce FABMAS, a hierarchically organized MAS for production control in semiconductor manufacturing facilities.

Marin et al. (2013) propose a conceptual architecture based on intelligent services for

manufacturing support systems. This approach focuses on the need to support real-time

automated negotiation, planning and scheduling, and optimization within and across

factories. A service-oriented MAS approach is considered where software agents can

accomplish their tasks by using services offered by other agents. They can also offer

their services to other agents. An intelligent enterprise service bus is used to support the

interoperability among all these intelligent services provided by different tools. Small

lot sizes in the sense of product individualization are one of the key properties of the

considered production systems. Lasi et al. (2014) describe the fundamental concepts

of Industry 4.0. The future manufacturing systems can be seen as smart factories

consisting of completely digitalized and autonomous parts with an unprecedented level of

integration between information processing and physical tools, so called cyber-physical

systems. Among the most important properties of the future manufacturing systems

are self-organization and decentralization. Waschneck et al. (2016) give an overview

of different approaches in job shop scheduling in the context of Industry 4.0. Self-

organization and autonomy belong to the key features of the future production control

systems. The future manufacturing systems have to rely on adaptive, learning, and

flexible optimization approaches. An idea of applying hyper-heuristics and machine learning to combine existing heuristics into more advanced ones is described. In the context

of Industry 4.0, Queiroz et al. (2017) present an agent-based industrial cyber-physical

system. The authors propose to combine MAS and data analytics technologies to provide

effective algorithms and support data-driven decision making. Agents are capable of performing data analytics to enable dynamic, distributed, and collaborative process supervision and control.

6.1.3 Holonic Manufacturing Systems

The holonic manufacturing approach is intended to help production companies reach the described milestones. There are a number of possible benefits which HMS can provide


(cf. McFarlane and Bussmann 2003). Among them:

• “plug-and-play” approach to designing and operating a manufacturing system;

• fast reaction to different disturbances, e.g., machine breakdowns;

• fast and easy reconfigurability of the production system.

According to Bussmann (1998b), HMS is an organizing principle for structuring and controlling manufacturing tasks. A holon is an autonomous, cooperating agent which serves the purposes of transportation, transformation, and storage of physical and informational objects (cf. Mönch 2006). Every holon must have the following key

properties (cf. McFarlane and Bussmann 2000, McFarlane and Bussmann 2003):

• autonomy - capability of a manufacturing unit to create, control and execute its

own plans and strategies;

• cooperation - ability of a group of holons to develop mutually acceptable plans and

to execute them;

• self-organization - ability of holons to collect and arrange themselves in order to achieve some production goals. Organizations of holons are known as “holarchies”.

A holon has to be reconfigurable as well, i.e., it has to be possible to alter the function

of the manufacturing unit in a timely and cost-effective manner.

A typical holon consists of two parts (cf. Mařík et al. 2002):

• a hardware/equipment part (optional) able to physically influence the real world;

• a control part responsible for both controlling the hardware part and communication with other holons from the holarchy.

Summarizing, a general architecture of a holon is depicted in Figure 6.1.

[Figure omitted: a holon consists of a decision-making part with an inter-holon interface and a human interface, connected via an intra-holon interface to a physical control part and a physical element.]

Figure 6.1: General architecture of a holon.


6.1.4 PROSA Reference Architecture

Van Brussel et al. (1998) describe a general PROSA reference architecture for HMS.

PROSA stands for Product-Resource-Order-Staff Architecture and consists of three

main types of basic holons: order holons, product holons, and resource holons. The

architecture covers both hierarchical and heterarchical control approaches and introduces

such innovations as decoupling the system structure from the control algorithm and decoupling the logistical aspects from the technical ones. This opens the possibility of achieving more advanced hybrid control algorithms.

• An order holon is responsible for performing an assigned task correctly and on time.

It manages the physical product, the product state model and the logistical

information processing related to the job.

• A product holon holds the process and product knowledge to ensure the correct manufacturing of the product with sufficient quality. This holon acts as an information

server for the other holons in the HMS.

• A resource holon contains a production resource of the production system and an

information processing part that controls the resource.

A general scheme of an HMS is depicted in Figure 6.2. The product holon communicates

production knowledge to the order holon and process knowledge to the resource holon.

The order holon and the resource holon exchange process execution knowledge since both

of them are responsible for physical manufacturing of products.

[Figure omitted: the order, product, and resource holons and the production, process, and process execution knowledge exchanged between them, as described above.]

Figure 6.2: General scheme of an HMS.

Besides the described holon types, the architecture foresees staff holons. Staff holons

are considered only as support holons that provide expert information to other holons.

Staff holons act as expert advisors for the decision-making holons.


6.1.5 Multi-Agent Systems

According to Bussmann (1998b), a MAS can be considered as a software technology for

realizing information processing in HMS. An agent is an autonomous entity that can

carry out some actions on behalf of another agent or a user (cf. Mönch and Stehli 2006).

Software agents share several fundamental features with holons: agents are able to act

autonomously and are able to communicate with each other in order to make joint

decisions. Every agent tries to reach its own goals. Unlike holons, an agent is not

composed of several other agents (cf. Mařík et al. 2002).

The main motivation for using MAS for manufacturing control can be found, for example, in Mönch (2006). MAS can perform well in dynamically changing, turbulent

environments bringing such advantages as:

• easy modeling of naturally modular production systems, since agents exist as

objects;

• agents are able to make decentralized decisions with local data management and

are usually suitable for decentralized production processes;

• multi-agent production systems are easier to reconfigure and adapt to different

changes and turbulence that can appear during the manufacturing process.

As in the case of holons, there are three main types of agents (cf. Mönch 2006):

• order agent - represents some concrete order in the production system and decides

on which machines (machine groups) the order has to be executed;

• resource agent - models some resource of the production system, contains necessary

information for machine load and makes corresponding decisions;

• product agent - contains product knowledge which is represented by process plans

and quality definitions.

Based on the PROSA reference architecture, decision-making and staff agents are

distinguished. Each type has different responsibilities. Staff agents are intended to support the decision-making agents.

The decision-making agents are responsible for:

• preparing decisions;

• making decisions;

• providing information support for other decision-making agents;

• activation of other decision-making agents;

• time-driven requests of results from staff agents and of services from other decision-making agents;

• treatment of exceptions;


• checking the list of current or future agent activities for relevant entries.

The main tasks of the staff agents are:

• prepare problem solution;

• calculate internal parameters of the problem solution methods;

• feed the problem solution method with data and parameters;

• solve the problem;

• interrupt the solution process in an event- or time-driven manner;

• provide the results to other agents;

• treat exceptions.

Order, resource and product agents are decision-making agents. An example of a staff

agent can be an implementation of the scheduling algorithm (cf. Mönch and Stehli 2006).

Every agent consists of two parts (cf. Mařík et al. 2002):

• a wrapper - responsible for inter-agent communication and real-time reactivity;

• a body - the reasoning part of the agent, usually a piece of software, integrated by

the wrapper into a structure of agents.
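This wrapper/body split can be sketched as follows; all class and method names are illustrative assumptions rather than part of any cited framework.

```python
# Sketch of the wrapper/body agent structure (hypothetical names).
class AgentBody:
    """Reasoning part of the agent, e.g., an encapsulated scheduling heuristic."""
    def handle(self, message):
        # Placeholder reasoning: derive a decision from the message type.
        return {"decision": "processed " + message["type"]}

class AgentWrapper:
    """Communication part: receives messages and dispatches them to the body."""
    def __init__(self, body):
        self.body = body
        self.outbox = []  # replies waiting to be sent to other agents

    def receive(self, message):
        reply = self.body.handle(message)
        self.outbox.append(reply)
        return reply

agent = AgentWrapper(AgentBody())
agent.receive({"type": "order_arrival"})
```

The wrapper can be reused unchanged for every agent type, while the body carries the agent-specific reasoning.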

Holonic and agent-based approaches for manufacturing control have many similarities. However, the purposes of these paradigms are not identical. As stated in (Bussmann 1998a), HMS is an organizing principle for structuring and controlling manufacturing processes, whereas MAS is a software technology for realizing information

processing in HMS.

6.1.6 Holonic Agents

The similarities between holons and agents suggest deriving a holon model based on holonic agents (cf. Mařík et al. 2002). According to this model, a holonic agent consists

of two parts:

• a holonic part associated with the physical layer of the manufacturing system;

• a software agent for higher level, real-time or non-real-time intelligent decision

making.

Taking this into account, a representation of holonic agents is given in Figure 6.3.

Holonic agents are able to organize their mutual decision-making and their work by

means of communication. There are three types of communication:

• intra-holon - communication between software agents and the physical part;


[Figure omitted: three holons, each consisting of an agent and a physical part; the agents of the holons communicate with each other, and each agent communicates with its own physical part.]

Figure 6.3: Holonic agent architecture.

• inter-holon - communication between agent-based parts of multiple holons;

• direct - communication between physical parts of the neighboring holons.

Such an architecture allows holonic agents to behave like holons and like software agents

at the same time.

6.2 Integration Aspects for the Developed Heuristics in

Multi-Agent Systems

6.2.1 Agent Types

In this section, we discuss the possibilities of integrating the heuristics and approaches

presented in Chapter 3, Chapter 4, and Chapter 5 into a MAS. The proposed MAS is

based on the PROSA reference architecture.

We consider the following agents:

• factory agent - coordinates the execution processes, process planning and

scheduling, and the interactions between other agents, if needed;

• resource agents - manage the manufacturing resources;

• product agents - contain information about products and the corresponding

manufacturing possibilities;

• order agents - manage separate orders in the manufacturing system;

• process planner agent - implements logic to generate process plans;

• scheduler agent - computes schedules according to the selected process plans.

Next, we describe each agent, its behavior and how the agents interact. For each agent,

we summarize its main characteristics, namely its purpose, the data this agent contains,

the possible actions of the agent, and its outgoing and incoming communication.
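This agent roster can be captured in a small registry; the role names follow the list above, while the code and the agent identifiers are purely illustrative.

```python
from enum import Enum

class AgentRole(Enum):
    """Agent roles of the proposed MAS (cf. the list above)."""
    FACTORY = "factory"
    RESOURCE = "resource"
    PRODUCT = "product"
    ORDER = "order"
    PROCESS_PLANNER = "process_planner"
    SCHEDULER = "scheduler"

# A registry mapping each role to the agent instances created for it.
registry = {role: [] for role in AgentRole}
registry[AgentRole.FACTORY].append("factory-1")    # a single coordinating agent
registry[AgentRole.RESOURCE].append("mg-milling")  # one agent per machine group
registry[AgentRole.ORDER].append("order-0001")     # one agent per order
```

Such a registry would typically be maintained by the factory agent, which creates and destroys order, product, and resource agents as described below.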


6.2.2 Factory Agent

The factory agent is intended to coordinate interactions between other agents and to allow for a fast reaction to different events occurring in the dynamic manufacturing system, taking into account data collected from all other agents. The factory agent is a decision-making agent. It decides when process planning and scheduling have to be performed. The main characteristics of the factory agent are summarized in Table 6.1.

Table 6.1: Characteristics of the factory agent.

Purpose:
• coordination of interactions between the different agents;
• implementation of the rolling horizon approach.

Data:
• parameters defining the time points for (re)scheduling and process (re)planning activities;
• rules defining the system’s behavior in case of events like order arrivals or machine breakdowns.

Actions:
• react in real time to any state changes in the production system;
• inform other agents about the state changes;
• trigger process planning and/or scheduling according to the rolling horizon approach;
• create/manage agents for new orders, products, and resources.

Outgoing communication:
• Product agents: information about the new orders.
• Process planner agent: notification triggering the process (re)planning and (re)scheduling activities; notification requesting to publish the recent process plans.
• Scheduler agent: notification triggering the (re)scheduling activities; notification requesting to publish the recent schedule.

Incoming communication:
• Resource agents: information about state changes of the manufacturing resources; information on the availability of the resources, e.g., calendar changes.
• Product agents: information about new processing possibilities for the existing products; information about inactive processing possibilities for the existing products; information about phasing-out the existing products; notifications about a need to perform process planning and scheduling.
• Order agents: information about order modification or cancellation.
• Process planner agent: notification about the finished process planning cycle.
• Scheduler agent: notification about the finished scheduling cycle.


The factory agent can analyze the current state of the shop floor, available possibilities

to manufacture the products, the arrival of new orders, or the cancellation of existing ones and, depending on the situation, trigger process (re)planning and/or (re)scheduling. For

example, a new IPPS cycle can be started in case one of the bottleneck machines breaks

down or is closed for maintenance. Regarding orders, the factory agent can decide to

start process planning and scheduling only after a certain number of new orders arrive,

or some orders are canceled. If one of the manufacturing alternatives becomes infeasible

for any reason, such as component shortage, (re)planning/(re)scheduling is required to

adapt the schedule to the new circumstances.

This agent also implements the time-based decomposition in the form of the rolling horizon approach as described in Chapter 5. The advantage of using the factory agent is that it can drive the complete production control system taking into account local information available from other agents. The system can react to different events quickly based on a complete picture of the shop floor. At the same time, process planning, scheduling, and processing decisions can be taken in a distributed manner.
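The triggering logic of the factory agent can be sketched as a combination of event-driven rules and a periodic rolling-horizon timer. The thresholds, default values, and method names below are assumptions for illustration, not values taken from the thesis.

```python
class FactoryAgent:
    """Sketch of the factory agent's IPPS triggering logic (illustrative only)."""

    def __init__(self, order_batch_size=5, horizon_minutes=240):
        self.order_batch_size = order_batch_size  # batch new orders before replanning
        self.horizon_minutes = horizon_minutes    # periodic rolling-horizon trigger
        self.pending_orders = 0
        self.minutes_since_last_cycle = 0

    def on_new_order(self):
        """Count new orders; trigger IPPS once a whole batch has arrived."""
        self.pending_orders += 1
        if self.pending_orders >= self.order_batch_size:
            return self._trigger()
        return False

    def on_bottleneck_breakdown(self):
        """A bottleneck breakdown always forces a new IPPS cycle."""
        return self._trigger()

    def on_tick(self, minutes):
        """Advance the clock; trigger when the rolling horizon elapses."""
        self.minutes_since_last_cycle += minutes
        if self.minutes_since_last_cycle >= self.horizon_minutes:
            return self._trigger()
        return False

    def _trigger(self):
        # In a full system this would notify the process planner and scheduler agents.
        self.pending_orders = 0
        self.minutes_since_last_cycle = 0
        return True
```

Batching order arrivals before triggering avoids running the comparatively expensive IPPS cycle for every single event, while breakdowns of bottleneck machines are handled immediately.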

6.2.3 Resource Agents

Following Mönch (2006), each machine group can be modeled by a separate resource

agent. This agent is a decision-making agent and has the properties presented in

Table 6.2. A resource agent analyzes the schedule received from the scheduler agent,

and the orders or subparts waiting in the queue for their processing. Depending on the

internal rules, the agent can make the following decisions:

• Which order/subpart has to be selected for processing next. Depending on the

internal logic, the agent can violate the existing schedule and select any other

available order. This is often the case in real production systems if the next

scheduled order/subpart is not available due to technical, stochastic, or any other

reasons. By default, the agent has to follow the schedule. If there is a blocking issue, then the next order in the schedule has to be selected.

• Which machine has to be used for processing the selected order. The scheduled

machine can break down or run slower than expected. The agent has to be able

to estimate which machine is the best choice. Either the scheduled machine or

one of the alternative machines can be selected. By default, the agent has to keep

the originally planned machine for the selected order. If this is not possible then

the agent can use an estimated completion time of the operation and select the machine capable of finishing the operation at the earliest point in time.
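The default machine-selection rule described above can be sketched as follows; the data layout and the machine names are simplifying assumptions made for illustration.

```python
def select_machine(scheduled_machine, machines, ready_time, proc_times):
    """Keep the originally planned machine if it is available; otherwise pick the
    machine with the earliest estimated completion time (illustrative sketch)."""
    if machines[scheduled_machine]["up"]:
        return scheduled_machine
    completion = {
        m: max(ready_time, info["free_at"]) + proc_times[m]
        for m, info in machines.items() if info["up"]
    }
    return min(completion, key=completion.get)

# Hypothetical data: the scheduled machine M1 is down, so the agent falls back
# to the alternative machine that can finish the operation earliest.
machines = {"M1": {"up": False, "free_at": 0},
            "M2": {"up": True, "free_at": 10},
            "M3": {"up": True, "free_at": 0}}
choice = select_machine("M1", machines, ready_time=0,
                        proc_times={"M1": 5, "M2": 5, "M3": 20})  # selects "M2"
```

Here `free_at` stands for the time at which a machine finishes its current order, and `proc_times` for the (statistically estimated) processing times per machine.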

The resource agents can request information about the estimated processing times

for different operations from the product agents, if required. This action can happen

immediately before the resource agent decides which machine has to be used

for processing of the next operation. It is advantageous to collect statistics about the real


performance of the resources. For example, this is the case for the processing times since

the values provided by ERP systems are often only rough estimates. The statistical data saved in historical databases are used for the evaluation of realistic processing times.

Information about the load of the machines can help to identify bottlenecks. These data

can be sent to the product and scheduler agents to improve the IPPS results.

Table 6.2: Characteristics of resource agents.

Purpose:
• machine group management;
• local processing decisions.

Data:
• status of each machine;
• recent schedule relevant for the machines belonging to this machine group;
• current order on each machine;
• list of orders waiting in the queue for their processing;
• information about the availability of the machines;
• information and rules to decide which order/subpart has to be selected next on which machine;
• statistics about the performance of the machines.

Actions:
• react in real time to any state changes on the machines;
• decide which order/subpart has to be processed on which machine;
• inform other agents about the state changes;
• collect production statistics and communicate them to other agents.

Outgoing communication:
• Product agents: statistics about the real processing times for different operations; statistics about machine loads helping to identify bottlenecks more easily.
• Order agents: information about any state changes related to the orders, e.g., start or finish; estimated time required for processing on the assigned machine.
• Factory agent: information about any machine state changes; information about machine availability changes.
• Scheduler agent: information on the machine loads helping to identify bottlenecks easily.

Incoming communication:
• Product agents: assignment of different operations to machine groups; estimated processing times of different operations.
• Order agents: information about order/subpart arrival/departure; information about the orders, such as their due dates, priorities etc.
• Scheduler agent: schedule relevant for this machine group.


6.2.4 Product Agents

Product agents contain information about alternative manufacturing processes. Each

product is modeled by a dedicated agent with the properties summarized in Table 6.3.

Table 6.3: Characteristics of product agents.

Purpose:
• source of information about available manufacturing processes;
• local decisions about the used manufacturing processes.

Data:
• description of alternative BOMs for the current product;
• description of alternative routes for different subparts (shared among all product agents);
• estimated costs relevant for selecting one or another manufacturing alternative;
• rules for selecting manufacturing alternatives, such as the preferred BOM and subpart routes;
• performance statistics, such as the TWT, to support mutually beneficial decision making.

Actions:
• provide information about different ways of manufacturing the corresponding product;
• decide which BOM and subpart routes have to be used for manufacturing the product’s orders;
• collect statistics about the used manufacturing alternatives and communicate them to other agents.

Outgoing communication:
• Resource agents: assignment of different operations to the machine groups; estimated processing times for different operations on different machines.
• Order agents: information about the selected BOMs and routes for the corresponding orders.
• Process planner agent: information about the selected BOM and subpart routes.
• Factory agent: information about any changes in the BOMs and routes, i.e., new BOMs etc.; information about phasing-out of the existing products; notification about a need to run process planning and scheduling.

Incoming communication:
• Resource agents: notifications about the status of the manufacturing resources, e.g., availability; estimates of processing times of different operations based on statistical data; information about the capacity load of the machine groups.
• Order agents: notifications about modifications or cancellation of the existing orders.
• Process planner agent: request to select a new process plan for this product; performance indicator(s) describing how good the previous decision was; new process plan resulting from the solution process.
• Factory agent: notifications about new order arrivals.


Product agents belong to the decision-making agents responsible for selecting the

manufacturing processes. Based on the state changes received from the resource agents,

a product agent can assess the feasibility of the selected process plan in real time. Due to

potential machine breakdowns or overloads, the quality of these plans might deteriorate considerably. Hence, the product agent might intend to select an alternative process plan

which is feasible or could lead to a better load balancing. For this purpose the product

agent notifies the factory agent about a need to perform a new planning cycle. If new

orders arrive, or some orders are canceled, the product agent can recalculate the priorities

of different process plans to ensure that they are more suitable for the amount of orders

to be produced. Product agents can be notified by the process planner agent whether

a new decision on the manufacturing alternatives for the current product is required.

This notification does not necessarily mean that the product agent will select any other

BOM or subpart routes. The product agent can consider the status of the manufacturing

resources, the cost of different process alternatives and their preferences, the performance

indicator(s) measuring how good the previous decision was. Based on this information

and the statistics, the agent can make decisions regarding a new BOM and subpart

routes. In view of the planning algorithm presented in Chapter 4, the product agent can

follow the requests of the process planner by selecting alternative feasible process plans

for this product and communicating them back for further scheduling.
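A product agent's choice among alternative process plans could be sketched as a weighted score over estimated cost and bottleneck load. The scoring scheme and the field names are assumptions made for illustration and are not taken from the thesis.

```python
def choose_process_plan(plans, cost_weight=0.5, load_weight=0.5):
    """Select the feasible plan with the lowest weighted score (illustrative)."""
    def score(plan):
        return cost_weight * plan["cost"] + load_weight * plan["bottleneck_load"]
    feasible = [p for p in plans if p["feasible"]]
    return min(feasible, key=score) if feasible else None

# Hypothetical alternatives: an infeasible plan is never chosen, however cheap.
plans = [
    {"bom": "bom_a", "feasible": True, "cost": 10.0, "bottleneck_load": 0.9},
    {"bom": "bom_b", "feasible": True, "cost": 9.0, "bottleneck_load": 0.5},
    {"bom": "bom_c", "feasible": False, "cost": 1.0, "bottleneck_load": 0.1},
]
best = choose_process_plan(plans)  # selects "bom_b"
```

The feasibility flag would be derived from the resource status notifications, while the cost and load estimates would come from the agent's own data and the statistics communicated by the resource agents.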

6.2.5 Order Agents

Order agents are decision-making agents which contain information about the orders

existing in the manufacturing system. They define priorities of the orders and manage

information about the customer agreements such as by when the orders have to be

produced. Each order is modeled using a dedicated order agent. The main characteristics

of the order agents are described in Table 6.4.

When a new order arrives in the production system, a new agent responsible for this

order is created by the factory agent. The factory agent, depending on its internal logic,

can take one of the following actions:

• Perform no special actions and let the order agent wait until the next cycle of

IPPS.

• Trigger the corresponding product agent to provide a process plan to the order

agent. Next, the scheduler agent is notified about the order and a new schedule

considering the current state of the manufacturing resources and the complete

range of orders is generated. The relevant part of the schedule is communicated to

the new order agent.

• Trigger the IPPS process performed by the process planner and the scheduler agents.


In any of these cases, the order agent is informed about the process and the

complementary schedule it has to follow. The order starts its journey through the

manufacturing environment. As soon as the order’s subparts start moving towards the

next planned machine groups the order agent informs the relevant resource agents and,

possibly, estimates the time when the subparts will arrive. When a subpart arrives at

the machine group the resource agent receives a notification.

Table 6.4: Characteristics of order agents.

Purpose:
• managing order-relevant data;
• ensuring that the corresponding subparts are moved towards the right machine groups;
• local decisions about the used processes.

Data:
• order-relevant data such as the priority, the due date etc.;
• information about the selected process plan this order has to follow;
• information about the current state of the order, i.e., the execution progress.

Actions:
• provide planning-relevant information about the order;
• collect and provide information about the current execution progress;
• estimate the completion time of the order based on the current status of the system;
• communicate with transportation units to ensure that the order’s subparts travel to the right machine group at the right points of time;
• decide whether the order has to follow the newly computed process plan or keep the original one.

Outgoing communication:
• Resource agents: notification about the upcoming arrival of the order’s subpart; information about the order, such as its due date, priority etc.
• Product agents: notification about modifications or cancellation of the existing order.
• Process planner agent: information about any status changes related to the order, i.e., changed priority, due date, or release time; information about the execution state of the order.
• Factory agent: information about modifications or cancellation of the order.

Incoming communication:
• Resource agents: information about status changes of the relevant machines; estimates of processing times of different operations based on statistical data; estimates of waiting times at the relevant machines or machine groups.
• Product agents: information about new process planning decisions.
• Scheduler agent: schedule relevant for this order.


We point out that the factory agent could wait until several new order agents are created

and only then trigger the IPPS cycle.

As soon as one of the machines starts processing the order’s subparts, the order agent

registers the relevant execution information such as the execution start time, the machine

assigned for execution etc. When the processing is completed, the order agent collects

the relevant statistical data on the finished operation and the order/subpart starts its

journey to the next planned machine group.

While traveling between the manufacturing resources, the order can undergo certain

changes. For example, process planning and/or scheduling can occur every t4 minutes

to ensure good performance of the complete manufacturing system (see Chapter 5).

Depending on the reasoning part of the order agent, it might make a decision either

to accept or to reject new process plans. Preserving the selected process plans is often

desired since preparation of supplementary materials required during the manufacturing

processes can require a high effort. Hence, frequently changing process plans can lead to instability on the shop floor. In that case, such orders keep their dedicated process plans and do not accept

alternative processes anymore. If the order agent accepts a new process plan then its

order starts following the corresponding schedule immediately.

As soon as the order finishes its last operation, its agent processes the relevant statistical

data and sends them to the historical database. The data can be used for reporting

and statistical calculations. The order agent can continue existing until the order itself

leaves the production area. Afterwards, this agent is destroyed.

6.2.6 Process Planner Agent

The process planner agent is responsible for computing high-quality process plans. It is a

staff agent which encapsulates the algorithm described in Subsection 4.3.1 and supports

the decision-making agents by proposing high-quality process plans. We summarize the

main characteristics of the process planner agent in Table 6.5. The process planner acts

according to the implemented algorithm. Following the descriptions in Subsection 4.3.1,

the process planner selects a subset of product agents in each iteration of the VNS

algorithm and requests new process plans. The product agents communicate back

the new, possibly preferred, process plans. This information is communicated to the

scheduling agent. Based on the scheduling results, the process planner can compare the

newly generated process plan(s) and decide whether to accept them.

When the process planning procedure is completed, the process planner agent informs

the factory agent. On request, the selected local process plans are communicated to the

corresponding product agents as well as to the order agents.

We point out that the solution architecture described in Subsection 5.4.3 can be used to

implement the process planner and the scheduler agents. Note that in the case of coupled

process planning and scheduling (cf. Subsection 4.3.2), it is sufficient to have only one

agent that performs both activities simultaneously.

Table 6.5: Characteristics of the process planner agent.

Purpose:
compute a high-quality global process plan through the selection of local process plans;
support decision making for the product and the resource agents.

Data:
rules to perform process planning;
parameters defining the solution procedure on the process planning level;
the global process plan selected in the previous planning step;
new global process plan(s) to be assessed.

Actions:
trigger generation of new process plans according to the internal algorithm;
parallelize the algorithms depending on the situation;
provide new process plans to the scheduler agent;
compare generated process plans and select the best one;
communicate the resulting process plan.

Outgoing

Product agents:
trigger selection of alternative local process plans;
local process plans resulting from the solution process;
performance measure(s), such as TWT, for the complete production system.

Scheduler agent:
new process plan(s) to be assessed;
request to communicate the recent schedule to the resource agents and the order agents.

Factory agent: notification about the finished solution process.

Incoming

Product agents: local process plans proposed by the corresponding product agents.

Order agents:
information about the orders, such as their priorities;
current execution states of the orders.

Scheduler agent: performance indicator(s) for estimating the quality of the generated process plans.

Factory agent:
notifications to start the solution process;
list of product agents and order agents to be considered during process planning;
a request to communicate the selected process plans.


6.2.7 Scheduler Agent

The scheduler agent is responsible for performing temporal assignments of different

production steps to the available resources. It is a staff agent which can encapsulate the

algorithms described in Section 3.3 and supports the process planner agent as well as

the other agents. The characteristics of the scheduler agent are summarized in Table 6.6.

Table 6.6: Characteristics of the scheduler agent.

Purpose:
assign operations to machines, define their sequences and start times;
support the process planner agent when assessing newly generated process plans;
provide recent scheduling decisions to other agents.

Data:
rules for performing scheduling;
parameters defining the solution procedure on the scheduling level;
the recent production schedule;
performance indicators for the schedule, such as TWT.

Actions:
compute schedules based on the input from the process planning agent;
parallelize the algorithms depending on the situation;
reschedule orders/subparts on request based on the current status of the machining resources;
provide relevant performance indicators to the process planner;
communicate relevant parts of the schedule to the order and the resource agents.

Outgoing

Resource agents: relevant parts of the recent schedule.

Order agents: relevant parts of the recent schedule.

Process planner agent: key performance indicators relevant for assessing the generated process plans.

Factory agent: notification about the finished scheduling.

Incoming

Resource agents: state of the machines and their availability.

Process planner agent:
notifications to start (re)scheduling;
global process plans.

Factory agent:
notifications to start (re)scheduling;
list of orders which have to undergo scheduling;
a request to communicate the schedule.

The scheduler agent can be triggered either by the process planner agent or, if only

rescheduling is required, by the factory agent. At the moment the factory agent

decides to start a new IPPS cycle, the scheduler agent queries the state and the

availability of machines from the resource agents. In case of rescheduling only, the

scheduler agent additionally requests the global process plan from the process planner

agent.

Next, information from the relevant order agents is analyzed to build a disjunctive

solution graph (see Section 3.1). The order agents are asked about the possibility to

modify their current process plans. If an order does not have any process plan then

the one provided by the process planner agent is inserted into the disjunctive graph.

However, if an order already has a feasible process plan and it is not desired to change

it then this plan is used.
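The insertion of a process plan into the disjunctive graph can be pictured as adding a chain of conjunctive (precedence) arcs from a source to a sink node. The following sketch uses an assumed adjacency-list representation and omits the disjunctive machine arcs and all timing data:

```python
# Sketch: insert an order's process plan into a disjunctive graph as a chain
# of conjunctive arcs from source to sink. Disjunctive (machine) arcs and
# timing data are omitted; the adjacency-list representation is assumed.
def insert_process_plan(graph, order_id, operations):
    prev = "source"
    for op in operations:
        node = (order_id, op)
        graph.setdefault(prev, []).append(node)  # conjunctive precedence arc
        prev = node
    graph.setdefault(prev, []).append("sink")

graph = {}
insert_process_plan(graph, "order-1", ["turn", "drill", "polish"])
```

Each order contributes one such chain; the scheduler then resolves the machine-related disjunctions on top of this structure (see Section 3.1).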

As soon as the disjunctive graph for FJS scheduling is created, the scheduler agent

analyzes the current state of the manufacturing system and assigns the order subparts

to the machines, defining their sequences and start times. Note that

the scheduler agent can evaluate several alternative process plans in parallel to ensure

better utilization of the provided hardware. Next, the TWT over all scheduled orders is

calculated and sent to the process planner agent. As soon as the scheduling activities

are finished, the factory agent is notified. On request, the relevant parts of the schedule

are communicated to the resource agents and the order agents.
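The TWT value sent to the process planner agent is the total weighted tardiness over all scheduled orders; a small illustrative computation with made-up data:

```python
# Total weighted tardiness: TWT = sum_j w_j * max(0, C_j - d_j), where C_j is
# the completion time and d_j the due date of order j. Data are illustrative.
def total_weighted_tardiness(orders):
    return sum(w * max(0, c - d) for w, c, d in orders)

orders = [
    (2, 14, 10),  # weight 2, completed at 14, due at 10 -> tardiness 4
    (1, 8, 9),    # completed before the due date -> tardiness 0
    (3, 20, 15),  # tardiness 5
]
print(total_weighted_tardiness(orders))  # 2*4 + 1*0 + 3*5 = 23
```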

6.2.8 Multi-Agent System Framework

We described different agents relevant for organizing the algorithms presented in

Chapter 3, Chapter 4, and Chapter 5 in form of a MAS. The proposed system has the

following key features important for MAS (cf. Monch and Stehli 2006):

• representation of decision rules;

• modeling of process restrictions;

• support of discrete event simulation.

The described set of agents can be implemented based on ManufAg, a MAS developed

by Monch and Stehli (2006) for production control of complex manufacturing systems.

One of the important features of this framework is that it provides a message transport

system for inter-agent communication. Ontologies and content languages are important

for meaningful communication between different agents. An ontology has to define domain-specific

concepts. The content language can rely on XML objects derived from the

concepts described in the ontology. Agents can be created, destroyed, and managed

within the agent management system (AMS). A service-blackboard can be used to find

agents that offer a required service.
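As a sketch of how an XML content object derived from an ontology concept might look, using only Python's standard library; the "ScheduleUpdate" concept and its fields are hypothetical examples, not part of the ManufAg framework description:

```python
# Sketch of an inter-agent message whose content is an XML object derived
# from an ontology concept. The "ScheduleUpdate" concept and its fields are
# hypothetical and chosen purely for illustration.
import xml.etree.ElementTree as ET

def make_message(sender, receiver, concept, fields):
    msg = ET.Element("message", sender=sender, receiver=receiver)
    content = ET.SubElement(msg, concept)   # ontology concept as content root
    for name, value in fields.items():
        ET.SubElement(content, name).text = str(value)
    return ET.tostring(msg, encoding="unicode")

xml_msg = make_message("scheduler", "order-42", "ScheduleUpdate",
                       {"machine": "M3", "start": 120})
```

The message transport system would carry such strings between agents, and the receiver would parse them back against the shared ontology.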

Chapter 7

Conclusions and Future Research Directions

Scheduling problems for FJS with TWT objective were studied in this work. An

important contribution is the iterative LS heuristic that is able to find high-quality

solutions in a reasonable amount of time. This heuristic is based on the efficient DTO

approach. It is proven that the LS is capable of finding optimal solutions for an FJS problem

with any regular criterion, including TWT. The SBH is hybridized with this heuristic.

We also propose a VNS scheme that can be applied to solve either the subproblems

for the SBH, or the entire scheduling problem. Because all of the heuristics are able

to find high-quality solutions fast, they can be applied to large-size problem instances.

Several effects are observed in an extensive computational study of the heuristics. The

number of machines available for each operation has a significant influence on the

TWT improvement when the advanced heuristics are compared to list scheduling. The

benefit of using more sophisticated heuristics is less significant at a high flexibility level.

In addition, the TWT improvement is smaller for the case of identical machines. In

this situation, the SBH-type heuristics outperform the remaining approaches. When

unrelated machines are considered, the LS-based approaches perform better for tight

and moderate due date ranges. In the case of loose due dates, the SBH-type approaches

perform best. We conclude that the SBH, hybridized with LS approaches, is able to

outperform the heuristics that are purely based on LS, if there is enough computing

time available. However, if the computing time is limited then the LS approach is more

favorable.

Metaheuristics for large-scale IPPS problems are proposed in this work. We consider

alternative BOMs and routes on the process planning level. Machine flexibility in

terms of parallel machines is taken into account. We propose a new VNS-based

heuristic to select high-quality process plans. The assessment of each process plan

requires the solution of an FJS scheduling problem. We compare this heuristic with

a memetic algorithm based on the GA proposed by Park and Choi (2006). In contrast


to the VNS approach, the memetic algorithm simultaneously selects process plans and

determines schedules. Our VNS procedure decouples the selection of process plans

from scheduling and performs IPPS by iterating between these two activities. This

approach is beneficial because it allows distributing process plan selection and scheduling

over different computers. Hence, the proposed iterative approach is well-suited for

embedding into a MAS. Because our approach requires solving a large number of

computationally hard scheduling instances, we apply different scheduling strategies to

reduce the computational burden. Since it is unlikely that process plans leading to

solutions with near-optimal TWT values are determined at the beginning of the search process,

we propose to spend less effort on finding high-quality schedules in the first iterations of

the VNS. As the number of performed iterations increases, schedules of higher quality

are computed. In addition, parallelizing the shaking step of the proposed VNS approach

helps to find a compromise between solution quality and computing time. The overhead

due to the parallel implementation is minor, whereas doubling the number of process

plans assessed in parallel yields solutions of nearly the same quality with roughly half

the number of VNS iterations and the computing time.
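The iteration scheme described above can be sketched as follows; the effort schedule and the neighborhood function are illustrative placeholders, not the exact VNS from Chapter 4:

```python
# Sketch of the decoupled IPPS iteration: alternate between selecting a
# candidate process plan (shaking/neighborhood step) and assessing it by
# scheduling, with cheap scheduling early and more effort in later
# iterations. All names and the effort schedule are illustrative.
def ipps(neighbor, schedule_twt, initial_plan, iterations=20):
    best_plan = initial_plan
    best_twt = schedule_twt(best_plan, effort=0)
    for it in range(iterations):
        effort = it * 3 // iterations          # 0 (cheap) .. 2 (thorough)
        candidate = neighbor(best_plan)        # process-planning step
        twt = schedule_twt(candidate, effort)  # scheduling step
        if twt < best_twt:                     # keep only improving plans
            best_plan, best_twt = candidate, twt
    return best_plan, best_twt
```

A toy run with a one-dimensional "plan" whose best value is 5, `ipps(lambda p: p - 1, lambda p, effort: abs(p - 5), 10)`, converges to `(5, 0)`.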

We perform computational experiments on benchmark instances from the literature

and on randomly generated new moderate- and large-size problem instances. To the

best of our knowledge, large-size problem instances, although they are important in

real-world settings, are not considered in the literature with the rare exception of

Weintraub et al. (1999). We demonstrate that the VNS-based approach outperforms

the memetic algorithm under various process conditions.

The performance of the proposed integrated heuristic is assessed in a dynamic

manufacturing environment using simulation. The simulation models are generated

based on the static problem instances presented in Chapter 4. We consider deterministic

order interarrival times and fixed processing times on the machines. It is assumed that

no machine breakdowns can happen. We implement the rolling horizon approach which

is commonly used in many simulation studies in the literature. Our study is based

to a large extent on the simulation framework proposed in (Monch et al. 2003). The

framework consists of a production control module and a simulation module. These

modules interact according to predefined rules.

The computational experiments provide several interesting insights on the behavior of

the integrated heuristic. The most important insight is that this heuristic can potentially

improve the performance of the manufacturing environment significantly in terms of

TWT. The presented results are obtained for different simulation scenarios with multiple

repetitions over longer periods of simulated time. It turns out that the performance

of the integrated heuristic is more sensitive to the quality of the generated schedules

in the dynamic manufacturing environment than in the static one. The possibility to

modify process plans of the orders during each IPPS cycle leads to the best results.

However, it is also possible to achieve decent performance improvement when the once


selected process plans are preserved. The results clearly reveal the benefit of applying

the advanced scheduling approaches over SDR.

It turns out that increasing the length of the planning horizon can decrease the

performance of the integrated heuristic. This is caused by the increased number of

operations that have to be scheduled and by the mix of already existing and new orders

in the system. Because the old orders impact the performance measure as well, this can

cause the process planning algorithm to select process plans of lower quality for the new

orders. Additional actions can be taken to overcome this issue, namely scheduling the

operations that are within the planning horizon and ignoring the ones that are beyond

the horizon’s limits. However, the implementation of this approach can be challenging

since the local due dates for the operations have to be estimated. As we know from

Chapter 3, it is more difficult to estimate the due dates appropriately in the case of

unrelated machines.

The increasing complexity and scale of modern manufacturing systems, as well as the

growing product range combined with decreasing lot sizes, create many challenges for

production control systems and require them to become more intelligent and easily

adaptable to new, dynamically changing circumstances. Self-organization, autonomy,

scalability, local data handling and decision making in view of global goals, and

maintainability are among the most important challenges for the future manufacturing

control systems. To face the described requirements, the concept of Industry 4.0 emerged,

embracing a number of contemporary automation, data exchange, and manufacturing

technologies. MAS is considered to be an appropriate technology to deal with the

new challenges. We describe on a high level a MAS that can incorporate the methods

developed in Chapter 3 and Chapter 4 for IPPS. This system is suitable for the dynamic

manufacturing environment considered in Chapter 5. We design this MAS based on the

widely accepted PROSA reference architecture proposed by Van Brussel et al. (1998).

There are several directions for future research. Because many real-life problems can

often be formulated as FJS scheduling problems, we are interested in speeding up the

iterative LS by reusing the already existing parts of the longest paths while evaluating

the transitions. It seems to be possible to apply bounded incremental algorithms as

proposed by Katriel et al. (2005). The efficient move evaluation based on the lower

bounds for the operation completion times described by Garcia-Leon et al. (2015) could

be considered. In addition, moves involving multiple operations are desirable. Feasibility

of moves by NSLS is established based on the heads of the operations, whereas the

study of Garcia-Leon et al. (2015) reveals that the number of possible moves significantly

increases if a criterion combining the operation heads and tails is applied. Such a

criterion is expected to improve the performance of the LS presented in our work.

It is also interesting to compare the performance of the two LS approaches in view

of solution quality and computing time using the same set of problem instances.

Another related question is how to increase the quality of the ready time and due


date estimates in the disjunctive graph. Poor estimates have a strong impact on the

quality of schedules for the subproblems that contain unrelated machines. We believe

that it is worthwhile to consider techniques from parallel computing, especially graphics

processing units (GPUs), to speed up the computations (cf. Czapinski and Barnes 2011).

It is interesting to tackle FJS problems with uncertain release and processing times by

sampling techniques. Extending the heuristics to consider batching problems, like the

one described in (Sobeyko and Monch 2015), is also an interesting research direction.

We observe that parallelizing the VNS approach leads to improvements in terms of

solution quality and computing effort. We expect that the performance can be further

significantly improved when the parallelization parameter nexpl becomes much larger.

This is attractive, especially with respect to applying emerging technologies such as

cloud computing or GPU programming in manufacturing environments. In this work,

we consider solution methods for the integrated problem in view of TWT as the objective

function. However, it is interesting to consider additional objectives like the production

cost when selecting different process plans. In this case a multi-objective optimization

can be performed. An alternative is to optimize a weighted linear combination of the

objective functions.
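A weighted linear combination of, for example, TWT and production cost could be scalarized as follows; the weight and the sample values are hypothetical:

```python
# Illustrative scalarization of two objectives into a single weighted
# linear combination; the weight alpha is a hypothetical tuning parameter.
def combined_objective(twt, cost, alpha=0.75):
    return alpha * twt + (1 - alpha) * cost

print(combined_objective(20.0, 100.0))  # 0.75*20 + 0.25*100 = 40.0
```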

The state of a real-life manufacturing environment can change between the moment

when the scheduling procedures start and the moment when the final schedule becomes

available, thus making it slightly outdated on a short horizon. It is interesting to

implement logic able to manage such situations in real time and perform slight

adaptations of the schedule, if required.

In our simulation study, we consider a deterministic dynamic manufacturing environment.

But many effects in real-life applications are of a stochastic nature. An important

future research direction is the performance assessment of the integrated heuristic for

stochastic order interarrivals, operation processing times as well as machine breakdowns.

Another direction is the simulation-based benchmarking of the presented approach

against other state-of-the-art heuristics for IPPS. An interesting question is whether

it is sufficient to perform the entire IPPS cycle only under certain conditions, such as

a certain number of new orders arriving in the system, rather than on a fixed time basis. This

might potentially save a certain amount of time required to compute new schedules,

while having a negligible impact on the results. Scheduling only the operations within

the planning horizon might also yield a significant improvement. In this case, it is

important to have a good estimate of the expected processing times for each operation.

Otherwise, the number of operations considered during the scheduling process might be either too

high or too low.

It is very interesting to implement the proposed algorithms in form of a MAS based

on the PROSA architecture (cf. Van Belle et al. 2012) since this allows for distributed

approaches for IPPS. It is expected within the Industry 4.0 vision that the next-generation

information systems in manufacturing will be agent-based (cf. Lasi et al. 2014). The


multi-agent framework ManufAg developed by Monch and Stehli (2006) seems to be an

appropriate technology to implement the multi-agent approach described in Chapter 6.

Implementation of the proposed system of agents based on the ManufAg framework,

development of the appropriate ontology and content language for communication

between the agents, and simulation-based assessment of the MAS are the clear research

directions for the future.

Finally, taking into account the variety of different problems and solution approaches

for IPPS, it might be beneficial to develop a generic framework that allows modeling

and implementing different solution methods with low effort. This requires a common

generic data model and a common approach for data storing. Possibilities for applying

different simulation frameworks like the one proposed by Monch and Rose (2004) have

to be foreseen. It might be an advantage for the framework to support multi-agent

architectures in its core.


Bibliography

J. Adams, E. Balas, and D. Zawack. The shifting bottleneck procedure for job shop

scheduling. Management Science, 34(3):391–401, 1988.

N. Ahmad, A.F.M.A. Haque, and A.A. Hasin. Current trend in computer aided

process planning. In Proceedings of the 7th Annual Paper Meet and 2nd International

Conference, volume 25, page 27, 2001.

L. Alting and H. Zhang. Computer aided process planning: the state-of-the-art survey.

International Journal of Production Research, 27(4):553–585, 1989.

H. Aytug, K. Kempf, and R. Uzsoy. Measures of subproblem criticality in decomposition

algorithms for shop scheduling. International Journal of Production Research, 41(5):

865–882, 2003.

K. R. Baker and J. J. Kanet. Job shop scheduling with modified due dates. Journal of

Operations Management, 4(1):11–22, 1983.

A. Balakrishnan and J. Geunes. Requirements planning with substitutions: exploiting

bill-of-materials flexibility in production planning. Manufacturing & Service

Operations Management, 2(2):166–185, 2000.

E. Balas and A. Vazacopoulos. Guided local search with shifting bottleneck for job shop

scheduling. Management Science, 44(2):262–275, 1998.

J. Bank and F. Werner. Heuristic algorithms for unrelated parallel machine scheduling

with a common due date, release dates, and linear earliness and tardiness penalties.

Mathematical and Computer Modelling, 33(4):363–383, 2001.

D. Behnke and M. J. Geiger. Test instances for the flexible job shop scheduling problem

with work centers. Technical report, Helmut-Schmidt-Universitat Hamburg, 2012.

P. Brandimarte. Routing and scheduling in a flexible job shop by tabu search. Annals

of Operations Research, 41(3):157–183, 1993.

P. Brandimarte and M. Calderini. A hierarchical bicriterion approach to integrated

process plan selection and job shop scheduling. International Journal of Production

Research, 33(1):161–181, 1995.


R. Braune and G. Zapfel. Shifting bottleneck scheduling for total weighted tardiness

minimization—a computational evaluation of subproblem and re-optimization

heuristics. Computers & Operations Research, 66:130–140, 2016.

T. Bray, J. Paoli, C. M. Sperberg-McQueen, E. Maler, and F. Yergeau. Extensible

markup language (XML). World Wide Web Journal, 2(4):27–66, 1997.

P. Brucker and R. Schlie. Job-shop scheduling with multi-purpose machines. Computing,

45(4):369–375, January 1991.

K. Bulbul. A hybrid shifting bottleneck-tabu search heuristic for the job shop total

weighted tardiness problem. Computers & Operations Research, 38(6):967–983, 2011.

S. Bussmann. An agent-oriented architecture for holonic manufacturing control. In

Proceedings of First International Workshop on IMS, Lausanne, Switzerland, pages

1–12, 1998a.

S. Bussmann. Agent-oriented programming of manufacturing control tasks. In Yves

Demazeau, editor, ICMAS, pages 57–63. IEEE Computer Society, 1998b.

R. Capek, P. Sucha, and Z. Hanzalek. Production scheduling with alternative process

plans. European Journal of Operational Research, 217(2):300–311, 2012.

K.-P. Chen, M. S. Lee, P. S. Pulat, and S. A. Moses. The shifting bottleneck procedure

for job-shops with parallel machines. International Journal of Industrial and Systems

Engineering, 1(1):244–262, 2006.

M. Czapinski and S. Barnes. Tabu search with two approaches to parallel flowshop

evaluation on CUDA platform. Journal of Parallel and Distributed Computing, 71(6):

802–811, 2011.

S. Dauzere-Peres and J. Paulli. An integrated approach for modeling and solving the

general multiprocessor job-shop scheduling problem using tabu search. Annals of

Operations Research, 70:281–306, 1997.

S. Dauzere-Peres and C. Pavageau. Extensions of an integrated approach for multi-

resource shop scheduling. IEEE Transactions on Systems, Man, and Cybernetics,

Part C (Applications and Reviews), 33(2):207–213, 2003.

S. Dauzere-Peres, W. Roux, and J.B. Lasserre. Multi-resource shop scheduling with

resource flexibility. European Journal of Operational Research, 107(2):289–305, 1998.

S. M. Deen. Agent-based manufacturing: advances in the holonic approach. Springer,

2003.

E. Demirkol, S. Mehta, and R. Uzsoy. A computational study of shifting bottleneck

procedures for shop scheduling problems. Journal of Heuristics, 3(2):111–137, 1997.


R. Driessel and L. Monch. Variable neighborhood search approaches for scheduling jobs

on parallel machines with sequence-dependent setup times, precedence constraints,

and ready times. Computers & Industrial Engineering, 61(2):336–345, 2011.

R. Drießel and L. Monch. An integrated scheduling and material-handling approach

for complex job shops: a computational study. International Journal of Production

Research, 50(20):5966–5985, 2012.

I. Essafi, Y. Mati, and S. Dauzere-Peres. A genetic local search algorithm for minimizing

total weighted tardiness in the job-shop scheduling problem. Computers & Operations

Research, 35(8):2599–2616, August 2008.

K. R. Fall and W. R. Stevens. TCP/IP illustrated, volume 1: the protocols. Addison-

Wesley, 2011.

P. Fattahi, M. S. Mehrabad, and F. Jolai. Mathematical modeling and heuristic

approaches to flexible job shop scheduling problems. Journal of Intelligent

Manufacturing, 18:331–342, 2007.

L.M. Gambardella and M. Mastrolilli. Effective neighborhood functions for the flexible

job shop problem. Journal of Scheduling, 3(3):3–20, 1996.

J. Gao, L. Sun, and M. Gen. A hybrid genetic and variable neighborhood descent

algorithm for flexible job shop scheduling problems. Computers & Operations

Research, 35(9):2892–2907, 2008.

A. Garcia-Leon, S. Dauzere-Peres, and Y. Mati. Minimizing regular criteria in the flexible

job-shop scheduling problem. In Proceedings of the 7th Multidisciplinary International

Conference on Scheduling : Theory and Applications (MISTA 2015), 25 - 28 Aug 2015,

Prague, Czech Republic, pages 443–456, 8 2015.

M. R. Garey and D. S. Johnson. Computers and intractability: a guide to the theory of

NP-completeness. W. H. Freeman, San Francisco, 1979.

M. A. Gonzalez, C. R. Vela, and R. Varela. Scatter search with path relinking for the

flexible job shop scheduling problem. European Journal of Operational Research, 245

(1):35–45, 2015.

Y. W. Guo, W. D. Li, A. R. Mileham, and G. W. Owen. Applications of particle swarm

optimization in integrated process planning and scheduling. Robotics and Computer

Integrated Manufacturing, 25(2):280–288, 2009.

P. Hansen and N. Mladenovic. Variable neighborhood search: principles and applications.

European Journal of Operational Research, 130:449–467, 2001.

P. Hansen and N. Mladenovic. Variable neighborhood search, pages 145–184. Springer

US, Boston, MA, 2003.


S. M. K. Hasan, R. Sarker, D. Essam, and D. Cornforth. Memetic algorithms for solving

job-shop scheduling problems. Memetic Computing, 1(1):69–83, 2009.

N. B. Ho and J. C. Tay. Evolving dispatching rules for solving the flexible job-shop

problem. In Proceedings Congress on Evolutionary Computation, pages 2848–2855,

2005.

N. B. Ho, J. C. Tay, and E. M. Lai. An effective architecture for learning and evolving

flexible job-shop schedules. European Journal of Operational Research, 179(2):316–

333, 2007.

S. H. Huang, H.-C. Zhang, and M. L. Smith. A progressive approach for the integration

of process planning and scheduling. IIE Transactions, 27(4):456–464, 1995.

J. Hurink, B. Jurisch, and M. Thole. Tabu search for the job-shop scheduling problem

with multiple-purpose machines. OR Spectrum, 15:205–215, 1994.

G. K. Hutchinson and K. A. Pflughoeft. Flexible process plans: their value in flexible

automation systems. International Journal of Production Research, 32(3):707–719,

1994.

M. Ivanovic, M. Vidakovic, Z. Budimac, and D. Mitrovic. A scalable distributed

architecture for client and server-side software agents. Vietnam Journal of Computer

Science, 4(2):127–137, 2017.

P. Ivens and M. Lambrecht. Extending the shifting bottleneck procedure to real-life

applications. European Journal of Operational Research, 90(2):252–268, 1996.

J. J. Kanet and J. C. Hayya. Priority dispatching with operation due dates in a job

shop. Journal of Operations Management, 2(3):167–175, 1982.

J. J. Kanet and X. Li. A weighted modified due date rule for sequencing to minimize

weighted tardiness. Journal of Scheduling, 7(4):261–276, 2004.

I. Katriel, L. Michel, and P. V. Hentenryck. Maintaining longest paths incrementally.

Constraints, 10(2):159–183, 2005.

K. Kemppainen. Priority scheduling revisited - dominant rules, open protocols, and

integrated order management. PhD thesis, Helsinki School of Economics, 2005.

B. Khoshnevis and Q. M. Chen. Integration of process planning and scheduling functions.

Journal of Intelligent Manufacturing, 2(3):165–175, 1991.

K.-H. Kim and P.J. Egbelu. Scheduling in a production environment with multiple

process plans per job. International Journal of Production Research, 37(12):2725–

2753, 1999.


Y. K. Kim, K. Park, and J. Ko. A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Computers & Operations Research, 30(8):1151–1171, 2003.

S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.

T. Kis, D. Kiritsis, P. C. Xirouchakis, and K.-P. Neuendorf. A Petri net model for integrated process and job shop production planning. Journal of Intelligent Manufacturing, 11(2):191–207, 2000.

S. Kreipl. A large step random walk for minimizing total weighted tardiness in a job shop. Journal of Scheduling, 3(3):125–138, 2000.

S. Kreipl and M. Pinedo. Planning and scheduling in supply chains: an overview of issues in practice. Production and Operations Management, 13(1):77–92, 2004.

H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, and M. Hoffmann. Industry 4.0. Business & Information Systems Engineering, 6(4):239–242, 2014.

E. L. Lawler. A "pseudopolynomial" algorithm for sequencing jobs to minimize total tardiness. Annals of Discrete Mathematics, 1:331–342, 1977.

D. Y. Lee and F. DiCesare. Scheduling flexible manufacturing systems using Petri nets and heuristic search. IEEE Transactions on Robotics and Automation, 10(2):123–132, 1994.

H. Lee and S.-S. Kim. Integration of process planning and scheduling using simulation-based genetic algorithms. The International Journal of Advanced Manufacturing Technology, 18(8):586–590, 2001.

Y. H. Lee, C. S. Jeong, and C. Moon. Advanced planning and scheduling with outsourcing in manufacturing supply chain. Computers & Industrial Engineering, 43(1):351–374, 2002.

J. K. Lenstra, A. H. G. Rinnooy Kan, and P. Brucker. Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1:343–362, 1977.

C. W. Leung, T. N. Wong, K.-L. Mak, and R. Y. K. Fung. Integrated process planning and scheduling by an agent-based ant colony optimization. Computers & Industrial Engineering, 59(1):166–180, 2010.

W. D. Li and C. A. McMahon. A simulated annealing-based optimization approach for integrated process planning and scheduling. International Journal of Computer Integrated Manufacturing, 20(1):80–95, 2007.

X. Li, L. Gao, C. Zhang, and X. Shao. A review on integrated process planning and scheduling. International Journal of Manufacturing Research, 5(2):161–180, 2010a.

X. Li, C. Zhang, L. Gao, W. Li, and X. Shao. An agent-based approach for integrated process planning and scheduling. Expert Systems with Applications, 37(2):1256–1264, 2010b.

Z. Li and M. G. Ierapetritou. Rolling horizon based planning and scheduling integration with production capacity consideration. Chemical Engineering Science, 65(22):5887–5900, 2010.

K. Lian, C. Zhang, L. Gao, and X. Li. Integrated process planning and scheduling using an imperialist competitive algorithm. International Journal of Production Research, 50(15):4326–4343, 2012.

Q. Lihong and L. Shengping. An improved genetic algorithm for integrated process planning and scheduling. International Journal of Advanced Manufacturing Technology, 58(5):727–740, 2012.

V. Mařík, M. Fletcher, and M. Pěchouček. Holons & agents: recent developments and mutual impacts. Multi-Agent Systems and Applications II, pages 89–106, 2002.

C. Marin, L. Mönch, P. Leitão, P. Vrba, D. Kazanskaia, V. Chepegin, L. Liu, and N. Mehandjiev. A conceptual architecture based on intelligent services for manufacturing support systems. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013, pages 4749–4754, 2013.

H. B. Marri, A. Gunasekaran, and R. J. Grieve. Computer-aided process planning: a state of art. The International Journal of Advanced Manufacturing Technology, 14(4):261–268, 1998.

S. J. Mason, J. W. Fowler, and M. W. Carlyle. A modified shifting bottleneck heuristic for minimizing total weighted tardiness in complex job shops. Journal of Scheduling, 5(3):247–262, 2002.

S. J. Mason, S. Jin, and C. M. Wessels. Rescheduling strategies for minimizing total weighted tardiness in complex job shops. International Journal of Production Research, 42(3):613–628, 2004.

S. J. Mason, J. W. Fowler, W. M. Carlyle, and D. C. Montgomery. Heuristics for minimizing total weighted tardiness in complex job shops. International Journal of Production Research, 43(10):1943–1963, 2005.

Y. Mati, S. Dauzère-Pérès, and C. Lahlou. A general approach for optimizing regular criteria in the job-shop scheduling problem. European Journal of Operational Research, 212(1):33–42, 2011.

D. C. McFarlane and S. Bussmann. Developments in holonic production planning and control. Production Planning & Control, 11(6):522–536, 2000.

D. C. McFarlane and S. Bussmann. Holonic manufacturing control: rationales, developments and open issues. In Agent-based Manufacturing, pages 303–326. Springer, 2003.

L. Mönch. Agentenbasierte Produktionssteuerung komplexer Produktionssysteme [Agent-based production control of complex production systems]. Deutscher Universitätsverlag, Gabler, Wiesbaden, 2006.

L. Mönch. Simulation-based benchmarking of production control schemes for complex manufacturing systems. Control Engineering Practice, 15(11):1381–1393, 2007.

L. Mönch and R. Drießel. A distributed shifting bottleneck heuristic for complex job shops. Computers & Industrial Engineering, 49(3):363–380, 2005.

L. Mönch and O. Rose. Shifting-Bottleneck-Heuristik für komplexe Produktionssysteme: softwaretechnische Realisierung und Leistungsbewertung [Shifting bottleneck heuristic for complex production systems: software implementation and performance evaluation]. In Proceedings Multi-Konferenz Wirtschaftsinformatik, Teilkonferenz Quantitative Methoden in ERP und SCM, DSOR Contributions to Information Systems, volume 2, 2004.

L. Mönch and M. Stehli. ManufAg: a multi-agent-system framework for production control of complex manufacturing systems. Information Systems and e-Business Management, 4(2):159–185, 2006.

L. Mönch and J. Zimmermann. Improving the performance of dispatching rules in semiconductor manufacturing by iterative simulation. In Proceedings of the 2004 Winter Simulation Conference, volume 2, pages 1881–1887, 2004.

L. Mönch and J. Zimmermann. Simulation-based assessment of machine criticality measures for a shifting bottleneck scheduling approach in complex manufacturing systems. Computers in Industry, 58(7):644–655, 2007.

L. Mönch and J. Zimmermann. A computational study of a shifting bottleneck heuristic for multi-product complex job shops. Production Planning and Control, 22(1):25–40, 2011.

L. Mönch, O. Rose, and R. Sturm. A simulation framework for the performance assessment of shop-floor control systems. Simulation, 79(3):163–170, 2003.

L. Mönch, H. Balasubramanian, J. W. Fowler, and M. E. Pfund. Heuristic scheduling of jobs on parallel batch machines with incompatible job families and unequal ready times. Computers & Operations Research, 32:2731–2750, 2005.

L. Mönch, R. Schabacker, D. Pabst, and J. W. Fowler. Genetic algorithm-based subproblem solution procedures for a modified shifting bottleneck heuristic for complex job shops. European Journal of Operational Research, 177(3):2100–2118, 2007.

L. Mönch, J. W. Fowler, S. Dauzère-Pérès, S. J. Mason, and O. Rose. Scheduling semiconductor manufacturing operations: problems, solution techniques, and future challenges. Journal of Scheduling, 14(6):583–595, 2011.

J. Mula, R. Poler, J. P. Garcia-Sabater, and F. C. Lario. Models for production planning under uncertainty: a review. International Journal of Production Economics, 103(1):271–285, 2006.

I. M. Ovacik and R. Uzsoy. Rolling horizon procedures for dynamic parallel machine scheduling with sequence-dependent setup times. International Journal of Production Research, 33(11):3173–3192, 1995.

D. Pacino and P. Van Hentenryck. Mathematical modeling and heuristic approaches to flexible job shop scheduling problems. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 1997–2002, 2011.

B. C. Park, E. S. Park, B. K. Choi, B. H. Kim, and J. H. Lee. Simulation based planning and scheduling system for TFT-LCD fab. In Proceedings of the 2008 Winter Simulation Conference, pages 2271–2276, 2008.

B. J. Park and H. R. Choi. A genetic algorithm for integration of process planning and scheduling in a job shop. In AI 2006: Advances in Artificial Intelligence, pages 647–657. Springer, 2006.

B. J. Park, H. R. Choi, and H. S. Kim. A hybrid genetic algorithm for the job shop scheduling problems. Computers and Industrial Engineering, 45(4):597–613, 2003.

D. J. Pearce and P. H. J. Kelly. A dynamic topological sort algorithm for directed acyclic graphs. ACM Journal of Experimental Algorithmics, 11, 2006.

J. Petronijevic, M. Petrovic, N. Vukovic, M. Mitic, B. Babic, and Z. Miljkovic. Integrated process planning and scheduling using multi-agent methodology. Applied Mechanics & Materials, 834:193–198, 2016.

F. Pezzella, G. Morganti, and G. Ciaschetti. A genetic algorithm for the flexible job-shop scheduling problem. Computers & Operations Research, 35(10):3202–3212, 2008.

M. Pfund, J. W. Fowler, A. Gadkari, and Y. Chen. Scheduling jobs on parallel machines with setup times and ready times. Computers & Industrial Engineering, 54(4):764–782, 2008.

R. K. Phanden, A. Jain, and R. Verma. Integration of process planning and scheduling: a state-of-the-art review. International Journal of Computer Integrated Manufacturing, 24(6):517–534, 2011.

R. K. Phanden, A. Jain, and R. Verma. An approach for integration of process planning and scheduling. International Journal of Computer Integrated Manufacturing, 26(4):284–302, 2013.

M. Pinedo and M. Singer. A shifting bottleneck heuristic for minimizing the total weighted tardiness in a job shop. Naval Research Logistics (NRL), 46(1):1–17, 1999.

M. L. Pinedo. Scheduling: theory, algorithms, and systems. Springer, New York, third edition, 2008.

J. Queiroz, P. Leitão, and E. Oliveira. Industrial cyber physical systems supported by distributed advanced data analytics. In T. Borangiu, D. Trentesaux, A. Thomas, P. Leitão, and J. B. Oliveira, editors, Service orientation in holonic and multi-agent manufacturing, pages 47–59, Cham, 2017. Springer International Publishing.

B. Scholz-Reiter, T. Hildebrandt, and Y. Tan. Effective and efficient scheduling of dynamic job shops - combining the shifting bottleneck procedure with variable neighbourhood search. CIRP Annals-Manufacturing Technology, 62(1):423–426, 2013.

C. R. Scrich, V. A. Armentano, and M. Laguna. Tardiness minimization in a flexible job shop: a tabu search approach. Journal of Intelligent Manufacturing, 15(1):103–115, 2004.

S. Sethi and G. Sorger. A theory of rolling horizon decision making. Annals of Operations Research, 29(1):387–415, 1991.

X. Shao, X. Li, L. Gao, and C. Zhang. Integration of process planning and scheduling - a modified genetic algorithm-based approach. Computers & Operations Research, 36(6):2082–2096, 2009.

M. J. Sims. An introduction to planning and scheduling with simulation. In Winter Simulation Conference Proceedings, pages 67–69. IEEE, 1997.

O. Sobeyko and L. Mönch. Grouping genetic algorithms for solving single machine multiple orders per job scheduling problems. Annals of Operations Research, 235(1):709–739, 2015.

O. Sobeyko and L. Mönch. Heuristic approaches for scheduling jobs in large-scale flexible job shops. Computers & Operations Research, 68:97–109, 2016.

O. Sobeyko and L. Mönch. Integrated process planning and scheduling for large-scale flexible job shops using metaheuristics. International Journal of Production Research, 55(2):392–409, 2017.

K. Sourirajan and R. Uzsoy. Hybrid decomposition heuristics for solving large-scale scheduling problems in semiconductor wafer fabrication. Journal of Scheduling, 10(1):41–65, 2007.

N. Sugimura, R. Shrestha, Y. Tanimizu, and K. Iwamura. A study on integrated process planning and scheduling system for holonic manufacturing. Springer, 2007.

W. Tan and B. Khoshnevis. Integration of process planning and scheduling - a review. Journal of Intelligent Manufacturing, 11(1):51–63, 2000.

Y.-L. Tsai, C.-F. You, J.-Y. Lin, and K.-Y. Liu. Knowledge-based engineering for process planning and die design for automotive panels. Computer-Aided Design and Applications, 7(1):75–87, 2010.

J. Van Belle, J. Philips, O. Ali, B. S. Germain, H. Van Brussel, and P. Valckenaers. A service-oriented approach for holonic manufacturing control and beyond. Studies in Computational Intelligence, 402(1):1–20, 2012.

H. Van Brussel, J. Wyns, P. Valckenaers, L. Bongaerts, and P. Peeters. Reference architecture for holonic manufacturing systems: PROSA. Computers in Industry, 37(3):255–274, 1998.

A. P. J. Vepsäläinen and T. E. Morton. Priority rules for job shops with weighted tardiness costs. Management Science, 33(8):1035–1047, 1987.

S. Voß and D. L. Woodruff. Introduction to computational optimization models for production planning in a supply chain. Springer, second edition, 2006.

M. Wall. GAlib: A C++ library of genetic algorithm components. Mechanical Engineering Department, Massachusetts Institute of Technology, 87:54, 1996.

L. Wang, S. Keshavarzmanesh, H. Y. Feng, and R. O. Buchal. Assembly process planning and its future in collaborative manufacturing: a review. International Journal of Advanced Manufacturing Technology, 41(1):132–144, 2009.

B. Waschneck, T. Altenmüller, T. Bauernhansl, and A. Kyek. Production scheduling in complex job shops from an Industry 4.0 perspective: a review and challenges in the semiconductor industry. In R. Kern, G. Reiner, and O. Bluder, editors, SAMI@iKNOW, volume 1793 of CEUR Workshop Proceedings. CEUR-WS.org, 2016.

A. Weintraub, D. Cormier, T. Hodgson, R. King, J. Wilson, and A. Zozom. Scheduling with alternatives: a link between process planning and scheduling. IIE Transactions, 31(11):1093–1102, 1999.

T. N. Wong, C. W. Leung, K. L. Mak, and R. Y. K. Fung. An agent-based negotiation approach to integrate process planning and scheduling. International Journal of Production Research, 44(7):1331–1351, 2006.

D. Wu and M. Ierapetritou. Hierarchical approach for production planning and scheduling under uncertainty. Chemical Engineering and Processing, 46(11):1129–1140, 2007.

Y. Yang, S. Kreipl, and M. Pinedo. Heuristics for minimizing total weighted tardiness in flexible flow shops. Journal of Scheduling, 3(2):89–108, 2000.

H.-C. Zhang and M. E. Merchant. IPPM - a prototype to integrate process planning and job shop scheduling functions. CIRP Annals-Manufacturing Technology, 42(1):513–518, 1993.

R. Zhang and C. Wu. A hybrid approach to large-scale job shop scheduling. Applied Intelligence, 32(1):47–59, 2010a.

R. Zhang and C. Wu. A divide-and-conquer strategy with particle swarm optimization for the job shop scheduling problem. Engineering Optimization, 42(7):641–670, 2010b.

R. Zhang, S. K. Ong, and A. Y. C. Nee. A simulation-based genetic algorithm approach for remanufacturing process planning and scheduling. Applied Soft Computing, 37:521–532, 2015.

H. Zhou, W. Cheung, and L. C. Leung. Minimizing weighted tardiness of job-shop scheduling using a hybrid genetic algorithm. European Journal of Operational Research, 194(3):637–649, 2009.

N. Zribi, I. Kacem, A. El-Kamel, and P. Borne. Minimizing the total tardiness in a flexible job-shop. In World Automation Congress, 2006.

N. Zribi, I. Kacem, A. El Kamel, and P. Borne. Assignment and scheduling in flexible job-shops by hierarchical optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(4):652–661, 2007.