
Examples


Example models

There are many built-in example models in Warteschlangensimulator. The example models explain how the different station types work. You can load any example directly via the Examples submenu in the File menu.

Model generator

Additionally, there is a model generator (also available from the file menu, as the next menu item) which will create complete queueing models automatically for you. All you need to do is specify some basic properties: number of client sources, number of process stations, type of workload (low, medium, heavy), queueing discipline (FIFO, LIFO, random), etc.

List of all example models available in Warteschlangensimulator

Erlang C comparison model


This model represents a simple M/M/1 system. The inter-arrival times and the service times are exponentially distributed; there is one operator. The choice of E[I]=100 seconds and E[S]=80 seconds results in an operator utilization of 80%. The simulation results can be compared directly with the analytical values of the Erlang C formula.
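
As a quick cross-check (not part of the model file itself), the analytical values for this M/M/1 setting can be computed directly; this is a minimal sketch using the parameters stated above, with illustrative variable names:

```python
# Minimal sketch (plain Python, not part of the model file): analytical
# M/M/1 values for the parameters stated above. Variable names are illustrative.
EI = 100.0   # mean inter-arrival time E[I] in seconds
ES = 80.0    # mean service time E[S] in seconds

rho = ES / EI                    # utilization: 0.8
EW  = rho / (1.0 - rho) * ES     # mean waiting time E[W]: 320 s
EV  = EW + ES                    # mean residence time E[V]: 400 s
ENQ = rho**2 / (1.0 - rho)       # mean queue length E[NQ]: 3.2
EN  = rho / (1.0 - rho)          # mean number of clients in the system E[N]: 4

print(f"rho={rho:.0%}  E[W]={EW:.0f}s  E[V]={EV:.0f}s  E[NQ]={ENQ:.1f}  E[N]={EN:.1f}")
```

With a single operator the Erlang C formula reduces to these M/M/1 expressions, so the simulation results should come out close to E[W] ≈ 320 seconds and E[N] ≈ 4.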


Simple start model


This model can be used as a starting point for your own initial experiments with Warteschlangensimulator.


Call center model


This example shows how a complex call center system consisting of several caller groups, several sub-call centers, impatience, repeaters and forwarding can be modeled in Warteschlangensimulator.


Restaurant as queueing system


This example model illustrates the behavior of guests in a restaurant. It looks at the steps of food selection, preparation, serving, consumption and payment. Different resources are required for each of the different steps.


Construction site traffic lights


A construction site traffic light that controls traffic through a single-lane area is an example of campaign production: The longer the respective traffic light phases are, the less often it is necessary to switch between directions and the more time can ultimately be used productively. However, the longer the phases are, the longer arriving cars have to wait on average until the traffic light for their direction turns green.


Patient treatment at the doctor's office


In this example, the processes in a hospital emergency department are simulated. Patients are treated by doctors and nurses who are occupied for different lengths of time with one treatment each. The performance indicators of the system could be improved by changing the load balancing.


Multi-stage production


This example model depicts a multi-stage production process consisting of pre-processing, main process and post-processing. Different distributions for the service times are used at the various process stations.


Client types with different priorities


In this example model, clients of two different types arrive at a service station. Clients are generally served according to the FIFO principle, i.e. the client with the longest waiting time is served first. However, clients of type B, who are to be prioritized over clients of type A, are given a virtual head start in the queue. This head start can be configured in the model; its effect on the difference between the average waiting times of the two client types is displayed as a diagram.


Queue with impatient clients and retry


In this example model, clients who are only willing to wait a limited amount of time are modeled. Each client has an individual waiting time tolerance (drawn from a waiting time tolerance distribution). If this tolerance is exceeded while waiting in the queue in front of the service station, the client leaves the station without being served and may start a new attempt to be served later.


Resource shared between multiple stations


In analytical queueing theory, the operators and their stations form a unit. In practice, however, an operator can be deployed at several process stations. This is illustrated in this simulation model: There are two process stations and two operators who can move freely between these stations and always go next to whichever station has the most waiting clients.


Limited number of clients at a station


In this simple example model, both read and write access to a user-defined variable during the simulation is shown.


Operators as simulation objects


Normally, only the clients move as objects through the queueing network defined as a flow chart. The operators - even if they can work at different stations - do not appear as independent objects. In this example model, however, the operators are also modeled as independent objects. Specifically, the model consists of an open queueing network for the clients and a closed system in which the operators move.


Transport of components between multiple factories


This example model shows a way of modeling the transport of workpieces between different stations. Transports represent a way of transferring objects in the network under consideration that does not require movement along an edge. The transport option allows certain resources to be required for a transport and time durations to be defined for transports.


Transport of components using transporters


Transporting workpieces using transporters is a modeling option that is particularly advantageous for displaying the model in an animation. It is possible to see directly how the workpieces are moved between the stations using small vehicles. If the transporters are requested at a station, they may have to drive to the station empty first. This means that the modeling largely corresponds to the real behavior of transporting workpieces using vehicles.


Combining orders and warehouse items


This example shows how customer inquiries and manufactured products can be brought together under the control of signals.


Temporary batches


In this model, temporary batches are formed from several items that are to be shipped together. After the batch objects arrive at the recipient, the batches are split up again and the original items become visible again.


Model with break times for the operator


This example model shows how operator break times can be mapped in a simulation. In contrast to the models from analytical queueing theory, it is not a problem in a simulation to map varying numbers of operators at a process station over time.


Machine with set up times


This model illustrates campaign production: Clients of two types arrive at the system. If there is a change from one client type to another at the process station, additional set-up time is required. Therefore, the queue is rearranged in an attempt to stay with one client type for as long as possible.


Effects of rework on the residence times


If rework occurs during production, this may significantly increase the workload. If the additional load caused by the rework is not taken into account, this can lead to significantly longer waiting times than actually expected.


Releasing clients based on Javascript


This example model illustrates how a script station can be used to limit the number of clients in a network segment.


Restricted buffer between the stations


If there is only a limited buffer available in front of the process stations, a certain proportion of the incoming clients will be blocked. In this example model, a queueing model consisting of two stations connected in series, each with an (adjustable) limited buffer in front of it, is considered. Station A only accepts clients if it is certain that they can be forwarded to station B later, i.e. no clients are rejected within the system; instead, they are turned away directly at the entrance.


Continuous time values


This model demonstrates the use of continuous-time values in Warteschlangensimulator. Clients pass through three stations one after the other; at each station, the rate of change of a value that changes continuously over time during the simulation is set. The diagram shows the change of the value over time.


Queues with jockeying


"Jockeying" refers to the process when a customer at the very end of a queue notices that their neighboring queue has become significantly shorter and therefore changes queues. In the example model, arriving customers initially select the shorter of two queues. However, if they later notice that the other queue has become shorter than their own, the last customer switches to the shorter queue.


Effect of the queueing discipline


The service order, service in order of arrival (FIFO) or service in reverse order of arrival (LIFO), has no influence on the average waiting times at a station. However, LIFO leads to significantly higher variation of the waiting and the lead times than FIFO.


Shiftplans


This example model shows the use of shift plans for the operators.


Batch arrivals and batch service


In this model, clients arrive in groups (batch arrivals) and are also served in groups (batch service). Batch arrivals lead to an increase in the coefficient of variation of the inter-arrival times. As a result of batch service, it can happen that the operator is idle while clients still have to wait (because the target batch size has not yet been reached). The effect increases as the service batch size increases. However, this deterioration does not occur evenly, but in steps depending on the ratio of the arrival batch size to the service batch size.


Effect of the batch size of the transports on the system performance


In this example model, documents are transported in groups between stations. The larger the groups are, the less transport time and associated resource consumption is required for a single document. However, as the group size increases, the lead times also increase, as the documents have to wait more and more frequently until the respective transport batch size is reached.


Interval-dependent inter-arrival times


In this example model, the average inter-arrival time of the clients changes every three hours between 140 and 85 seconds. With c=1 operator and an average service time of 80 seconds, this results in a utilization that varies between 57% and 94%.
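
As a minimal sketch (assuming the parameters stated above), the two utilization levels follow directly from ρ = E[S]/E[I]:

```python
# Minimal sketch: the utilization levels stated above follow directly from
# rho = E[S] / E[I] with one operator and E[S] = 80 seconds.
ES = 80.0
for EI in (140.0, 85.0):
    print(f"E[I] = {EI:.0f} s  ->  utilization = {ES / EI:.0%}")
# prints approximately 57% and 94%
```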


Closed queueing network


Most queueing models are open networks: Clients arrive at the system, may be routed to several stations and then leave the system at the end. In a closed queueing network, on the other hand, a finite number of clients circulate. Therefore, no time durations can be recorded on a client basis. However, time durations at the individual stations can be recorded.


Time-controlled service


If the clients arrive at the process station directly after their arrival at the system, there are only minimal waiting times for the clients, but the operator has to switch back and forth frequently between serving and idling. In this example model, all arriving clients are initially collected and directed to the process station every 15 minutes on a time-controlled basis. The process station can then serve all clients who have arrived by then (a longer working phase), followed by a longer idle phase. This procedure is similar to batch service; however, clients are not released based on their number, but at fixed intervals.


Waiting time tolerances of successful clients and waiting time cancelers


Waiting cancelations occur particularly in customer service systems. When setting up a simulation model, the average waiting time tolerance of the clients has to be mapped in the model. However, the waiting time tolerance can only be measured directly for the clients who cancel waiting - namely as their cancelation time. For the successful clients, in contrast, only a lower bound of their waiting time tolerance is known (namely their actual waiting time). The example shows that neither time duration provides a valid estimate of the actual average waiting time tolerance across all clients. This is particularly due to the fact that those who cancel waiting tend to have a short waiting time tolerance and are therefore not representative of the population as a whole.


Combined open and closed queueing system




Queueing system design


In this example, four service alternatives with the same total operating capacity are compared: a service system with two parallel operators; two individual process stations with a 50:50 split of the arriving clients at the entrance of the system; a process station with a single operator who works twice as fast; and a process station at which two clients are always served simultaneously (batch service). Despite an identical arrival rate and mathematically identical service capacity, different values for the average waiting times result.


Queueing system design with control


If a client has to choose one of two queues when entering the system and this is done at random (e.g. because the client cannot see the actual queue lengths), the operator may end up idling at one station while clients have to wait at the other. If the client can see the queue lengths and selects the shorter one, this mitigates the effect. However, it can still occur due to the stochastic service times. The most efficient solution is therefore a shared queue in which clients are only assigned to one or the other station immediately before the service counters. These three control alternatives are considered in this example model.


Push and pull production


In this model, a two-stage push production is compared with a two-stage pull production. The average number of clients in the two production processes and the throughput in both cases are shown.


Push and pull production with multiple segments


In this example model, a pull production consisting of three stations is shown. Stock is limited at the stations by pull barriers. A signal is used to control the client source.


Queue length depending process times


Buffers in front of stations can ensure that a station can continue to work even if the previous station fails for a short time. On the other hand, high stock levels can mean that additional effort is required for restacking in order to comply with the FIFO operating sequence, i.e. that the service times increase. In this example model, the effects of small or large buffers can be analyzed.


Workload depending number of operators


In analytical queueing models, the number of operators is usually constant over the entire runtime. In simulation models, however, shift plans can also be mapped. A shift plan means that the number of operators is controlled according to fixed time specifications. In this example model, however, the number of operators is controlled depending on the current demand. It can be seen that this allows better performance indicators to be achieved with less total operator capacity.


Homogenization of the number of clients


This model shows various strategies for homogenizing the arrival flow of clients at a process station and thus reducing the variation of the waiting times. The effectiveness of the different methods can be compared directly.


Serial versus parallel processing


In this example model, serial processing is compared with parallel processing. In total, each incoming object is checked by three stations in each of the cases. A certain proportion of the objects are sorted out at each station. In serial processing, this results in a lower workload at the second and third stations. In the parallel case, each object has to be processed by each station (which leads to a higher workload). On the other hand, the processing times are shorter.


Combined FIFO LIFO production


This model examines how the processing policy (FIFO or LIFO) affects the performance indicators of a system: In the FIFO case, the variance of waiting times is lower than in the LIFO case. However, the model is configured in such a way that the service times in FIFO mode increase with increasing queue length. Therefore, at a certain (configurable) point, the system switches to the faster LIFO mode.


Strategies for avoiding setup times


In this example model, two production strategies are compared: Clients of two different types arrive at the system. They are served at three process stations, each with one operator. When changing from one client type to another, an additional set-up time is required at the process stations. In the upper model, incoming clients are assigned to the process stations according to the shortest queue length. In the lower model, if several stations are idle, the one for which no set-up time is required is selected if possible.


Lead times versus throughput


In pull production, as illustrated in this model, a new workpiece is only released for processing at a process station if there is sufficient free capacity at the next station. In this example, the capacity of the buffer before station B can be configured. The larger the buffer, the less often the process station is idle and the higher the throughput - but also the longer the lead times for the workpieces.


Economy of scale


The economy of scale states that a larger system delivers better key performance indicators than a smaller system with the same relative utilization. In the example model, 5 models are compared, which differ only in the average inter-arrival time and the number of operators. The number of operators is increased to the same extent as the average inter-arrival times are reduced, so that a relative utilization of 80% is always maintained. It can be seen that the waiting times in the larger models are shorter than in the smaller models.
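
A minimal sketch of this effect, assuming a mean service time of E[S] = 80 seconds and a fixed utilization of 80% (both values chosen only for illustration), uses the Erlang C formula for an M/M/c system:

```python
import math

# Minimal sketch of the economy-of-scale effect using the Erlang C formula for
# an M/M/c system. The parameter values are illustrative assumptions:
# mean service time E[S] = 80 s and a fixed utilization of 80%.
def erlang_c_wait(c, rho, ES):
    """Mean waiting time E[W] in an M/M/c system with c operators and utilization rho."""
    a = c * rho                                            # offered load
    s = sum(a**k / math.factorial(k) for k in range(c))
    p_wait = 1.0 / (s * (1.0 - rho) * math.factorial(c) / a**c + 1.0)  # Erlang C
    return p_wait * ES / (c * (1.0 - rho))

ES, rho = 80.0, 0.8
for c in range(1, 6):
    print(f"c = {c}: E[W] = {erlang_c_wait(c, rho, ES):.0f} s")
```

At the same 80% utilization, the computed mean waiting time drops from 320 seconds for c = 1 to roughly 44 seconds for c = 5.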


Use of central warehouses close to the factory


In this example, different delivery strategies for local interim storage facilities are considered. Either the factories deliver the manufactured products immediately to the interim storage facilities (push principle) or only on explicit demand (pull principle). The average time customers have to wait for their orders in both cases is analyzed.


Splitting and joining of (partial) products


In some production systems, products are split into several components, which are processed individually and independently and then reassembled with the correct other components of the original product. One example is the repair of a laptop. In this example, various products arrive at the system. These are first given a unique ID and then split into two or three components (which are given the ID of the original product). After processing, the subcomponents are reassembled correctly according to their IDs.


Client and operator types


In this model, there are two client types (A and B) waiting in a common queue. The clients are served at three service stations: One for type A clients only, one for type B clients only and one for all clients. If several suitable service stations are available, clients are preferentially directed to the specialized service stations (in order to keep the flexible service station as available as possible).


Service order depending on service time


In the default case, the service time at a process station is determined after the client has been removed from the queue. In this case, prioritization by service time is not possible. However, if the service time is already known before the client reaches the queue, it is possible to prioritize according to the shortest or longest service time.


Operators with different speeds at one process station


On average, a client arrives at the system every E[I]=60 seconds. There are 4 slow operators (E[S]=300 seconds) and 4 fast operators (E[S]=200 seconds) available. The slow operators may be cheaper per service process and are therefore preferred.


Load differentiation via minimum waiting times


If the utilization of capacity between differently priced process stations is to be differentiated as much as possible, this can be done using appropriate allocation strategies. If this is not sufficient, additional minimum waiting times can be used before the more expensive station can be accessed. In contrast to pure allocation strategies, however, these worsen the client-specific parameters of the system.


Law of large numbers


The law of large numbers states that the mean value resulting from the repeated execution of a random experiment (time average) converges to the expected value of the corresponding probability distribution (ensemble average). In this example model, a discrete uniform distribution with four possible outcomes is depicted. The relative frequency with which a client chooses each of the four possible paths converges to 25%.
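
A minimal sketch of this convergence outside the simulator (the sample sizes and the seed are arbitrary):

```python
import random

# Minimal sketch of the convergence described above: the relative frequency of
# each of four equally likely paths approaches 25% as the number of simulated
# clients grows. Sample sizes and seed are arbitrary.
random.seed(1)
for n in (100, 10_000, 1_000_000):
    counts = [0, 0, 0, 0]
    for _ in range(n):
        counts[random.randrange(4)] += 1
    print(n, [round(c / n, 3) for c in counts])
```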


Galton box


In a Galton board, balls repeatedly fall randomly to the left or right. It is used to illustrate the binomial distribution. In this example model, a Galton board is modeled as a flow diagram.
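
A minimal sketch of the same experiment in plain code (10 rows and 100,000 balls are illustrative assumptions), comparing the simulated bin frequencies with the binomial probabilities:

```python
import math
import random
from collections import Counter

# Minimal sketch of a Galton board: each ball falls left or right at every row,
# so the number of "right" steps follows a binomial distribution Bin(rows, 0.5).
# The number of rows and balls are illustrative assumptions.
random.seed(1)
rows, balls = 10, 100_000
bins = Counter(sum(random.randint(0, 1) for _ in range(rows)) for _ in range(balls))
for k in range(rows + 1):
    expected = math.comb(rows, k) * 0.5**rows
    print(f"bin {k:2d}: simulated {bins[k] / balls:.4f}, binomial {expected:.4f}")
```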


Effects of the variation of the service times


Not only the utilization of a station has an influence on the mean waiting times and queue lengths, but also the variation of the service times. The more the service times vary, the worse the system's performance indicators will be at otherwise identical capacity utilization.
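
This claim can be made concrete with the Pollaczek-Khinchine formula for an M/G/1 queue; the following is a sketch with illustrative parameter values (utilization 80%, mean service time 80 seconds), varying only the coefficient of variation of the service times:

```python
# Minimal sketch using the Pollaczek-Khinchine formula for an M/G/1 queue:
#   E[W] = rho * E[S] * (1 + CV^2) / (2 * (1 - rho))
# Utilization and mean service time are held fixed (illustrative values);
# only the coefficient of variation CV of the service times changes.
rho, ES = 0.8, 80.0
for cv in (0.0, 0.5, 1.0, 2.0):
    EW = rho * ES * (1 + cv**2) / (2 * (1 - rho))
    print(f"CV = {cv}: E[W] = {EW:.0f} s")
```

With deterministic service times (CV = 0) the mean waiting time is half the exponential (CV = 1) value; with CV = 2 it is two and a half times as long, at identical utilization.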


Poisson arrivals see time averages


The principle "Poisson Arrivals See Time Averages" (PASTA) means that if the performance indicators of a system are only recorded at the times of client arrivals (instead of continuously), the same performance indicator will arise as with continuous recording if the inter-arrival times of the clients are subject to the exponential distribution. This situation is visualized in the example model.


Central limit theorem


In this example model, each client passes through a delay station 10 times, at which an exponentially distributed waiting time occurs. The histogram of the total waiting times of the clients is shown, i.e. the histogram of the sums of 10 exponentially distributed random numbers. This sum distribution approximates a normal distribution. This is the statement of the central limit theorem.
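
A minimal sketch of the same experiment (the mean delay of 1 per stage and the sample size are illustrative assumptions): the sum of 10 exponentially distributed values has mean 10 and standard deviation √10 ≈ 3.16, and roughly 68% of the sums fall within one standard deviation of the mean, as expected for an approximately normal distribution.

```python
import random
import statistics

# Minimal sketch of the same experiment: sums of 10 exponentially distributed
# delays (mean 1 each, an illustrative assumption) have mean 10 and standard
# deviation sqrt(10) and are approximately normally distributed.
random.seed(1)
sums = [sum(random.expovariate(1.0) for _ in range(10)) for _ in range(100_000)]
m, s = statistics.mean(sums), statistics.stdev(sums)
share = sum(1 for x in sums if abs(x - m) <= s) / len(sums)
print(f"mean = {m:.2f} (exact: 10), std = {s:.2f} (exact: 3.16)")
print(f"share within one std of the mean: {share:.1%} (normal distribution: 68.3%)")
```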


Hitchhiker's Paradox


The hitchhiker's paradox (also known as the bus-stop paradox) is a well-known phenomenon from queueing theory. If passengers arrive at random times at a stop where the bus departures are governed by random inter-arrival times rather than by a fixed timetable, the average waiting time of a passenger is not half the average inter-arrival time of the buses, but is usually longer. The reason for this is that a passenger is more likely to arrive during a long interval than during a short one.
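
A minimal sketch of the effect outside the simulator, using length-biased sampling (exponential bus headways with a mean of 10 minutes are an illustrative assumption): a passenger arriving at a random point in time falls into a headway with probability proportional to its length, and the remaining wait is uniform within that headway.

```python
import random

# Minimal sketch of the effect via length-biased sampling. Exponential bus
# headways with a mean of 10 minutes are an illustrative assumption: a passenger
# arriving at a random point in time falls into a headway with probability
# proportional to its length, and the remaining wait is uniform within it.
random.seed(1)
mean_gap = 10.0
gaps = [random.expovariate(1.0 / mean_gap) for _ in range(200_000)]
picked = random.choices(gaps, weights=gaps, k=100_000)      # length-biased choice
waits = [random.uniform(0.0, g) for g in picked]
print(f"mean headway:            {sum(gaps) / len(gaps):.2f} min")
print(f"average passenger wait:  {sum(waits) / len(waits):.2f} min")
print(f"naive expectation:       {mean_gap / 2:.1f} min")
```

For exponentially distributed headways the simulated average wait comes out close to the full mean headway of about 10 minutes rather than the naive 5 minutes.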


Histograms of different probability distributions


In this model, random numbers are generated via several delay stations according to the uniform distribution, the triangular distribution, the log-normal distribution, the exponential distribution, the gamma distribution and the Weibull distribution. The histograms of the random numbers generated are displayed.
