
PROGRAM

Abstracts of PhD student presentations




AL-IBRAHIM, ASSIL

Faculty of Econometrics
University of Amsterdam
Roetersstraat 11
1018 WB Amsterdam
a.al-ibrahim@uva.nl

Supervisor
Van der Wal

Title
A semi-Markov decision approach for train conflict resolution

Abstract

Railways are a common mode of transportation, and the passengers that use them demand an increasing level of service. One element of a high service level is the punctuality of trains. Two types of delays can be distinguished. On the one hand there are primary delays, which are a consequence of external factors that in many cases cannot be foreseen; examples are a minor malfunction of a train, a train driver being late, or an overcrowded platform. On the other hand there are secondary delays, which are often caused by primary delays: one train blocks another, resulting in an (extra) delay for the later one.
This paper addresses the secondary delays. These delays are a consequence of a conflict between two or more trains. The conflict that we consider here is that of a junction: two or more disjoint tracks come together and, from a certain point onwards, have to share what we will call the 'destination track'. When two or more trains try to enter the 'destination track' at more or less the same time, a decision must be made about which train to move first. The optimal decision takes several factors into account. We distinguish the speed of the trains, their acceleration rate given their mass, the urgency of the trains, previous decisions, and the trains that are still to arrive at the junction.
To solve this problem we use a semi-Markov decision process (SMDP) approach. An advantage of the SMDP model is that it is able to deal with the dynamics of the train system. A disadvantage is that a description as an SMDP tends to result in systems with tremendously large state spaces. As will be shown, it is possible to find a balance between these two aspects. By means of simulation, the performance of the SMDP model is compared to that of simple rules such as FIFO, priority to passenger trains, and some other rules.




BIJVANK, MARCO

Department of Mathematics
Vrije Universiteit Amsterdam
De Boelelaan 1081a
1081 HV Amsterdam
mbijvank@few.vu.nl

Supervisor
Koole

Title
Managing decentralized inventories at hospitals

Abstract

We consider the problem of inventory management for medical items at hospitals. The objective is to find a balance between desired service and inventory levels. In this way high-quality health care can be provided at all times while inventory costs and storage space are limited. We develop a lost-sales service-based (R,s,Q) inventory model with a capacity constraint to represent the replenishment problem at hospitals. To overcome the problem of exhaustive computation times we construct a heuristic approach and a spreadsheet-based inventory rule to determine near-optimal values for the safety stock and the order quantity. The spreadsheet-based inventory rule provides solutions for hospitals that differ less than 1% from optimal solutions. Numerical experiments and a case study clearly demonstrate both the accuracy and the applicability of our approach.
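The (R,s,Q) trade-off described above can be illustrated with a toy simulation. The sketch below is not the authors' model: it assumes a simple uniform per-period demand and a fixed lead time, with entirely hypothetical numbers, purely to show how the fill rate and average stock respond to the policy parameters s and Q.

```python
import random

def simulate_rsq(R, s, Q, lead_time, periods, seed=1):
    """Simulate a periodic-review (R,s,Q) lost-sales policy.

    Every R periods the inventory position (on hand + on order) is
    reviewed; if it is at or below the reorder point s, a batch of
    size Q is ordered, arriving after a fixed lead time.  Unmet
    demand is lost, not backordered.  Returns the fill rate
    (fraction of demand served) and the average on-hand inventory.
    """
    rng = random.Random(seed)
    on_hand = s + Q
    pipeline = {}                      # arrival period -> quantity on order
    served = total_demand = inv_sum = 0
    for t in range(periods):
        on_hand += pipeline.pop(t, 0)  # receive any arriving order
        demand = rng.randint(0, 4)     # toy uniform per-period demand
        total_demand += demand
        sold = min(on_hand, demand)    # excess demand is lost
        on_hand -= sold
        served += sold
        inv_sum += on_hand
        position = on_hand + sum(pipeline.values())
        if t % R == 0 and position <= s:
            pipeline[t + lead_time] = Q
    return served / total_demand, inv_sum / periods

fill, avg_inv = simulate_rsq(R=1, s=10, Q=20, lead_time=2, periods=10_000)
print(f"fill rate {fill:.3f}, average inventory {avg_inv:.1f}")
```

Raising s or Q pushes the fill rate up at the cost of a higher average inventory; the capacity constraint in the actual model caps exactly that second quantity.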




BROEK, JOHN VAN DEN

Department of Mathematics and Computer Science
Eindhoven University of Technology
P.O. Box 513
5600 MB Eindhoven
j.j.j.v.d.broek@tue.nl

Supervisors
Hurkens, Woeginger

Title
Timetabling problems at the TU Eindhoven

Abstract

The students of the Industrial Design department at the TU Eindhoven are allowed to design part of their curriculum by selecting courses from a huge course pool. They do this by handing in ordered preference lists with their favorite courses for the forthcoming time period. Based on this information (and on many other constraints), the department then assigns courses to students. Until recently, the assignment was computed by human schedulers who used a quite straightforward greedy approach. In 2005, however, the number of students increased substantially, and as a consequence the greedy approach no longer yielded acceptable results.
We present the solution of this real-world timetabling problem: we give a complete mathematical formulation of it, and we explain all the constraints resulting from the situation in Eindhoven. We solve the problem using lexicographical optimization with four subproblems. For each of the four subproblems, an elegant integer linear programming model is given which can easily be put into CPLEX. Finally, we report on our computational experiments and results on the Eindhoven real-world data.




BYRKA, JAROSLAW

Center for Mathematics and Computer Science (CWI)
Kruislaan 413
1098 SJ Amsterdam
j.byrka@cwi.nl

Supervisor
Aardal

Title
An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem

Abstract

We consider the metric uncapacitated facility location problem (UFL). In this paper we modify the (1+2/e)-approximation algorithm of Chudak and Shmoys to obtain a new (1.6774, 1.3738)-approximation algorithm for the UFL problem. Our linear programming rounding algorithm is the first one that touches the approximability limit curve established by Jain et al.
As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. Our new algorithm, when combined with a (1.11, 1.7764)-approximation algorithm proposed by Jain, Mahdian and Saberi, and later analyzed by Mahdian, Ye and Zhang, gives a 1.5-approximation algorithm for the metric UFL problem.
This algorithm improves over the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang, and it cuts the gap with the approximability lower bound by 1/3.
Additionally, we show that a single randomized clustering procedure could be used instead of the greedy clustering used in the algorithms of Shmoys et al., Chudak et al., Sviridenko, and in the current paper.




COENEN, TOM

Faculty of EEMCS
University of Twente
P.O.Box 217
7500 AE Enschede
t.j.m.coenen@utwente.nl

Supervisors
Boucherie/van den Berg

Title
An upper bound on multi-hop wireless network performance

Abstract
Given a placement of wireless nodes in space and a traffic demand between pairs of nodes, can these traffic demands be supported by the resulting network? A key issue for answering this question is wireless interference between neighbouring nodes, including self-interference along multi-hop paths. This paper presents a generic model for sustainable network load in a multi-hop wireless network under interference constraints, and recasts this model into a multicommodity flow problem with interference constraints. Using Farkas' lemma, we obtain a necessary and sufficient condition for feasibility of this multicommodity flow problem, leading to a tight upper bound on network throughput. Our results are illustrated by examples.
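For reference, the feasibility argument mentioned above rests on Farkas' lemma. A standard statement (one common variant; the paper may use a different but equivalent form) is:

```latex
\textbf{Farkas' lemma.} Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
Then exactly one of the following two systems has a solution:
\[
  \text{(i)}\quad Ax = b,\ x \ge 0,
  \qquad\text{or}\qquad
  \text{(ii)}\quad A^{\mathsf{T}} y \ge 0,\ b^{\mathsf{T}} y < 0 .
\]
```

In the feasibility setting above, a vector $y$ solving (ii) acts as a certificate that the demanded flows cannot be supported under the interference constraints.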


CREMERS, MARLOES

Faculty of Economics
Department of Econometrics
University of Groningen
P.O. Box 800
9700 AV Groningen
m.l.a.g.cremers@rug.nl

Supervisors
Klein Haneveld/Van der Vlerk

Title
A day-ahead paratransit planning problem

Abstract
We consider a dynamic planning problem for the transport of elderly and disabled people. The focus is on a decision to take one day ahead when only part of the requests is known: which requests to serve with own vehicles, and which requests to subcontract to taxis? The developed model is a non-standard two-stage integer recourse model. Both stages consist of two parts: requests are clustered into routes, and these routes are assigned to vehicles.
To solve this model, a genetic algorithm approach is used. Computational results are presented for randomly generated, semi-realistic data sets. In order to solve real-life problems the algorithm needs to become faster. We explain how we have increased the speed of the algorithm while decreasing the quality of the solution only marginally, and present some results.
Furthermore, changing the simplifying but unrealistic assumption about the arrival of the late requests leads to a different model. We briefly comment on this model which integrates ideas of stochastic programming and online optimization.




GONG, YEMING

Department of Management and Innovation
Erasmus University
P.O. Box 1738
3000 DR Rotterdam
ygong@rsm.nl

Supervisor
De Koster

Title
The multi-location transshipment problem with positive replenishment lead times

Abstract
Transshipments, monitored movements of material at the same echelon of a supply chain, represent an effective pooling mechanism. With a single exception, research on transshipments overlooks replenishment lead times. The only approach for two-location inventory systems with non-negligible lead times could not be generalized to a multi-location setting, and the proposed heuristic method cannot guarantee to provide optimal solutions. This paper uses simulation optimization by combining LP/network flow formulation and infinitesimal perturbation analysis to examine the multi-location transshipment problem with positive replenishment lead times, and demonstrates how to obtain the optimal base stock quantities with sample path optimization. From a methodological perspective, this paper uses an elegant duality-based gradient computation method to improve computational efficiency. In test problems, our algorithm was also able to achieve better objective values than an existing algorithm.
Key words: Transshipment, Simulation Optimization, Infinitesimal Perturbation Analysis (IPA)




HAIJEMA, RENÉ

Faculty of Science and Econometrics
Department of Quantitative Economics
University of Amsterdam
Roetersstraat 11
1018WB Amsterdam
r.haijema@uva.nl

Supervisor
Van der Wal

Title
Control of traffic lights by decomposition of a Markov decision problem

Abstract

Optimal control of traffic lights, such that the overall average waiting time per car is minimized, appears to be quite complicated. For simple cases of a single intersection in isolation, truly optimal results can be obtained by (numerically) solving a Markov decision problem (MDP), for instance by value iteration. For more complex infrastructures, the MDP becomes intractable due to the curse of dimensionality in the state space. We suggest a one-step policy improvement approach based on the relative values of a (near-)optimal fixed cycle. In a fixed cycle, the order in which the queues are served as well as the duration of the green period of each flow are fixed. Since the time spent at a queue does not depend on the lengths of the other queues, (numerical) computation of the relative values is done flow by flow. Thus one can decompose the value of a state of the intersection into values for each queue.
Because of the periodicity of the fixed cycle, the computation of relative values is not straightforward. We discuss the right relative value concept for periodic problems. Using those relative values we develop simple rules that seem to perform close to optimal. The key idea of our relative-value-based rules is to interrupt the fixed cycle, by shortening or lengthening green periods, and, when all signals show red, to prescribe which queue to serve next. We consider several relative-value-based control mechanisms and compare them to the underlying optimal fixed cycle, some simple fully actuated rules, and (if tractable) the optimal (MDP) policy. If time permits we also address how these rules can be extended to deal with arrival information, and extensions to networks of intersections.




HENDRIKSEN, CONNO

Department of Operational Methods for Production and Logistics
University of Twente
P.O.Box 217
7500 AE Enschede
c.j.h.hendriksen@bbt.utwente.nl

Supervisor
Van Harten

Title
Optimizing spare part inventory control for service contracts

Abstract
For the end users of capital goods, such as MRI scanners, defense systems onboard navy vessels, or power plants, availability is a crucial performance indicator. Downtime of such equipment may have serious (financial) consequences, e.g. in terms of loss of production, quality reduction in health care, or ineffective military missions. Increasingly, the users of capital goods as mentioned above do not consider the system upkeep (preventive and corrective maintenance, spare part provisioning) as their core activity. Therefore, some of them try to make the supplier responsible for the system upkeep by agreeing on a service contract with a certain Service Level Agreement (SLA). The penalties for not meeting the agreed service levels are usually determined by measuring the service level in a given period (month, year).
However, the spare part inventory levels are usually determined based on the (long-term) average service level.
Our goal is to gain insight into the behavior of short-term service levels in relation to spare part stock levels, and to develop quantitative methods to optimize the short-term service levels given a restricted budget to invest in spare parts. This insight will help suppliers manage their spare part inventory control for service contracts.


IERSEL, LEO VAN

Department of Mathematics and Computer Science
Eindhoven University of Technology
P.O. Box 513
5600 MB Eindhoven
l.j.j.v.iersel@tue.nl

Supervisors
Keijsper/Stougie

Title
Resolving ambiguity in genetic data

Abstract

The DNA of so-called diploid organisms, for example humans, consists of pairs of chromosomes. These chromosomes can be seen as strings over the alphabet {A,C,G,T}. Although it is possible to obtain the data from both chromosomes of a pair in separate strings, this is still very expensive and time-consuming. It is much easier and faster to obtain a string with the conflated data of both chromosomes. However, this string (the genotype) does not capture all the information from the strings of the two chromosomes (the haplotypes). It merely describes, for each site, the two letters the chromosomes have at that site, but it does not say which letter is on which chromosome. This data is thus ambiguous, and an important problem in computational biology is to resolve this ambiguity without the expensive process of sequencing the two chromosomes of a pair separately. The well-studied “parsimony” model considers the genotypes of a population, and its objective is to find the smallest set of haplotypes resolving all the genotypes. This problem is known to be NP-hard in general, but we consider different restrictions, give polynomial-time algorithms for some of them, and give approximation results for others. We extend our results to the Minimum Perfect Phylogeny Problem, which asks for the minimum number of haplotypes that resolve all the genotypes and can also be embedded as the leaves of a phylogenetic tree.
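The ambiguity described above can be made concrete with a small enumeration. The sketch below is illustrative only, not one of the algorithms from the talk: it lists all haplotype pairs consistent with a single genotype. With k heterozygous sites there are 2**(k-1) unordered pairs, which is why parsimony across a whole population is needed to pin the haplotypes down.

```python
from itertools import product

def resolutions(genotype):
    """Enumerate all unordered haplotype pairs resolving a genotype.

    The genotype is given per site as a string of one letter
    (homozygous site) or two letters (heterozygous site).  At each
    heterozygous site one letter goes to the first haplotype and the
    other to the second, so k heterozygous sites give 2**(k-1)
    distinct unordered pairs (for k >= 1).
    """
    per_site = []
    for site in genotype:
        if len(site) == 1:                 # homozygous: same letter on both
            per_site.append([(site, site)])
        else:                              # heterozygous: two orientations
            a, b = site
            per_site.append([(a, b), (b, a)])
    pairs = set()
    for assignment in product(*per_site):
        h1 = "".join(x for x, _ in assignment)
        h2 = "".join(y for _, y in assignment)
        pairs.add(frozenset((h1, h2)))     # unordered pair of haplotypes
    return pairs

pairs = resolutions(["A", "CG", "T", "AT"])
print(len(pairs))   # 2 unordered pairs for 2 heterozygous sites
```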




IVANOV, IVAN

Delft University of Technology
Mekelweg 4
2628 CD Delft
i.d.ivanov@ewi.tudelft.nl

Supervisors
Roos/De Klerk

Title
Parallel implementation of a semidefinite optimization solver on a distributed memory cluster

Abstract
In this talk we present the algorithmic framework and practical aspects of implementing a parallel version of a primal-dual semidefinite programming solver, based on the CSDP solver, on a distributed-memory (Beowulf) cluster using the Message Passing Interface (MPI) and the ScaLAPACK library. A new feature is implemented to deal explicitly with problems that have rank-one constraint matrices. As a result the software requires less memory storage for those typically dense matrices, and smaller input files. Compared to other distributed-memory primal-dual interior-point SDP solvers, a speedup is achieved by reducing the time needed to read and store the data matrices in memory and to form the Schur complement matrix.




JANSSEN, ELLEKE

Department of Operations Research and Game Theory
Tilburg University
P.O. Box 90153
5000 LE Tilburg
e.janssen_1@uvt.nl

Supervisors
Den Hertog/Strijbos

Title

Assessing the effects of using demand parameter estimates in inventory control

Abstract
Inventory models need some specification of the distribution of demand in order to find the optimal order-up-to level or reorder point. This distribution is unknown in real life, and there are several ways to overcome this problem. One approach is to assume a distribution, estimate its parameters, and replace the unknown demand parameters by these estimates in the theoretically correct model. Earlier research suggests that this approach leads to underperformance, even if the true demand distribution is indeed the assumed one. This paper identifies the cause of the underperformance and quantifies it in the case of normally distributed demand. Furthermore, the formulae for the order-up-to levels are corrected, analytically where possible and otherwise by use of simulation and linear regression. Simulation shows that these corrections improve the attained performance.
Keywords: Unknown Demand Parameters, Inventory Control, Normal Distribution, Service Level Criterion
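The underperformance effect can be reproduced in a few lines. The toy experiment below uses hypothetical numbers (true demand N(100, 15), a 95% cycle service target): it plugs sample estimates into the textbook order-up-to formula S = mu + z*sigma and measures the attained service level, which falls short of the target when the demand history is short.

```python
import random
from statistics import NormalDist, mean, stdev

def attained_service(target, n_history, n_trials=20_000, seed=2):
    """Attained cycle service level when the order-up-to level is set
    from estimated demand parameters.

    True demand is N(100, 15).  Each trial estimates the mean and the
    standard deviation from n_history past demands, sets the level
    S = mean_hat + z * sd_hat (z the target quantile of the standard
    normal), and checks whether the next demand is covered by S.
    """
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(target)
    mu, sigma = 100.0, 15.0
    covered = 0
    for _ in range(n_trials):
        history = [rng.gauss(mu, sigma) for _ in range(n_history)]
        s_level = mean(history) + z * stdev(history)
        covered += rng.gauss(mu, sigma) <= s_level
    return covered / n_trials

level = attained_service(target=0.95, n_history=10)
print(f"attained service level {level:.3f} (target 0.95)")
```

The gap closes as the demand history grows; correcting the order-up-to formula for estimation error, as the paper does, is the alternative to simply collecting more data.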




KORTEWEG, PETER

Department of Mathematics and Computer Science
Eindhoven University of Technology
P.O. Box 513
5600 MB Eindhoven
p.korteweg@tue.nl

Supervisor
Stougie

Title
Aggregation that makes sense

Abstract

A sensor network consists of sensing devices which may exchange data through wireless communication; sensor networks are highly energy constrained since they are usually battery operated. Data aggregation is a possible way to save energy consumption: nodes may delay data in order to aggregate them into a single packet before forwarding them towards some central node (sink). However, many applications impose constraints on delivery times; this translates into latency constraints for data arriving at the sink.
We study the problem of data aggregation to minimize maximum energy consumption under latency constraints on sensed data delivery, and we assume unique transmission paths that form a tree rooted at the sink. We prove that the off-line problem is strongly NP-hard and we design a 2-approximation algorithm. The latter uses a novel rounding technique. Almost all real-life sensor networks are managed on-line by simple distributed algorithms in the nodes. In this context we consider both the case in which sensor nodes are synchronized and the case in which they are not. We consider distributed on-line algorithms and use competitive analysis to assess their performance. We show that our algorithm for the synchronized model is best possible up to a multiplicative constant.




MIRETSKIY, DENIS

Faculty of EEMCS
University of Twente
P.O.Box 217
7500 AE Enschede
d.miretskiy@utwente.nl

Supervisor
Scheinhardt

Title
Efficient simulation of a tandem queue with server slow-down

Abstract
Tandem Jackson networks, and more sophisticated variants, have found widespread application in various domains. One such variant is the tandem queue with server slow-down, in which the server of the upstream queue reduces its service speed as soon as the downstream queue exceeds some prespecified threshold, to provide the downstream queue some sort of 'protection'. This paper focuses on the overflow probabilities in the downstream queue. Owing to the Markov structure, these probabilities can be computed numerically, but the resulting system of linear equations is usually large. An attractive alternative is to resort to simulation, but this approach is cumbersome due to the rarity of the event under consideration. A powerful remedy is to use importance sampling, i.e., simulation under an alternative measure, where unbiasedness of the estimator is retrieved by weighting the observations with appropriate likelihood ratios. To find a good alternative measure, we first identify the most likely path to overflow. For the normal tandem queue (i.e., no slow-down) this path was known, but we develop an appealing novel heuristic that can also be applied to the model with slow-down. The knowledge of the most likely path is then used to devise importance sampling algorithms, both for the normal tandem system and for the system with slow-down. Our experiments indicate that the corresponding new measure is sometimes asymptotically optimal, and sometimes not. We systematically analyze the cases that may occur.
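The importance-sampling idea (change the measure so the rare event becomes frequent, then reweight by the likelihood ratio) can be shown in miniature. The sketch below is not the tandem-queue algorithm of the talk: it merely estimates the tail probability P(X > 20) for X ~ Exp(1), about 2.1e-9 and hence hopeless for naive simulation, by sampling from a tilted exponential.

```python
import math
import random

def is_tail_prob(a, theta, n=100_000, seed=3):
    """Importance-sampling estimate of P(X > a) for X ~ Exp(1).

    Samples are drawn from the tilted density Exp(theta) with
    theta < 1 (heavier tail, so the rare event occurs often), and
    each observation is weighted by the likelihood ratio
    f(x)/g(x) = (1/theta) * exp(-(1 - theta) * x)
    to keep the estimator unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)            # draw under the new measure
        if x > a:
            total += math.exp(-(1.0 - theta) * x) / theta
    return total / n

est = is_tail_prob(a=20.0, theta=1 / 20.0)
print(est, math.exp(-20.0))                   # estimate vs exact value
```

The same mechanism underlies the queueing algorithms: the good change of measure is the one suggested by the most likely path to the rare event.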


NICOLAI, ROBIN

Econometric Institute
Erasmus University
P.O. Box 1738
3000 DR Rotterdam
rnicolai@few.eur.nl

Supervisor
Dekker

Title
A general framework for statistical inference on discrete event systems

Abstract
We present a framework for statistical analysis of discrete event systems. This framework combines tools such as simulation of marked point processes, likelihood methods, kernel density estimation and stochastic approximation, and is applicable even if conventional approaches fail due to the mathematical intractability of the discrete event system. The approach is illustrated with an application to modelling and estimating corrosion of steel gates in the Dutch Haringvliet storm surge barrier.




OZDEMIR, OZGE

Department of Operations Research and Game Theory
Tilburg University
P.O. Box 90153
5000 LE Tilburg
o.ozdemir@uvt.nl

Supervisors
Gurkan/Den Hertog

Title
Optimal threshold levels in stochastic fluid models via simulation-based optimization

Abstract
A number of important problems in production and inventory control involve optimization of multiple threshold levels or hedging points. We address the problem of finding such levels in a stochastic system whose dynamics can be modelled using generalized semi-Markov processes (GSMP). The GSMP framework enables us to compute several performance measures and their sensitivities from a single simulation run for a general system with several states and fairly general state transitions. We then use a simulation-based optimization method, sample-path optimization, for finding optimal hedging points. We report numerical results for systems with more than twenty hedging points and service-level type probabilistic constraints. In these numerical studies, our method performed quite well on problems which are considered very difficult by current standards. Some applications falling into this framework include designing manufacturing flow controllers, using capacity options and subcontracting strategies, and coordinating production and marketing activities under demand uncertainty.




PAULUS, JACOB JAN

Department of Applied Mathematics
University of Twente
P.O.Box 217
7500 AE Enschede
j.j.paulus@ewi.utwente.nl

Supervisor
Hurink

Title
Online scheduling of parallel jobs on two machines is 2-competitive

Abstract
We will show that for the problem of online scheduling of parallel jobs on two parallel machines there exists no online algorithm with competitive ratio strictly less than 2. In this problem jobs arrive one by one and are characterized by their processing time and the number of machines simultaneously required for processing (1 or 2 machines). As soon as a job arrives, it has to be scheduled irrevocably without knowing the characteristics of the future jobs. Preemption is not allowed and the objective is to minimize the makespan.
For the evaluation of an online algorithm ON, we use competitive analysis. For any sequence of jobs S, we compare the makespan of the online algorithm, C(ON), with the makespan of the optimal offline solution, C(OPT). An online algorithm is said to be ρ-competitive if sup_S C(ON)/C(OPT) ≤ ρ.
To begin with, we point out that a greedy algorithm which schedules the jobs upon arrival as early as possible has a competitive ratio of 2. This is not hard to see, since the total time a machine is left idle by such a greedy algorithm is smaller than the makespan of the optimal offline solution. To prove that there is no online algorithm with competitive ratio strictly less than 2, we construct an adversary job sequence in which jobs have alternating machine requirements of 1 and 2. When the number of jobs in the adversary sequence goes to infinity, we show that no online algorithm can have a competitive ratio strictly less than two. Therefore, the greedy algorithm is the best one can do for this online problem with only 2 machines.
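A bad instance for the greedy algorithm can be replayed in a few lines. The sketch below is illustrative with hypothetical job data; idle gaps in the schedule are not reused, which is the simplest reading of "as early as possible". On this three-job sequence the greedy variant pays almost twice the offline optimum.

```python
def greedy_schedule(jobs):
    """Greedy scheduling of parallel jobs on two machines.

    Each job is a pair (p, m): processing time p and number of
    machines required m in {1, 2}.  Jobs are appended as early as
    possible at the end of the current schedule; idle gaps are not
    reused.  Returns the makespan.
    """
    free = [0.0, 0.0]                  # next free time of each machine
    for p, m in jobs:
        if m == 2:                     # needs both machines at once
            start = max(free)
            free = [start + p, start + p]
        else:                          # put it on the least-loaded machine
            i = free.index(min(free))
            free[i] += p
    return max(free)

# two unit-width jobs separated by a tiny width-2 job
jobs = [(1.0, 1), (0.001, 2), (1.0, 1)]
greedy_ms = greedy_schedule(jobs)      # the unit jobs end up sequential
opt_ms = 1.001                         # offline: width-2 job first, then
                                       # the two unit jobs in parallel
print(f"greedy {greedy_ms}, optimal {opt_ms}, ratio {greedy_ms / opt_ms:.3f}")
```

Note this only exhibits the ratio for one greedy variant; the lower bound in the talk is an adaptive adversary argument against every online algorithm.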


RAI, VIVEK

Department of Mathematics
Vrije Universiteit Amsterdam
De Boelelaan 1081a
1081 HV Amsterdam
vivekr@few.vu.nl

Supervisor
Koole

Title
A multiphased approach for modeling and analysis of BitTorrent

Abstract

BitTorrent is one of the most popular protocols for content distribution and accounts for more than 15% of the total Internet traffic. In this paper, we present an analytical model of the protocol. Our work differs from previous works as it models the BitTorrent protocol specifically and not as a general file-swarming protocol. In our study, we observe that to accurately model the download process of a BitTorrent client, we need to split this process into three phases. We validate our model using simulations and real-world traces. Using this model, we study the efficiency of the protocol based on various protocol-specific parameters such as the maximum number of connections and the peer set size.
Furthermore, we study the relationship between changes in the system parameters and the stability of the protocol. Our model suggests that the stability of the BitTorrent protocol depends heavily on the number of pieces a file is divided into and on the arrival rate of clients to the network.




RENNEN, GIJS

Department of Operations Research and Game Theory
Tilburg University
P.O. Box 90153
5000 LE Tilburg
g.rennen@uvt.nl

Supervisor
Den Hertog

Title
Upper bounds on two-dimensional l2-maximin Latin hypercube designs

Abstract
Latin hypercube designs form a class of designs that are often used for finding an approximation of deterministic computer simulation models on a box-constrained domain. This type of simulation model is often used in engineering, logistics and finance to analyse and optimize the design of products or processes. The reason for approximating these models is that a computer simulation run is usually quite time-consuming to perform. This makes the model impractical when it comes to obtaining insight into the underlying process or optimizing its parameters. A common approach to overcome this problem is to determine a meta-model that approximates the relation between the input and output parameters of the computer simulation model. Such a meta-model is based on the information obtained from a limited number of simulation runs. The quality of the meta-model depends, among other things, on the choice of the simulation runs. A set of simulation runs is called a design.
Latin hypercube designs are a particular class of designs which satisfy the important non-collapsingness criterion. Another important criterion is space-fillingness, which means that the whole design space should be well represented by the design points. To accomplish this, we consider the maximin criterion, which states that the points should be chosen such that the minimal distance between any two points is maximal. Finding two-dimensional maximin LHDs can, however, also be time-consuming for large numbers of design points. Therefore, most results in this field concern approximate maximin LHDs, although true maximin LHDs are found for some cases.
In this presentation, I will present upper bounds on the separation distance of two-dimensional l2-maximin Latin hypercube designs. These bounds are useful for assessing the quality of approximate maximin LHDs by comparing their separation distances with the corresponding upper bounds.
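The separation distance that the upper bounds concern is easy to compute for a concrete design. The sketch below is a naive random search, purely illustrative and not the bounding technique of the talk: it builds two-dimensional LHDs from permutations and records the best minimal l2 distance found. An upper bound of the kind presented here tells how far such an approximate design can still be from optimal.

```python
import itertools
import math
import random

def separation(perm):
    """Minimal l2 distance of the 2-D Latin hypercube design
    {(i, perm[i])}: every row and column index is used exactly once,
    which is the non-collapsingness property."""
    pts = list(enumerate(perm))
    return min(math.dist(p, q) for p, q in itertools.combinations(pts, 2))

def best_lhd(n, trials=2000, seed=4):
    """Random-search approximation of an l2-maximin LHD on n points."""
    rng = random.Random(seed)
    best, best_sep = None, -1.0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)              # a random permutation = a random LHD
        s = separation(perm)
        if s > best_sep:
            best, best_sep = perm, s
    return best, best_sep

perm, sep = best_lhd(6)
print(perm, sep)
```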




SCHAKEL, LOLKE

Faculty of Economics
University of Groningen
P.O. Box 800
9700 AV Groningen
l.p.schakel@rug.nl

Supervisors
Sierksma/Gosh

Title
Constructing an interferometric array design from the missing set of baseline vector coordinates

Abstract
Modern radio telescopes are interferometer arrays: radio telescopes consisting of two or more dish antennas or sensor stations. The design of an interferometer array requires the optimization of a large set of design variables, among others the sensitivity, the resolution, the imaging reliability, the system reliability, and the system cost (i.e., antenna and cabling costs). In this talk we address the imaging reliability of the system, being the reliability of the radio source images produced by the system, and the system cost. We present a semi-greedy heuristic that determines an optimal array design with a minimum number of antennas for a given imaging reliability. We further minimize the number of antennas by excluding those that are redundant. The minimization is done by complete enumeration after the application of reduction techniques. We apply our new method to the LOFAR telescope, a wide-area multi-sensor radio array currently being built in the Northern Netherlands. The obtained results are compared to the nominal design of LOFAR in order to test the effectiveness and applicability of the method.




STREUTKER, MATTHIJS

Faculty of Economics
Department of Econometrics
University of Groningen
P.O. Box 800
9700 AV Groningen
m.h.streutker@rug.nl

Supervisors
Klein Haneveld/Van der Vlerk

Title
ALM modeling for Dutch pension funds: Indexation and new regulatory rules

Abstract
We describe a multistage recourse ALM model for pension funds, which covers defined benefit contracts and is tailored to the Dutch situation. The model deals with the developments currently faced by pension funds. The focus is on the modeling side of the problem. Pension fund management in the Netherlands is on the move. Years of apparent stability abruptly ended with the severe drop of the stock indices at the beginning of the new millennium. This unrest, also fed by the ageing discussion, encouraged a broadly conducted debate on the Dutch pension system. The first outcomes are a shift from final to average earnings schemes and the development of new regulation for pension funds. With the shift to average earnings schemes, the indexation decisions (correction of pension rights for inflation) of the pension fund board become very important, since solid indexation is indispensable for a good pension in an average earnings scheme. The new regulation, which is called the Financieel Toetsingskader (FTK) and is scheduled to be implemented in January 2007, is based on risk-based supervision and shows strong affinity with the Solvency and Basel Accords. The main features of the model are the accurate yet compact modeling of indexation decisions, the implementation of the FTK in the model, and an economically interpretable objective function.
Key words: ALM, stochastic programming, pension, indexation.




TENNEKES, MARTIJN

Department of Mathematics
Maastricht University
P.O. Box 616
6200 MD Maastricht
martijn.tennekes@math.unimaas.nl

Supervisor
Vrieze

Title
Network formation on directed networks

Abstract

We study a network formation game based on the one-way flow model by Bala and Goyal (2000}. In this model, a set of agents is connected through a directed network by which they are able to access each other's value of information. The game that we study starts with some initial network. Sequentially, agents are able to modify the current network by adding or deleting links to other agents. A payoff function is defined as a function of the network. We consider the following payoff function: agents gain some profit for every connected agent and pay some costs for every link they added. The game ends when the current network is Nash. We are interested in several properties of this game, like the characterization of Nash networks and whether this game converges or not. Our main result is that this game always converges when the payoff function satisfies certain properties.
Literature: V. Bala and S. Goyal, A non-cooperative model of network formation, Econometrica 68 (2000), 1181-1229.
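As a small illustration of the dynamics described above, the following sketch plays sequential best responses under the stated payoff (profit v per accessible agent, cost c per own link) until no agent wants to change. The values v = 1 and c = 0.6, the empty initial network, and the brute-force best-response search are assumptions for this toy version, not taken from the paper.

```python
from itertools import combinations

def reachable(links, i):
    """Agents that i can access via directed paths (excluding i itself)."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for w in links[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen - {i}

def payoff(links, i, v=1.0, c=0.6):
    """Profit v per accessible agent minus cost c per own link."""
    return v * len(reachable(links, i)) - c * len(links[i])

def best_response(links, i, n, v=1.0, c=0.6):
    """Enumerate all link sets of agent i; keep the current one on ties."""
    others = [j for j in range(n) if j != i]
    best, best_val = set(links[i]), payoff(links, i, v, c)
    for r in range(len(others) + 1):
        for combo in combinations(others, r):
            trial = dict(links)
            trial[i] = set(combo)
            val = payoff(trial, i, v, c)
            if val > best_val + 1e-9:   # strict improvement only
                best, best_val = set(combo), val
    return best

def play_until_nash(n=4, v=1.0, c=0.6, max_rounds=50):
    """Sequential best-response dynamics from the empty network."""
    links = {i: set() for i in range(n)}
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            br = best_response(links, i, n, v, c)
            if br != links[i]:
                links[i] = br
                changed = True
        if not changed:
            return links   # no agent wants to deviate: Nash network
    return links
```

For these parameters a directed cycle is a strict Nash network (every agent reaches everyone at the cost of one link), which gives a quick sanity check of the best-response routine.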


VOLKOVICH, YANA

Faculty of EEMCS
University of Twente
P.O.Box 217
7500 AE Enschede
y.volkovich@utwente.nl

Supervisors
Litvak/Boucherie

Title
In-Degree and PageRank of Web pages: Why do they follow similar power laws?

Abstract
PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that PageRank obeys a `power law' with the same exponent as the In-Degree. This paper presents a novel mathematical model that explains this phenomenon. The relation between PageRank and In-Degree is modelled through a stochastic equation, which is inspired by the original definition of PageRank and is analogous to the well-known distributional identity for the busy period in the $M/G/1$ queue. Further, we employ the theory of regular variation and Tauberian theorems to prove analytically that the tail distributions of PageRank and In-Degree differ only by a multiplicative factor, for which we derive a closed-form expression. Our analytical results are in good agreement with experimental data.
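The defining PageRank equation that the stochastic model starts from can be sketched as a standard power iteration. The damping factor d = 0.85 and the toy graph in the test are conventional assumptions for illustration, not part of the paper's analysis.

```python
def pagerank(out_links, d=0.85, tol=1e-10, max_iter=200):
    """Power iteration for PageRank: each page gets (1-d)/n base mass
    plus a d-weighted share of the rank of every page linking to it.
    `out_links` maps each node to a list of nodes it links to."""
    nodes = list(out_links)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        new = {v: (1.0 - d) / n for v in nodes}
        for u in nodes:
            outs = out_links[u]
            if outs:
                share = d * pr[u] / len(outs)
                for w in outs:
                    new[w] += share
            else:                       # dangling node: spread uniformly
                for w in nodes:
                    new[w] += d * pr[u] / n
        if sum(abs(new[v] - pr[v]) for v in nodes) < tol:
            return new
        pr = new
    return pr
```

On a small graph, the node with the largest In-Degree typically also receives the largest PageRank, the correlation whose tail behaviour the paper makes precise.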


WEIJ, WEMKE VAN DER

Center for Mathematics and Computer Science (CWI)
Kruislaan 413
1098 SJ Amsterdam
weij@cwi.nl

Supervisor
Van der Mei

Title
Performance optimization of Web servers

Abstract

The rise of the Internet and broadband communication technology has boosted the use of communication services that combine and integrate information from geographically distributed information systems. Consequently, application servers are expected to handle strongly increasing numbers of requests while meeting strict response-time requirements. Examples of such servers are Web servers, file servers, and database management systems. In many applications, a request typically consists of a sequence of intermediate processing steps. To handle incoming requests, application servers are typically equipped with a pool of threads, where each processing step is assigned to a single thread at a time. Such applications are modelled as queueing networks. We focus on models with shared resources and consider the issue of congestion of work. In this context, we look at dynamic thread allocation to guarantee performance metrics. To validate the model, we have tested the performance of our policies in an experimental setting on an Apache web server. The experimental results show that our dynamic thread policies lead to significant reductions of the response time, which demonstrates the practical usefulness of the results.
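The thread-pool sizing trade-off behind this abstract can be illustrated with the classical Erlang-C (M/M/c) formula, treating a pool of c threads as c parallel servers. This stylized model ignores the shared resources, multi-step requests, and dynamic reallocation that the paper actually studies; it only shows how response time depends on pool size.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean waiting time in an M/M/c queue (Erlang-C formula):
    arrival rate lam, per-thread service rate mu, c threads."""
    rho = lam / (c * mu)
    if rho >= 1:
        return float("inf")          # unstable: not enough threads
    a = lam / mu                     # offered load in Erlangs
    tail = a**c / (factorial(c) * (1 - rho))
    inv_p0 = sum(a**k / factorial(k) for k in range(c)) + tail
    p_wait = tail / inv_p0           # probability a request must queue
    return p_wait / (c * mu - lam)

def mean_response(lam, mu, c):
    """Mean response time = queueing delay + one service time."""
    return erlang_c_wait(lam, mu, c) + 1 / mu
```

For c = 1 the formula collapses to the familiar M/M/1 response time 1/(mu - lam), and adding threads strictly reduces the mean response time at a fixed load, which is the basic lever a dynamic thread policy exploits.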




WINANDS, ERIK

Department of Mathematics and Computer Science
Eindhoven University of Technology
P.O. Box 513
5600 MB Eindhoven
e.m.m.winands@tue.nl

Supervisors
Boxma/De Kok

Title
Branching-type polling systems with large setups

Abstract

The present paper considers the class of polling systems that allow a multi-type branching process interpretation. This class contains the classical exhaustive and gated policies as special cases. We present an exact asymptotic analysis of the delay distribution in such systems, when the setup times tend to infinity. The motivation to study these setup time asymptotics in polling systems is based on the specific application area of base-stock policies in inventory control. Our analysis provides new and more general insights into the behavior of polling systems with large setup times.
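A minimal simulation conveys why setup times dominate the delay in such systems. The sketch below implements a two-queue polling system under the exhaustive policy with deterministic setups, Poisson arrivals, and exponential service; the parameter values and the state-dependent switching rule are assumptions for this toy, not the paper's asymptotic analysis.

```python
import random
from collections import deque

def polling_mean_wait(lam=(0.3, 0.3), mu=1.0, setup=10.0, n=100000, seed=7):
    """Simulate a 2-queue exhaustive polling system: the server empties
    its current queue, then pays a deterministic setup when it switches
    to the other (non-empty) queue. Returns the mean waiting time
    (arrival until start of service)."""
    rng = random.Random(seed)
    total = sum(lam)
    # merged Poisson arrival stream, thinned over the two queues
    t, arr = 0.0, []
    for _ in range(n):
        t += rng.expovariate(total)
        arr.append((t, 0 if rng.random() < lam[0] / total else 1))
    q = (deque(), deque())
    i, clock, cur = 0, 0.0, 0
    total_wait, served = 0.0, 0
    while served < n:
        while i < n and arr[i][0] <= clock:   # admit arrivals up to now
            q[arr[i][1]].append(arr[i][0])
            i += 1
        if q[cur]:                            # exhaustive service
            a = q[cur].popleft()
            total_wait += clock - a
            clock += rng.expovariate(mu)
            served += 1
        elif q[1 - cur]:                      # switch: incur setup
            cur = 1 - cur
            clock += setup
        elif i < n:                           # idle until next arrival
            clock = arr[i][0]
        else:
            break
    return total_wait / served
```

Running the simulation with a small and a large setup time shows the mean delay growing with the setup, the regime whose exact asymptotics the paper derives.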




YUAN, TAOYING

Department of Econometrics
Vrije Universiteit Amsterdam
De Boelelaan 1081a
1081 HV Amsterdam
tyuan@feweb.vu.nl

Supervisor
Heidergott

Title
Gradient estimator of a multi-component maintenance system using measure-valued differentiation

Abstract

Individuals and companies depend more and more on complex systems for their daily functioning. The potential costs of breakdowns of these systems are quite high, so maintenance is crucial in order to increase their availability, and the following age-replacement policy is widely used in practice. Suppose the system consists of J > 1 components that are prone to failure. When a component fails, it is replaced immediately, and at the same time all components older than a threshold age θ are preventively replaced. Costs are associated with each maintenance action, such as replacement after failure or preventive replacement.
We derive a new gradient estimator, based on the measure-valued differentiation (MVD) approach, for the derivative of the long-run average cost with respect to θ. The performance of the MVD estimator is compared with that of the well-known finite-difference (FD) estimator (brute-force simulation). We show that the MVD estimator yields confidence intervals of the same width as the FD estimator, but in considerably less computation time. We also show that the MVD estimator can be applied to systems with a large number of components, for which the FD method is computationally infeasible. Furthermore, we show that when we apply our MVD estimator in a standard stochastic optimization routine, the convergence is twice as fast as when the FD estimator is used. The MVD estimator presented in this paper is easy to implement and considerably increases the efficiency of a Robbins-Monro type of stochastic approximation algorithm.
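The age policy and the FD benchmark it is compared against can be sketched as follows. Weibull lifetimes (an increasing failure rate for shape > 1, so preventive replacement can pay off) and all cost parameters are illustrative assumptions; the MVD estimator itself is beyond a short sketch.

```python
import random

def avg_cost(theta, J=5, shape=2.0, scale=1.0, c_fail=10.0, c_prev=2.0,
             horizon=10000.0, seed=11):
    """Simulated long-run average cost of the age policy: when a
    component fails it is replaced (cost c_fail) and, at that epoch,
    every other component older than theta is preventively replaced
    (cost c_prev each). Lifetimes are Weibull(scale, shape)."""
    rng = random.Random(seed)
    life = lambda: rng.weibullvariate(scale, shape)
    install = [0.0] * J               # installation time per component
    fail_at = [life() for _ in range(J)]
    cost = 0.0
    while True:
        j = min(range(J), key=fail_at.__getitem__)
        t = fail_at[j]
        if t > horizon:
            break
        cost += c_fail                # corrective replacement
        install[j] = t
        fail_at[j] = t + life()
        for k in range(J):
            if k != j and t - install[k] > theta:
                cost += c_prev        # preventive replacement
                install[k] = t
                fail_at[k] = t + life()
    return cost / horizon

def fd_gradient(theta, h=0.1, **kw):
    """Central finite-difference estimate of d(avg cost)/d(theta),
    using common random numbers (same seed in both runs)."""
    return (avg_cost(theta + h, **kw) - avg_cost(theta - h, **kw)) / (2 * h)
```

Note that each gradient estimate requires two full simulation runs per component-configuration; this is the computational burden, growing with the number of perturbed parameters, that the MVD estimator avoids.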
