Abstracts of the PhD presentations

Ayse Aslan
Supervisor: Iris Vis
Title: Efficient Planning to Personalize Learning

Abstract: Education is shifting from traditional one-size-fits-all models,
which offer standardized learning paths for everyone in a certain group
(e.g., age, school year), to personalized learning models. Throughout the
world, schools are implementing and experimenting with various personalized
learning models in which students have the freedom to customize their
learning paths and learn at their own pace with their own methods (see [1], [2] and [3]). In personalized learning, students are not tied to fixed groups; instead, activity groups are formed flexibly each time. Own-pace and own-method learning leads to flexible demand-driven planning: students demand learning activities when they need them, and schools flexibly form activity groups to meet these demands. I study the weekly flexible demand-driven
learning activity planning problem of personalized learning. I present an integrated MIP model to plan students and resources simultaneously. In this model, I exploit flexibility in planning to reduce the number of decision layers. However, this exploitation is not sufficient for the standard CPLEX MIP solver to find optimal solutions efficiently for real-life-sized problem instances. As alternatives, I utilize the solution-polishing heuristic within CPLEX and present a two-phase heuristic approach, which first builds initial solutions with greedy constructive heuristics and then improves these solutions with a local-search selection hyper-heuristic framework based on dynamic Thompson sampling. Selection hyper-heuristic frameworks consist of two components: a heuristic-selection mechanism and a pool of low-level heuristics. In my framework, I integrate the dynamic Thompson sampling (DTS) algorithm developed in [4] for dynamic multi-armed bandit problems into the heuristic-selection mechanism to intelligently guide the search process. The pool contains low-level heuristics commonly used for educational timetabling problems, as well as tailor-made heuristics. I test the CPLEX and heuristic approaches on 18 generated problem instances. Results indicate that both including tailor-made heuristics and using the DTS selection mechanism improve the performance of our local-search framework.
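
To make the selection mechanism concrete, here is a minimal sketch of DTS-guided local search in the spirit of [4]. The reward definition (1 if the chosen heuristic improves the incumbent), the cap C, and all function names are illustrative assumptions rather than details from the talk.

```python
import random

class DTSSelector:
    """Dynamic Thompson sampling over a pool of low-level heuristics.

    Each heuristic is treated as a Bernoulli arm; capping alpha + beta at C
    discounts old observations so the selector can track rewards that drift
    during the search (cf. Gupta et al., 2011)."""

    def __init__(self, n_arms, cap=100.0):
        self.alpha = [1.0] * n_arms
        self.beta = [1.0] * n_arms
        self.cap = cap

    def select(self):
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # reward: 1.0 if the chosen heuristic improved the solution, else 0.0
        self.alpha[arm] += reward
        self.beta[arm] += 1.0 - reward
        total = self.alpha[arm] + self.beta[arm]
        if total > self.cap:  # rescale so old evidence fades
            self.alpha[arm] *= self.cap / total
            self.beta[arm] *= self.cap / total

def improve(solution, cost, heuristics, budget=1000):
    """Local-search phase: DTS picks which low-level heuristic to apply."""
    selector = DTSSelector(len(heuristics))
    best, best_cost = solution, cost(solution)
    for _ in range(budget):
        arm = selector.select()
        candidate = heuristics[arm](best)
        candidate_cost = cost(candidate)
        selector.update(arm, 1.0 if candidate_cost < best_cost else 0.0)
        if candidate_cost < best_cost:
            best, best_cost = candidate, candidate_cost
    return best
```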
References:
[1] www.kunskapsskolan.com
[3] A. Kannan, G. van den Berg and A. Kuo, iSchedule to Personalize Learning, Interfaces 42(5):437-448, 2012.
[4] N. Gupta, O. Granmo and A. Agrawala, Thompson Sampling for Dynamic Multi-Armed Bandits, 10th International Conference on Machine Learning and Applications, 2011.

Riley Badenbroek
Supervisor: Etienne de Klerk
Title: Complexity Analysis of a Sampling-Based Interior Point Method for Convex Optimization

Abstract: We study an
interior point method that only assumes a membership oracle of the feasible
region. The barrier of choice, called the entropic barrier, has a natural
interpretation in terms of a family of exponential distributions over the
feasible set. The gradient and Hessian of this barrier can thus be
approximated by Monte Carlo methods such as hit-and-run sampling. This method is applicable to optimization problems over convex sets for which a barrier function is unknown or not efficiently computable, such as the Subtour Elimination Polytope.
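
As a rough illustration of the sampling step, the sketch below implements hit-and-run with only a membership oracle, using the standard exponential-family identity that the gradient of the log-partition function of the density proportional to exp(theta . x) over the feasible set is the mean of that distribution (the Hessian is its covariance). The bounding radius, tolerance, and sample sizes are assumptions.

```python
import math
import random

def hit_and_run_step(x, theta, member, radius=10.0, tol=1e-6):
    """One hit-and-run step for the density proportional to exp(theta . y)
    on the feasible set; `member` is the membership oracle."""
    d = [random.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(di * di for di in d))
    d = [di / norm for di in d]

    def chord_end(sign):  # bisection for the boundary along +/- d
        lo, hi = 0.0, radius
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if member([xi + sign * mid * di for xi, di in zip(x, d)]):
                lo = mid
            else:
                hi = mid
        return lo

    a, b = -chord_end(-1.0), chord_end(1.0)
    c = sum(ti * di for ti, di in zip(theta, d))  # 1-D exponential rate
    if abs(c) < 1e-12:
        t = random.uniform(a, b)
    else:  # inverse-transform sample from exp(c t) restricted to [a, b]
        u = random.random()
        t = math.log(math.exp(c * a) + u * (math.exp(c * b) - math.exp(c * a))) / c
    return [xi + t * di for xi, di in zip(x, d)]

def estimate_gradient(theta, member, x0, burn_in=500, n_samples=500):
    """Monte Carlo estimate of the mean of the exponential distribution,
    i.e. the gradient of its log-partition function."""
    x = x0
    for _ in range(burn_in):
        x = hit_and_run_step(x, theta, member)
    mean = [0.0] * len(x0)
    for _ in range(n_samples):
        x = hit_and_run_step(x, theta, member)
        mean = [m + xi / n_samples for m, xi in zip(mean, x)]
    return mean
```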

Marjolein Buisman
Supervisors: Rene Haijema, Jacqueline Bloemhof
Title: Inventory decisions for vertically differentiated products under stock-out-based substitution

Abstract: We
consider a retailer selling multiple vertically differentiated products. The retailer has to decide which products to stock, and in what quantities. The
initial choice of the consumer is based on the product that will result in
the highest utility. The utility function depends on the quality valuation of
the consumer as well as on the product price. When one of the products is out
of stock, the consumer will substitute towards another product, or will leave
the shop without buying a product. Substitution fractions are endogenously determined and depend on the utility functions. We develop a heuristic to quickly find the optimal solution of this complex problem. The heuristic
aggregates the products such that in every run of the heuristic, a
two-product case is solved. Our results show that including stock-out-based substitution in the ordering decisions changes the optimal order quantities for all products. Moreover, we show that considering inventory decisions next to the assortment decisions affects the final assortment. Furthermore, we show the effects of a change in consumer valuation, a change in the profit margin, and the newsvendor critical ratio on the optimal stocking decisions.
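
A minimal simulation of the choice model sketched above, assuming a linear utility (quality times valuation minus price) and uniformly distributed quality valuations; the paper's exact utility specification and parameters may differ.

```python
import random

def choose(products, stocked, valuation):
    """Return the index of the stocked product with the highest positive
    utility. Utility is assumed linear: quality * valuation - price; a
    consumer whose best stocked option has utility <= 0 leaves empty-handed."""
    best, best_util = None, 0.0
    for i in stocked:
        quality, price = products[i]
        util = quality * valuation - price
        if util > best_util:
            best, best_util = i, util
    return best  # None means a lost sale

def simulate_day(products, inventory, n_consumers=1000):
    sales, lost = [0] * len(products), 0
    stock = list(inventory)
    for _ in range(n_consumers):
        valuation = random.uniform(0.5, 1.5)  # heterogeneous quality valuation
        stocked = [i for i in range(len(products)) if stock[i] > 0]
        i = choose(products, stocked, valuation)  # stock-outs trigger substitution
        if i is None:
            lost += 1
        else:
            sales[i] += 1
            stock[i] -= 1
    return sales, lost

# Example: two vertically differentiated products given as (quality, price).
sales, lost = simulate_day([(1.0, 0.6), (1.4, 1.0)], inventory=[300, 200])
```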

Rowan Hoogervorst (Erasmus University Rotterdam)
Supervisors: Twan Dollevoet, Dennis Huisman
Title: An Efficient Heuristic for the Rolling Stock Rescheduling Problem

Abstract: In this talk, we consider the rolling stock rescheduling problem for a
passenger railway operator. Rolling stock rescheduling is performed in case a
disruption leads to changes in the timetable. In rolling stock rescheduling,
we then assign train units, i.e., rolling stock units, to the trips in this
new timetable. These train units can be of different types and units of
compatible types can be combined into compositions. We aim to assign the
compositions to the trips such that both passenger comfort and operational performance
are taken into account. We present a variable neighborhood
search heuristic to solve the rolling stock rescheduling problem. Our main search neighborhood relies on fixing the assignment for all but one of the rolling stock types and then iteratively improving the assignment for the remaining type. In addition, we perform a perturbation step to improve the assignment across the different train unit
types. We apply our heuristic to instances of Netherlands Railways (NS). The
results show that the heuristic is able to find high-quality solutions in a
reasonable amount of time. In particular, this allows rolling stock dispatchers to use our heuristic in real-time rescheduling.
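
The overall search loop could be organized as follows; `improve_single_type` and `perturb` stand in for the problem-specific re-optimization and perturbation steps, which the abstract does not spell out.

```python
def variable_neighborhood_search(assignment, types, improve_single_type,
                                 perturb, cost, max_rounds=50):
    """Fix all but one rolling stock type and re-optimize the remaining type
    in turn; when no single-type move improves, perturb across types.

    improve_single_type(assignment, t): best assignment reachable by
    re-optimizing type t only. perturb(assignment): re-mixes units across
    types. Both are assumed given."""
    best, best_cost = assignment, cost(assignment)
    for _ in range(max_rounds):
        improved = False
        for t in types:  # main neighborhood: one type at a time
            candidate = improve_single_type(best, t)
            if cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
                improved = True
        if not improved:
            candidate = perturb(best)  # escape by re-mixing across types
            if cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
            else:
                break
    return best
```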

Dylan Huizing (CWI)
Supervisors: Guido Schäfer, Rob van der Mei, Sandjai Bhulai
Title: The Median Routing Problem: routing over non-urgent jobs under optimal emergency response

Abstract: The Dutch railway company has a department for Incident Response: whenever
a malfunction or other physical incident occurs on the railway, incident
responders are employed to resolve the issue. Unlike traditional emergency
responders, they can spend any remaining time on non-urgent
(incident-preventing) jobs in the network, such as routine inspections and
crossroad patrols. In fact, if these jobs are timetabled well, they can help
ensure a good spread of incident responders over the area at all times, thus
lowering response time to potential emergencies. If the
challenge were just to distribute emergency responders over a network to
minimise expected emergency response time, this would be a k-medians problem
(K-MED). If the challenge were just to route agents efficiently to fit many
non-urgent jobs in a shift, this would be a Vehicle Routing Problem with Time
Windows (VRPTW). The problem at hand, however, lies on an unexpected boundary
between the two: in the Median Routing Problem (MRP), agents must be routed
over jobs, much like VRPTW, but with the response time minimisation objective
of K-MED (summed over discrete time). MRP has
K-MED and VRPTW as special cases, but is in practice much more difficult. In
this talk, the Median Routing Problem will be introduced as a model, its
theoretical and empirical complexity will be discussed and several exact and
heuristic solution methods will be presented. All colleagues are welcome to come and offer their ideas and perspectives on this challenging problem.
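
One plausible way to write the MRP objective (our notation, not necessarily the author's): with discrete time steps t in T, potential incident locations v in V with weights w_v, agents A, and pos_a(t) the location of agent a at time t as implied by its route,

```latex
\min \; \sum_{t \in T} \sum_{v \in V} w_v \, \min_{a \in A} d\bigl(v, \mathrm{pos}_a(t)\bigr)
```

subject to each agent following a feasible route over the assigned non-urgent jobs and their time windows.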

Vincent Karels (Eindhoven University of Technology)
Supervisors: L.P. Veelenturf, T. van Woensel
Title: An exact algorithm for a stochastic vehicle routing problem

Abstract: In the modern world,
competition, public expectations and regulating authorities put a lot of
pressure on logistics service providers. The challenge to achieve sustainable
physical distribution practices has become harder and harder to solve. The
scientific community has subsequently dedicated substantial effort to finding
solutions to this challenge, especially focusing on one of the core elements
of physical distribution: the routing. During operations, logistics service providers face uncertainty; the formulated routing plan may become infeasible, forcing decisions that incur extra costs in order to return to feasibility. Problems that consider these uncertainties are known as stochastic routing problems, and we investigate such a problem. Consider a logistics service provider that provides distribution
services for a set of customers. These customers are subdivided into two
categories, which are based on the time period in which they inform the
logistics service provider of their orders. Customers of the first category
inform the logistics service provider about their orders two days in advance.
In return, on the same day, these customers want information about their
estimated time of arrival, as well as which driver is assigned to their
order. At this time, customers of the second category are stochastic in both
presence and demand, as they order one day in advance. The objective is to
minimize the final routing cost for all customers, constrained by the
information given to customers of the first category. For this problem, an exact algorithm is presented.
Numerical results and comparisons to existing problems in the literature will be presented.
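
Abstracting from the algorithm itself, the structure described above matches a generic two-stage stochastic program, where x fixes the commitments announced to first-category customers (drivers and estimated arrival times) and xi captures the random presence and demand of second-category customers; this formulation is our reading of the abstract, not the authors' notation:

```latex
\min_{x \in X} \; c^{\top} x + \mathbb{E}_{\xi}\bigl[Q(x, \xi)\bigr],
\qquad
Q(x, \xi) = \min_{y \in Y(x, \xi)} q(\xi)^{\top} y
```

Here the recourse decision y re-optimizes the routes once the second-category orders are revealed, constrained by the first-stage commitments.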

Jan Klein (CWI)
Supervisors: Rob van der Mei, Sandjai Bhulai, Mark Hoogendoorn
Title: Detecting Network Intrusion Beyond 1999: Applying Machine Learning Techniques to a Partially Labeled Cybersecurity Dataset

Abstract: This talk demonstrates how different machine
learning techniques performed on a recent, partially labeled
dataset (based on the Locked Shields 2017 exercise) and which features were
deemed important. Moreover, a cybersecurity expert analyzed
the results and validated that the models were able to classify the known
intrusions as malicious and that they discovered new attacks. In a set of 500 detected anomalies, 50 previously unknown intrusions were found. Given that such observations are uncommon, this indicates how well an unlabeled dataset can be used to construct and evaluate a network intrusion detection system.
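
The talk's specific models and features are not reproduced here, but one generic recipe for such a partially labeled setting combines a supervised classifier on the labeled part with an unsupervised anomaly detector on all traffic; the estimator choices and parameters below are illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def fit_partially_labeled(X, y):
    """X: feature matrix; y: 1 = malicious, 0 = benign, -1 = unlabeled."""
    labeled = y >= 0
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[labeled], y[labeled])   # supervised part: known labels only
    iso = IsolationForest(n_estimators=200, random_state=0)
    iso.fit(X)                        # unsupervised part: all records
    return clf, iso

def top_anomalies(iso, X, k=500):
    # score_samples: lower means more anomalous, hence ascending sort
    return np.argsort(iso.score_samples(X))[:k]
```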

Stefan Klootwijk
Supervisor: Bodo Manthey
Title: Probabilistic Analysis of Optimization Problems on Generalized Random Shortest Path Metrics

Abstract: Simple heuristics often show a remarkable performance in practice for
optimization problems. Worst-case analysis often falls short of explaining
this performance. Because of this, "beyond worst-case analysis" of algorithms
has recently gained a lot of attention, including probabilistic analysis of
algorithms. The
instances of many optimization problems are essentially a discrete metric
space. Probabilistic analysis for such metric optimization problems has
nevertheless mostly been conducted on instances drawn from Euclidean space,
which provides a structure that is usually heavily exploited in the analysis.
However, most instances from practice are not Euclidean. Little work has been
done on metric instances drawn from other, more realistic, distributions.
Some initial results have been obtained by Bringmann
et al. (Algorithmica, 2013), who have used
random shortest path metrics on complete graphs to analyze
heuristics. The goal of this talk is to generalize these findings to non-complete graphs, especially Erdős-Rényi random graphs. A random
shortest path metric is constructed by drawing independent random edge
weights for each edge in the graph and setting the distance between every
pair of vertices to the length of a shortest path between them with respect
to the drawn weights. For such instances, we prove that the greedy heuristic for the minimum-distance maximum matching problem, the nearest neighbor and insertion heuristics for the traveling salesman problem, and a trivial heuristic for the k-median problem all achieve a constant expected approximation ratio. Additionally, we show a polynomial upper bound on the expected number of iterations of the 2-opt heuristic for the traveling salesman problem.
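
A minimal construction of such an instance, assuming standard exponentially distributed edge weights; for an edge probability p above the connectivity threshold, the resulting metric is finite with high probability.

```python
import heapq
import math
import random

def random_shortest_path_metric(n, p, rng=random):
    """Erdos-Renyi graph G(n, p) with i.i.d. Exp(1) edge weights; the metric
    is the all-pairs shortest-path distance with respect to these weights."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                w = rng.expovariate(1.0)
                adj[u].append((v, w))
                adj[v].append((u, w))
    dist = [[math.inf] * n for _ in range(n)]
    for s in range(n):  # Dijkstra from every source
        dist[s][s] = 0.0
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[s][u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[s][v]:
                    dist[s][v] = d + w
                    heapq.heappush(heap, (d + w, v))
    return dist

def nearest_neighbor_tour(dist):
    """Nearest neighbor TSP heuristic on the resulting metric."""
    n = len(dist)
    tour, seen = [0], {0}
    while len(tour) < n:
        u = tour[-1]
        v = min((x for x in range(n) if x not in seen),
                key=lambda x: dist[u][x])
        tour.append(v)
        seen.add(v)
    return tour
```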

Xianghui Li
Supervisor: Marc Uetz
Title: Stabilization of the Shapley value for cooperative games

Abstract: In this paper, we explore a combination of coalition formation and stability for cooperative games. We assume that, for a given allocation rule, it is possible for a set of players to improve their payoffs if the worth of the coalition they form exceeds the sum of their individual payoffs. In general, a solution for cooperative games that is immune to any potential blocking coalition may not exist, because the core can be empty. To resolve this difficulty, we introduce the concept of a levels core, with the aid of a levels structure. We also show how to construct an element of the levels core by a procedure that can be interpreted as the "stabilization" of the Shapley value, and which works for all superadditive games, even if the core is empty. Moreover, the outcome of the procedure even constitutes a strong Nash equilibrium, i.e., for each level of the levels structure, no subset of 'players' can increase their gains by joining other coalitions. Finally, the same procedural idea is applied to any situation where the levels structure is exogenously given, and a corresponding value is proposed.
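
The levels-core construction is not reproduced here, but as a reference point, the Shapley value that the procedure stabilizes can be computed exactly for small games; the three-player game below (our own example) is superadditive with an empty core.

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Exact Shapley value of a TU game: average marginal contribution of
    each player over all player orders. Exponential in n, so small games only."""
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: value[p] / n_fact for p in players}

def v(S):
    # Symmetric game: every 2-coalition is worth 0.8, the grand coalition 1.0.
    # The core is empty, since the pair constraints imply a total of at least
    # 3 * 0.8 / 2 = 1.2 > v(N) = 1.0.
    return {0: 0.0, 1: 0.0, 2: 0.8, 3: 1.0}[len(S)]

print(shapley_value([1, 2, 3], v))  # 1/3 for each player, by symmetry
```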

Youri Raaijmakers
Supervisors: Onno Boxma, Sem Borst
Title: Reducing latency via redundancy scheduling

Abstract: Redundancy scheduling has emerged as a
powerful strategy for improving response times in parallel-server systems.
The key feature in redundancy scheduling is replication of a job upon arrival
by dispatching replicas to different servers. Redundant copies are abandoned
as soon as the first of these replicas finishes service. By creating multiple
service opportunities, redundancy scheduling increases the chance of a fast
response from a server that is quick to provide service and mitigates the
risk of a long delay incurred when a single selected server turns out to be
slow. We introduce a quite general workload model,
in which job sizes have some probability distribution while the speeds
(slowdown factors) of the various servers for a given job are allowed to be
interdependent and non-identically distributed. This allows not only for inherent
speed differences among different servers, but also for affinity relations. We further propose two novel redundancy
policies, so-called delta-probe-d policies, where d probes of a fixed, small size are created for each incoming job and assigned to d servers selected uniformly at random. As soon as the first of these d probe tasks finishes, the actual job is assigned for execution, at the same speed, to the corresponding server, and the other probe tasks are abandoned. We also consider a delta-probe-d policy in which the probes receive preemptive-resume priority over regular jobs. The aim of these policies is to retain the benefits of redundancy-d policies while accounting for systematic speed differences and mitigating the risks of running replicas of the full job simultaneously for long periods of time. Analytical and numerical results are presented to evaluate the effect of
both probing policies on the job latency, and to illustrate the potential performance improvements.
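
A bare-bones simulation of the delta-probe-d rule for a single job on idle servers (queueing effects are ignored); the per-server slowdown distributions are illustrative assumptions.

```python
import random

def probe_d_response_time(d, n_servers, probe_size, job_size, slowdown):
    """Simulate one job under the delta-probe-d policy on idle servers.

    slowdown(server) draws the server's slowdown factor for this job; the
    same factor is reused when the real job runs on the winning server,
    mirroring the 'with the same speed' rule described above."""
    servers = random.sample(range(n_servers), d)
    factors = {s: slowdown(s) for s in servers}
    winner = min(servers, key=lambda s: probe_size * factors[s])
    probe_time = probe_size * factors[winner]
    return probe_time + job_size * factors[winner]

# Example: heterogeneous servers with exponential slowdowns; faster servers
# (larger rate) tend to draw smaller slowdown factors.
speeds = [1.0, 1.5, 2.0, 3.0]
est = sum(probe_d_response_time(2, 4, 0.05, 1.0,
                                lambda s: random.expovariate(speeds[s]))
          for _ in range(10_000)) / 10_000
```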

Jason Rhuggenaath
Supervisors: Uzay Kaymak, Yingqian Zhang, Alp Akçay
Title: Dynamic pricing and learning with censored demand and limited price changes

Abstract: In this work we study a dynamic
pricing problem with demand censoring and limited price changes. In our
problem there is a seller of a single product that aims to maximize revenue
over a finite sales horizon. However, the seller only has limited information
about demand for his product. We assume that the demand is the sum of a non-increasing function of the price and an error term with mean zero. The
seller does not know the form of the mean demand function but does have some
limited knowledge. We assume that the seller has a hypothesis set of mean
demand functions and that the true mean demand function is an element of this
set. The seller does not know the distribution of the error term a priori,
except that the distribution is bounded. Furthermore, the seller faces a
business constraint on the number of price changes that is allowed during the
sales horizon. More specifically, the seller is allowed to change the price
at most m times during the sales horizon. We furthermore assume that the
seller can only observe the sales (minimum between realized demand and
available inventory) and thus that demand is censored. In each period the
seller can replenish his inventory to a particular level and inventory left
at the end of a period is lost and has no value. The objective of the seller is to set the best price and
inventory level in each period of the sales horizon in order to maximize his
profit. The profit is determined by the revenue of the sales minus holding
costs and costs for lost sales (unsatisfied demand). In determining the best price and inventory level, the seller faces an exploration-exploitation trade-off. The seller has to experiment with different prices and inventory levels in order to learn from historical sales data, which contain information about market responses to the offered prices. On the other hand, the seller also needs to exploit what he has learned and set prices and inventory levels that are optimal given the
information collected so far. We propose a policy for this problem and study
the performance using numerical experiments. The initial results indicate
that the regret of the policy is sublinear with respect to the sales horizon.
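
The observation model can be sketched as follows; the demand function, noise bounds, and cost parameters are illustrative, and the proposed policy itself is not reproduced.

```python
import random

def run_period(price, stock_level, mean_demand, noise,
               holding_cost, lost_sale_cost):
    """One sales period with censored demand.

    mean_demand(price) plays the role of the (unknown) mean demand function
    and noise() draws a bounded zero-mean error; names are illustrative."""
    demand = max(0.0, mean_demand(price) + noise())
    sales = min(demand, stock_level)   # the seller only observes the sales
    leftover = stock_level - sales     # lost at the end of the period
    unmet = demand - sales             # unobservable when demand is censored
    profit = (price * sales - holding_cost * leftover
              - lost_sale_cost * unmet)
    observed = (price, sales, sales >= stock_level)  # censoring flag
    return profit, observed

profit, obs = run_period(
    price=5.0, stock_level=80,
    mean_demand=lambda p: 120 - 10 * p,
    noise=lambda: random.uniform(-5, 5),
    holding_cost=0.2, lost_sale_cost=1.0,
)
```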

Martijn Schoot Uiterkamp (University of Twente, m.h.h.schootuiterkamp@utwente.nl)
Supervisors: Johann Hurink, Gerard Smit
Title: A framework for convex multi-stage optimization problems with applications in decentralized energy management

Abstract: Multi-stage
optimization problems under uncertain cost functions occur frequently in many
applications such as production planning, portfolio optimization, and
scheduling. Several concepts exist to solve this type of problem, e.g.,
robust optimization, stochastic programming, and online optimization. These
approaches have in common that at each stage, a decision is made before the
corresponding cost of this decision is revealed. However, in some
applications, this order is reversed, meaning that the cost function for the
given stage becomes known before making the corresponding decision. This
happens, e.g., when the cost of a decision depends on real-time data
measurements. As a consequence, the revealed cost function can be taken into
account when making the decision. In
this talk, we present a framework for convex multi-stage optimization
problems that have the aforementioned relation between cost function and
decision making. In this approach, we split up the problem into subproblems
for each stage. The key feature of our approach is that we compress all
information on the uncertain future cost functions into characterizing
values. At each stage, these values serve as input for the subproblem of the
current stage. A big advantage of this approach is that no prediction of the
cost functions is required. This means that we do not need to directly
predict the uncertain data underlying the cost functions, which is often very
difficult in practice. Instead, we only need to predict the characterizing
values, which is much easier in many applications. We apply our framework to specific device-level optimization problems in decentralized energy management for smart grids and show that specific problem structures can be exploited to significantly reduce the complexity of the subproblems.
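
In pseudocode form, the framework's stage-wise loop might look as follows; all names are placeholders, since the abstract does not fix an interface.

```python
def solve_multistage(stages, predict_characterizing_values, solve_stage, state):
    """Stage-wise scheme following the framework described above.

    At each stage the realized cost function is revealed before deciding;
    the only forecast needed is the vector of characterizing values that
    summarizes the uncertain future cost functions."""
    decisions = []
    for k, stage in enumerate(stages):
        cost_k = stage.reveal_cost()  # revealed before the decision is made
        summary = predict_characterizing_values(stages[k + 1:])  # compressed future
        decision, state = solve_stage(cost_k, summary, state)
        decisions.append(decision)
    return decisions
```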

Albert Schrotenboer
Supervisors: Iris Vis, Evrim Ursavas
Title: Valid Inequalities and a Branch-and-Cut Framework for Asymmetric Multi-Depot Routing Problems
(Authors: Michiel A.J. uit het Broek, Albert H. Schrotenboer, Bolor Jargalsaikhan, Kees Jan Roodbergen, Leandro C. Coelho)

Panfei Sun
Supervisor: Marc Uetz
Title: The general compromise value for cooperative games with transferable utility

Abstract: In this paper, we introduce a
solution concept, the general compromise value, for cooperative games with transferable utility. Different from the existing
compromise values in the literature, we define the general compromise value with respect to a set of potential payoffs, of which the maximal and minimal potential payoffs are regarded as the upper and lower bounds for the players. More
specifically, the unique pre-imputation on the straight line segment between
these two vectors is defined as the general compromise value. By considering
different sets of potential payoffs, the corresponding general compromise
value coincides with several classical solution concepts, such as the
tau-value and the chi-value, which are both well-known compromise solution
concepts. Moreover, we provide an axiomatization of the general compromise value based on relative invariance under S-equivalence and the maximal (minimal) proportional property.
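
In symbols (our notation, not the paper's): writing M(v) and m(v) for the maximal and minimal potential payoff vectors, the general compromise value is the unique efficient point on the segment between them,

```latex
GC(v) = \lambda\, M(v) + (1 - \lambda)\, m(v),
\qquad
\lambda \in [0, 1] \ \text{such that} \ \sum_{i \in N} GC_i(v) = v(N),
```

mirroring the way the tau-value compromises between its upper and lower vectors.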

Stefan ten Eikelder
Supervisor: Dick den Hertog
Title: Adjustable robust treatment-length optimization in radiation therapy

Abstract: One
of the main promises for the future of radiation oncology treatment planning
is adaptive treatments. Adaptive treatments aim to monitor treatment, acquire
mid-treatment biomarkers, and adapt the remainder of the treatment course
according to observed biomarkers. One aspect of adaptive treatments is
optimally designing the remaining treatment dose and duration once biomarker
data is observed; this gives rise to an adjustable optimization problem. We
consider robust adaptive treatment-length optimization based on mid-treatment
acquired biomarker information, with emphasis on the inexact nature of the
acquired biomarker data. We employ adjustable robust optimization (ARO) methodology to formulate and solve the resulting optimization problems, for both exact and inexact biomarker data. We are able to derive explicit formulas for the second-stage decisions. A numerical study is performed based on data
from liver patients, and results are compared to standard nominal and robust
folding horizon methods. In case of inexact biomarker data, nominal methods
lead to severe healthy tissue dose tolerance violations. Furthermore, by
taking into account the improvement in biomarker accuracy as treatment
progresses, we determine the optimal moment of biomarker observation. We find
that, in order to achieve considerable improvements with treatment
adaptations, it is essential to acquire high-quality biomarker data early on during treatment.

Rik Timmerman
Supervisors: Marko Boon, Johan van Leeuwaarden, Onno Boxma, Ivo Adan
Title: Dimensioning of the Fixed-Cycle Traffic-Light queue

Abstract: The Fixed-Cycle Traffic-Light (FCTL) queue is the standard model for intersections with static signaling. Vehicles arrive at a lane at the intersection, controlled by a traffic light, form a queue and depart during fixed cycles. Recently, a Pollaczek contour-integral expression for the generating function of the queue length in a single lane has been derived. This result can be used to obtain results on the queue-length distribution in, e.g., the Quality-and-Efficiency Driven (QED) regime, when choosing a proper scaling. The QED regime ensures, among other things, that even when the load on the system tends to 1, there is a strictly positive probability that arriving vehicles experience no delay.
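
A simplified slot-by-slot simulation of the FCTL queue (the Pollaczek contour-integral analysis mentioned above is exact and not reproduced here); Poisson arrivals per slot and at most one departure per green slot are the assumed dynamics.

```python
import math
import random

def poisson(rate):
    """Inverse-transform Poisson sampler (Knuth), to avoid dependencies."""
    l, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def fctl_cycle(queue, green_slots, red_slots, lam):
    """One cycle of the FCTL queue: Poisson(lam) arrivals per slot, at most
    one departure per green slot, none during red (a simplified version of
    the standard FCTL assumptions)."""
    for _ in range(green_slots):
        queue += poisson(lam)
        if queue > 0:
            queue -= 1
    for _ in range(red_slots):
        queue += poisson(lam)
    return queue

def mean_end_of_cycle_queue(green_slots, red_slots, lam, cycles=100_000):
    queue, total = 0, 0
    for _ in range(cycles):
        queue = fctl_cycle(queue, green_slots, red_slots, lam)
        total += queue
    return total / cycles

# Stability requires the load (arrivals per cycle / green slots) below 1;
# here the load is 0.45 * 20 / 10 = 0.9.
print(mean_end_of_cycle_queue(green_slots=10, red_slots=10, lam=0.45))
```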

Rolf van Lieshout (Erasmus University Rotterdam)
Supervisors: Paul Bouman, Dennis Huisman
Title: Determining and Evaluating Alternative Line Plans in (Near) Out-of-Control Situations

Abstract: From time to
time, large disruptions cause heavily utilized railway networks to get into a state of (near) out-of-control, in which hardly any trains are able to run as a result of a lack of accurate and up-to-date information available to dispatchers. In this paper, we develop and test disruption management
strategies for dealing with these situations. First, we propose an algorithm
that finds an alternative line plan that can be operated in the affected part
of the railway network. As the line plan should be feasible with respect to
infrastructural and resource restrictions, we integrate these aspects in the
algorithm in a Benders-like fashion. Second, to operate the railway system
within the disrupted region, we propose several local train dispatching strategies
requiring varying degrees of flexibility and coordination. Computational
experiments based on disruptions in the Dutch railway network indicate that
the algorithm performs well, finding workable and passenger-oriented line
plans within a couple of minutes. Moreover, we demonstrate in a simulation study that the produced line plans can be operated smoothly without depending on central coordination.

Bernard Zweers (CWI)
Supervisors: Rob van der Mei, Sandjai Bhulai, Guido Schäfer
Title: Cluster capacitated multiple knapsack: a journey towards an approximation algorithm

Abstract: In this presentation, we will introduce a new extension of the
Multiple Knapsack Problem (MKP) and give an approximation algorithm for this
problem. In the MKP, there are multiple knapsacks with a fixed capacity to
which a set of items may be assigned. Each item has a given profit and a
weight and can be assigned to at most one knapsack. The goal of the MKP is to
maximize the profit of assigned items such that the capacities of the
knapsacks are not violated. The decision version of the MKP is strongly NP-complete, but a simple greedy algorithm gives at least half of the maximum possible profit, i.e., there exists a 0.5-approximation
algorithm. We introduce a natural extension of the MKP,
which we will call the Cluster Capacitated Multiple Knapsack Problem (CCMKP).
In this problem, each knapsack is assigned to a cluster, which might contain
multiple knapsacks. Like the knapsacks, each cluster has a capacity, which is smaller than the sum of the capacities of the knapsacks it contains. The items are still assigned to knapsacks, but in the CCMKP both the total weight assigned to a knapsack may not exceed the knapsack capacity and the total weight assigned to all knapsacks in a cluster may not exceed the capacity of the cluster. We will first show that the 0.5-approximation
algorithm for the MKP may give an infeasible solution. However, we propose a
similar greedy algorithm which is still a 0.5-approximation algorithm for the
CCMKP.
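
A sketch of the flavor of greedy algorithm discussed above, checking both knapsack and cluster capacities before each assignment; the talk's exact algorithm and the 0.5-approximation proof are not reproduced here.

```python
def greedy_ccmkp(items, knapsacks, cluster_of, cluster_cap):
    """Density-ordered greedy for the CCMKP.

    items: list of (profit, weight); knapsacks: list of capacities;
    cluster_of[k]: cluster index of knapsack k; cluster_cap: list of
    cluster capacities."""
    cap = list(knapsacks)
    ccap = list(cluster_cap)
    assignment = {}
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in order:
        _, weight = items[i]
        for k in range(len(cap)):
            c = cluster_of[k]
            # both the knapsack and its cluster must have room
            if weight <= cap[k] and weight <= ccap[c]:
                cap[k] -= weight
                ccap[c] -= weight
                assignment[i] = k
                break
    return assignment

# Example: two knapsacks in one cluster whose capacity (8) is below the sum
# of the knapsack capacities (5 + 5).
assignment = greedy_ccmkp(
    items=[(10, 4), (8, 3), (7, 3), (3, 2)],
    knapsacks=[5, 5],
    cluster_of=[0, 0],
    cluster_cap=[8],
)
```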