Conference 2019
Abstracts and Bios of the NGB/LNMB Seminar Speakers

Deep Learning and its impact on Operations Research

Marco Lübbecke (RWTH Aachen)

Short Bio: Marco Lübbecke is a full professor and chair of operations research at RWTH Aachen University, Germany. He received his Ph.D. in applied mathematics from TU Braunschweig in 2001 and held positions as assistant professor for combinatorial optimization and graph algorithms at TU Berlin and as visiting professor for discrete optimization at TU Darmstadt.
Marco's research and teaching interests are in computational integer programming and discrete optimization, covering the entire spectrum from fundamental research and methods development to industry-scale applications. A particular focus of his work is on decomposition approaches for exactly solving large-scale real-world optimization problems. This touches on mathematics, computer science, business, and engineering alike, and resonates with his appreciation for fascinating interdisciplinary challenges.

Title: (How) does Machine Learning impact Operations Research?

Abstract: Machine learning (ML), artificial intelligence (AI), and operations research (OR) are long-standing and well-established research fields. From a research perspective, there are some natural directions in which ML/AI and OR may benefit from each other (and we sketch a few). From an applications perspective, there is the natural ordering of ML/AI ("predictive") first, OR ("prescriptive") second, but other cases are conceivable. In this talk, I will discuss some research, applications, and perspectives that I personally find challenging, interesting, exciting, or boring. In particular, a recent surge of interest ("hype" is not exaggerated) pushed ML and AI into companies/startups and newspapers. Not so much OR. I discuss the implications for "our" field (which I still think is OR).
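As a toy sketch of the "predictive first, prescriptive second" ordering (all data, numbers, and function names below are invented for illustration), a simple least-squares forecast can feed a greedy allocation step:

```python
# Toy "predict, then prescribe" pipeline: a least-squares fit plays the
# ML/predictive role, a greedy knapsack-style allocation plays the
# OR/prescriptive role. All data and names are invented for illustration.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (the predictive step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def allocate(demands, profits, capacity):
    """Serve the most profitable demand first (the prescriptive step)."""
    plan = [0] * len(demands)
    for i in sorted(range(len(demands)), key=lambda i: -profits[i]):
        plan[i] = min(demands[i], capacity)
        capacity -= plan[i]
    return plan

# Weekly sales history for product 0 -> forecast week 5, then allocate
# limited stock across three products by unit profit.
a, b = fit_linear([1, 2, 3, 4], [10, 14, 18, 22])
forecast = round(a * 5 + b)
plan = allocate([forecast, 15, 8], [3.0, 5.0, 4.0], capacity=40)
print(forecast, plan)  # prints: 26 [17, 15, 8]
```

Replacing the greedy allocation by an exact integer program is where OR proper takes over.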


Eric Postma (Tilburg University, JADS)

Short Bio: Eric Postma is a full professor of Artificial Intelligence at the Cognitive Science & AI department of Tilburg University. He is also affiliated with the Jheronimus Academy of Data Science (JADS) in 's-Hertogenbosch, a joint initiative of Eindhoven University of Technology and Tilburg University. Postma's research focuses on the development and application of machine learning methods to the analysis of signals, images, and data in general. He has been working on neural network and machine learning methods since 1990 and has a particular interest in understanding human perception and cognition through neural network models.

Title: The true AI revolution

Abstract: The entire AI revolution is carried by a 30-year-old method, which is nowadays referred to as deep learning. The effectiveness of deep learning gave rise to revolutionary image, signal, and text recognition performance. In the past five years, impressive strides have been made on these tasks. The successes are mainly attributable to increased volumes of data and advances in computational resources. More importantly, the effectiveness of deep learning provides interesting insights into the theory of machine learning. In the presentation, the true AI revolution will be explained and contrasted with the overhyped media image of AI. Starting from a brief history of AI and neural networks, the presentation will zoom in on deep learning and the applied and fundamental research that is fuelling the true AI revolution.


Johan van Rooij (CQM)

Short Bio: Johan van Rooij works as a Senior Consultant and Senior Data Scientist at CQM in Eindhoven and is a part-time assistant professor at Utrecht University. At CQM, Johan works on practical algorithmic challenges, both in the optimization (OR) field and in the Machine Learning field. He has developed algorithms in the broadest sense: from algorithms controlling automated guided vehicles and algorithms optimising logistic problems, to image processing using deep learning and a search engine for technical drawings. At Utrecht University, Johan works on theoretical graph algorithms in the Algorithms and Complexity group of Professor Hans L. Bodlaender, the group in which he did his PhD under the supervision of Jan van Leeuwen and Hans L. Bodlaender.

Title: Safe and efficient inspection of railway tracks using deep learning models (Winner Hendrik Lorentz Prize for innovative applications of Data Science)

Abstract: The Netherlands has the most heavily used railway network in Europe. Every day, travellers make 1.1 million trips by train, 152 million train kilometres in total. This indicates the importance of efficient and thoughtful usage of the railway infrastructure. By monitoring and inspecting the metal railway tracks and the switches, early-stage detection of defects is possible, allowing cheaper and more timely interventions. This not only increases safety, but also improves the availability of the heavily used tracks.
Inspectation and CQM have developed an image processing solution for the automatic inspection of railway tracks. In the old days, inspectors had to walk over the tracks to inspect them, which was not without danger. Today, specially equipped trains with cameras take detailed pictures of the railway tracks. This yields a huge amount of imagery, which still has to be examined by the same inspectors. CQM and Inspectation have drastically improved this process using Deep Learning techniques, building a solution for the automatic detection of possible defects, saving the inspectors a lot of time and allowing cheaper and more frequent railway inspections. The project was awarded the Hendrik Lorentz Prize at the 'Nederlandse Data Science Prijzen' of the Koninklijke Hollandsche Maatschappij der Wetenschappen (KHMW) and the Big Data Alliance (BDA).
This talk will be about the practical application, a little bit of the mathematics behind the deep learning model, how to make all of this actually work in practice, and how to gain trust in the model rather than using it purely as a black box. But most of all, in this talk I aim to show that deep learning models are not totally new, maybe scary AI methods, but closely related to methods and models that we as operations researchers, computer scientists and statisticians are familiar with.
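One minimal way to see this kinship (a sketch with invented data, not the model used in the railway project): logistic regression, a model familiar to statisticians and OR practitioners alike, is exactly a one-neuron neural network trained by gradient descent.

```python
import math

# Logistic regression written as a single-neuron "network" trained by
# gradient descent on the log-loss: the basic building block of deep
# learning is a model statisticians already know. Data and
# hyperparameters are invented for this toy.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=2000):
    """Stochastic gradient descent; the update below is exactly the
    backpropagation rule for a one-neuron network with sigmoid output."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)      # forward pass
            w -= lr * (p - y) * x       # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)           # ... and w.r.t. b
    return w, b

# Toy 1-D data: class 0 below ~2.5, class 1 above.
w, b = train([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])
preds = [int(sigmoid(w * x + b) > 0.5) for x in [0.5, 1.5, 3.5, 4.5]]
print(preds)  # prints: [0, 0, 1, 1]
```

A deep network simply stacks many such neurons, but the training principle stays the same.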


Werner van Westering (Alliander)

Short Bio: Werner van Westering is a senior data scientist at Alliander, the largest electricity and gas network operator in the Netherlands, serving over 3 million households. His main field of expertise is high-performance computing applied to simulating extremely large electricity networks in order to solve large-scale optimization problems. He pioneered the 'Alliander Network Decision Support (ANDES)' model, which simulates the energy transition and is used for major policy decisions.

Title: How to predict the impact of the energy transition on the electricity network using simulations, data science and machine learning?

Abstract: The energy system is changing rapidly as renewables are adopted by customers. How can Alliander, which operates over 40.000 kilometers of electricity cables, cope with the vastly increased network peak load? To model these phenomena, the data scientists at Alliander used an array of methods: machine learning for estimating the spatial adoption of renewables, high-performance computation and optimization for grid design, and neural networks for time-series prediction.
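A back-of-the-envelope sketch of the kind of question such simulations answer (all parameters below are invented; the real ANDES model works on the actual network topology):

```python
import random

# Monte-Carlo sketch: how does the evening peak load on a neighbourhood
# feeder grow with electric-vehicle adoption? Every parameter here is
# invented for illustration; a real grid simulation models the actual
# network and load profiles.

random.seed(42)

def simulate_peak(households, ev_share, trials=2000):
    """Estimate the feeder peak load in kW over many simulated evenings."""
    base_kw, charger_kw = 1.2, 7.4      # per household / per charging EV
    peak = 0.0
    for _ in range(trials):
        # assume each EV owner has a 60% chance of charging at peak hour
        charging = sum(1 for _ in range(households)
                       if random.random() < ev_share * 0.6)
        peak = max(peak, households * base_kw + charging * charger_kw)
    return peak

low = simulate_peak(100, ev_share=0.05)
high = simulate_peak(100, ev_share=0.50)
print(round(low), round(high))  # peak load rises sharply with EV adoption
```

Even this toy shows why peak load, not average load, drives grid reinforcement decisions.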


Max Welling (Universiteit van Amsterdam)

Short Bio: Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies at Qualcomm. He has a secondary appointment as a senior fellow at the Canadian Institute for Advanced Research (CIFAR). He is co-founder of 'Scyfer BV', a university spin-off in deep learning that was acquired by Qualcomm in the summer of 2017. In the past he held postdoctoral positions at Caltech ('98-'00), UCL ('00-'01) and the University of Toronto ('01-'03). He received his PhD in '98 under the supervision of Nobel laureate Prof. G. 't Hooft. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015 (impact factor 4.8). He has served on the board of the NIPS Foundation (the largest conference in machine learning) since 2015, and was program chair and general chair of NIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He has served on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. He has received multiple grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MURI, among which an NSF Career grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010. Welling is on the board of the Data Science Research Center in Amsterdam, directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). Max Welling has over 250 scientific publications in machine learning, computer vision, statistics and physics, and an h-index of 54.

Title: Learning to Solve OR Problems

Abstract: Deep learning has revolutionized scientific fields such as "automatic speech recognition" and "image analysis" by replacing hand-designed features with learnable features obtained by training a deep neural network on the raw input signal. These fields have now almost completely converted to a new paradigm in which machine learning drives their stunning progress. In this talk I will argue that there is an interesting parallel in the field of OR, where optimization algorithms are mostly hand-designed rather than learned. The core idea of our proposal is to present many (possibly simulated) example problems and try to discover patterns in how to solve them most effectively. The machine learning tool to achieve this is called reinforcement learning, where a policy is trained to maximize total future reward, and the reward is defined in terms of the quality of the obtained solution. This approach has been highly successful for playing games such as Go and chess by systems such as AlphaGo and AlphaZero from DeepMind, and we claim it is also applicable to OR. By learning from many problem instances we automatically discover the features and patterns that are useful for solving new problem instances. We have successfully developed such a reinforcement learning approach for a family of problems, including the TSP and the VRP, which exhibits performance very close to the best hand-designed systems. We thus anticipate that (partially) learnable policies for solving combinatorial optimization problems have the potential to make a significant impact on the field of OR.

Joint work with Wouter Kool.
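The core idea can be sketched at toy scale. The code below is a single-parameter illustration of the REINFORCE policy gradient, not the attention-based model from the work with Wouter Kool; the cities, step sizes, and baseline scheme are all invented for the sketch.

```python
import math, random

# Single-parameter REINFORCE sketch for the TSP: a stochastic policy
# builds a tour city by city, the reward is the negative tour length,
# and a policy-gradient step nudges the parameter toward shorter tours.

random.seed(0)
cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def sample_tour(w):
    """From the current city, pick the next one with probability
    softmax(-w * distance); also accumulate d(log-prob)/dw."""
    tour, grad = [0], 0.0
    while len(tour) < len(cities):
        cur = cities[tour[-1]]
        cand = [i for i in range(len(cities)) if i not in tour]
        d = [dist(cur, cities[i]) for i in cand]
        z = sum(math.exp(-w * di) for di in d)
        probs = [math.exp(-w * di) / z for di in d]
        r, acc, k = random.random(), 0.0, len(cand) - 1
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                k = i
                break
        # d/dw of log softmax(-w*d)[k] is -d[k] + E[d]
        grad += -d[k] + sum(p * di for p, di in zip(probs, d))
        tour.append(cand[k])
    return tour, grad

def tour_length(tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# REINFORCE with a moving-average baseline: raise the probability of
# tours that beat the baseline, lower it for tours that do not.
w, baseline = 0.0, None
for _ in range(300):
    tour, grad = sample_tour(w)
    reward = -tour_length(tour)
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    w += 0.05 * (reward - baseline) * grad
print(round(w, 2), round(tour_length(sample_tour(w)[0]), 2))
```

Scaling this up means replacing the scalar parameter by a neural network that maps the partial tour to selection probabilities, which is where the attention-based models come in.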


Eline Werkman (ORTEC)

Short Bio: Eline started working for ORTEC in 2011, directly after finishing her master's in Econometrics and Operations Research at the Vrije Universiteit Amsterdam (VU Amsterdam).
She started her career developing O&D Revenue Management software for a leading airline. Afterwards, she further specialized in the field of Pricing and Revenue Management at ORTEC.
With this experience she has been able to help several other companies in the Aviation, Travel and Leisure industry increase their revenues by growing along the Revenue Management maturity curve: from acting on intuition to fact-based decision making.

Title: Advanced modelling techniques applied on a Revenue Management solution for a holiday park provider

Abstract: In a period of 15 months ORTEC has implemented an advanced Revenue Management solution for a leading provider of holiday parks. With over 150 holiday parks, 300.000+ price points need to be monitored and adjusted on a daily basis to offer loyal and new guests the best price. The Revenue Management solution provides an all-inclusive overview and insights that allow for price adjustments based on consumer demand, leading to customized prices and ultimately higher revenues. The implemented solution improves on traditional forecasting methods by applying machine learning methods (clustering and regression). An intelligent custom solution engine is integrated with a foolproof user interface, a proven dashboarding platform and an existing booking system. Furthermore, the solution encourages man-machine interaction, where the combination of human business knowledge and automatic machine calculations leads to great decisions.
The advanced solution is descriptive (data visualization), predictive (forecasting) as well as prescriptive (optimization). The forecasting model uses statistical modeling and machine learning techniques. It detects patterns in historical booking data and applies these patterns to make forecasts, considering both historical bookings and the current bookings at hand. A Linear Programming model is used to solve the optimization problem, which determines prices that maximize total profit.
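A toy version of the prescriptive step (all demand figures are invented; the real solution uses a Linear Programming formulation, while this tiny instance is simply brute-forced so the sketch stays self-contained):

```python
from itertools import product

# Toy pricing step: pick one price point per accommodation type from a
# small grid to maximize forecast revenue under a shared occupancy cap.
# The demand table stands in for the ML forecast; all numbers invented.

prices = [80, 100, 120]                       # candidate price points (EUR)
demand = {                                    # forecast bookings per price
    "bungalow": {80: 50, 100: 42, 120: 30},
    "chalet":   {80: 35, 100: 30, 120: 26},
}
capacity = 70                                 # total units available

best = None
for choice in product(prices, repeat=len(demand)):
    bookings = sum(demand[t][p] for t, p in zip(demand, choice))
    if bookings > capacity:
        continue                              # violates the shared cap
    revenue = sum(p * demand[t][p] for t, p in zip(demand, choice))
    if best is None or revenue > best[0]:
        best = (revenue, dict(zip(demand, choice)))

print(best)  # prints: (7320, {'bungalow': 100, 'chalet': 120})
```

With 150 parks and 300.000+ price points, brute force is hopeless, which is exactly why the production system uses an LP formulation instead.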
Before the introduction of this fully automated Revenue Management solution, many manual activities and analyses were required. The manual work meant focusing on a small selection of price points; those far in the future in particular received less attention. In the new situation, all price points are optimized and, thanks to the numerous insights, the Revenue Managers can now quickly spot and respond to new trends.