Stochastic programming is a useful framework for dealing with uncertainty and integrality requirements in optimization problems. In this talk, we focus on two-stage stochastic integer programs (2SIPs), in particular their sample average approximations, which yield computationally challenging mixed-integer programming (MIP) models. For various classes of 2SIPs, we review commonly used decomposition approaches (most notably logic-based Benders decomposition variants) and present some important enhancement strategies, such as the incorporation of general-purpose cutting planes from the MIP literature. We also introduce several recently proposed techniques.
For the presented methods, we provide numerical results on problems from a variety of application domains.
Multistage stochastic mixed-integer programs (MSMIPs) can model complex sequential decision-making problems under uncertainty and appear in many applications. However, due to both the stochastic and integer components, their inherent computational challenges require sophisticated solution methodologies. In this talk, we will review recent advances in solving MSMIPs, in particular scenario-tree-based approaches. We will discuss exact methods, bounding ideas, partially extended formulations, and various policy classes, including aggregation-based policies, decision-rule-based policies, and interpretable policies.
After a brief introduction to the area of Queueing Games (also known as Rational Queueing), we will focus on how the information provided to customers affects their strategic behavior and the associated social welfare and system revenue. We will compare various information structures and discuss their pros and cons.
We will present some classical and recent queueing models with strategic customers who are heterogeneous in some respects. We will discuss the implications of heterogeneity for the economic performance of the systems under study and for the optimization problems faced by a social planner or a monopolist.
Social and economic networks are often multiplexed, meaning that people are connected by different types of relationships—such as borrowing goods and giving advice. We make three contributions to the study of multiplexing. First, we document empirical multiplexing patterns in Indian village data. Second, we examine how these layers and their overlap affect information diffusion in a field experiment. This leads to our third contribution: developing a model and theoretical results about diffusion in multiplex networks. Multiplexing slows the spread of simple contagions, such as diseases or basic information, but can either impede or enhance the spread of complex contagions, such as new technologies, depending on their virality. Finally, we identify differences in multiplexing by gender and connectedness.
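The way high overlap between layers limits the reach of a simple contagion can be illustrated with a toy breadth-first computation. The edge lists below are invented for illustration only and are not the Indian village data; a simple contagion is modeled as anything reachable through at least one tie of any type.

```python
from collections import deque

def bfs_reach(adj, source):
    """Number of nodes reachable from source through ties of any layer."""
    seen = {source}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen)

def union_adj(*layers):
    """Collapse several edge-list layers into one undirected adjacency map."""
    adj = {}
    for layer in layers:
        for u, v in layer:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj

# Two stylized villages with the same number of ties per layer.
# Village 1: advice and borrowing layers overlap completely (high multiplexing).
advice_1 = [(0, 1), (1, 2), (2, 3)]
borrow_1 = [(0, 1), (1, 2), (2, 3)]   # same pairs again
# Village 2: the layers connect different pairs (low multiplexing).
advice_2 = [(0, 1), (1, 2), (2, 3)]
borrow_2 = [(3, 4), (4, 5), (5, 6)]   # new pairs extend the reach
reach_multiplexed = bfs_reach(union_adj(advice_1, borrow_1), 0)  # 4 nodes
reach_spread_out  = bfs_reach(union_adj(advice_2, borrow_2), 0)  # 7 nodes
```

With the same total number of ties, the fully overlapped layers reach fewer distinct people, which is the mechanism behind multiplexing slowing simple contagions; the complex-contagion effects discussed in the talk require a threshold model and are not captured by this sketch.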
By varying prompts to a large language model, we can elicit the full range of human behaviors in a variety of different scenarios in classic economic games. By analyzing which prompts elicit which behaviors, we can categorize and compare different strategic situations, which can also help provide insight into what different economic scenarios induce people to think about. We discuss how this provides a step towards a non-standard method of inferring (deciphering) the motivations behind human behaviors. We also show how this deciphering process can be used to categorize differences in the behavioral tendencies of different populations.
Low-rank structure is pervasive in real-world datasets. This talk shows how to accelerate the solution of fundamental computational problems, including eigenvalue decomposition, linear system solves, composite convex optimization, and stochastic optimization (including deep learning), by exploiting this low-rank structure. We present a simple method based on randomized numerical linear algebra for efficiently computing approximate top eigendecompositions, which can be used to replace large matrices (such as Hessians and constraint matrices) with low-rank surrogates that are faster to apply and invert. The resulting solvers for linear systems (NystromPCG), composite convex optimization (NysADMM), and stochastic optimization (SketchySGD and PROMISE) come with strong theoretical guarantees and numerical support, outperforming state-of-the-art methods in terms of speed and robustness to hyperparameters.
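To give a flavor of the randomized numerical linear algebra idea, here is a minimal sketch-and-solve approximation of the top eigendecomposition of a symmetric PSD matrix. This is a generic textbook-style sketch, not the talk's NystromPCG implementation; the function name and parameters are illustrative.

```python
import numpy as np

def randomized_top_eig(A, k, oversample=10, seed=0):
    """Approximate top-k eigendecomposition of a symmetric PSD matrix A
    via a randomized range finder: sketch, orthogonalize, project, solve."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Sketch: hit A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the range of Y
    # Project A onto the subspace and solve the small eigenproblem.
    B = Q.T @ A @ Q
    evals, V = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:k] # keep the k largest eigenpairs
    return evals[idx], Q @ V[:, idx]

# Example: a low-rank-plus-ridge PSD matrix with a rank-5 dominant spectrum.
n, k = 200, 5
rng = np.random.default_rng(1)
G = rng.standard_normal((n, k))
A = G @ G.T + 1e-3 * np.eye(n)
lam, U = randomized_top_eig(A, k)
A_lowrank = U @ np.diag(lam) @ U.T    # cheap surrogate for A
```

The surrogate `A_lowrank` costs O(nk) to apply instead of O(n^2), which is the sense in which large Hessians or constraint matrices become faster to apply and invert.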
We introduce a framework to accelerate the convergence of gradient-based methods with online learning. The framework learns to scale the gradient at each iteration through an online learning algorithm and provably accelerates gradient-based methods asymptotically. In contrast with previous literature, where convergence is established based on worst-case analysis, our framework provides a strong convergence guarantee with respect to the optimal stepsize for the iteration trajectory. We obtain the first convergence guarantee for the widely used hypergradient descent method (HDM) and for new variants with heavy-ball momentum (HDM-HB) and Nesterov momentum. HDM-HB significantly outperforms other adaptive first-order methods, and often matches the performance of L-BFGS while using less memory and cheaper iterations.
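A bare-bones illustration of the hypergradient idea: the stepsize itself is updated by an online rule driven by the alignment of consecutive gradients. This is a hypothetical sketch, far simpler than HDM/HDM-HB and carrying none of their guarantees; the update rule and constants below are assumptions for illustration.

```python
import numpy as np

def hypergradient_descent(grad, x0, eta0=0.01, beta=0.05, iters=300):
    """Gradient descent whose stepsize is learned online: grow eta when
    consecutive gradients align, shrink it when they point in opposing
    directions (a normalized multiplicative variant of hypergradient descent)."""
    x = np.asarray(x0, dtype=float)
    eta, g_prev = eta0, np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        # Cosine of the angle between consecutive gradients (0 on the
        # first pass, and damped when gradients are negligibly small).
        cos = g @ g_prev / (np.linalg.norm(g) * np.linalg.norm(g_prev) + 1e-12)
        eta *= 1.0 + beta * cos
        x = x - eta * g
        g_prev = g
    return x, eta

# Example: an ill-conditioned quadratic f(x) = 0.5 * x' D x with gradient D x.
D = np.array([1.0, 2.0, 4.0])
x_star, eta_final = hypergradient_descent(lambda x: D * x, x0=[1.0, 1.0, 1.0])
```

The stepsize self-regulates: it grows while progress is steady and shrinks as soon as the iterates start to oscillate, without any prior knowledge of the curvature of `D`.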
The success of operations research has always depended on our ability to model reality — but real-world problems are rarely as clean as our equations. Generative AI, in particular large language models, opens new possibilities for turning ambiguous requirements, messy data, and human intent into structured problems solvable by OR. In this talk, we explore this emerging connection, outline how recent AI developments complement traditional optimization, and consider both opportunities and limitations. The session will set the stage for a day of ideas at the intersection of mathematics, AI, and decision-making.
Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers, as the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques.
As a glimpse of the future, this talk will introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from natural language descriptions. OptiMUS can develop mathematical models, write and debug solver code, develop tests, and check the validity of generated solutions. More broadly, we discuss the potential for LLMs in domains where accuracy and fidelity to real-world data are critical, as well as strategies to augment and safeguard their performance.