8:00 
8:50 
Registration and breakfast

8:50 
9:00 
Opening remarks

9:00 
10:00 
Plenary talk
An Optimization Perspective on Trustworthiness and Trust in Autonomous Systems
Natalia Alexandrov
abstract
An Optimization Perspective on Trustworthiness and Trust in Autonomous Systems
Natalia Alexandrov
The application domain of the work described in this talk is the near-to-far-future airspace, where the projected density and heterogeneity of autonomous participants, including non-cooperative agents, combine to increase system complexity and uncertainty, with ensuing threats to safety. Given the increased complexity, control of airspace will have to transition to human-machine teams, with ever-rising authority of autonomous systems (AS). The growing use of AS leads to a potential paradox: AS are meant to address system uncertainty; however, machine authority and human-machine interactions are themselves major sources of uncertainty in the system. Because trustworthiness and trust are connected to decision making, which, in turn, is an optimization problem subject to expressed and unexpressed constraints, in this presentation we examine the nature of the attendant optimization problems and discuss some approaches to solutions, as well as persistent gaps.


10:00 
10:15 
Coffee break

10:15 
11:45 
Parallel technical sessions
Predictive disease modeling: Session 1 (Room 91)

Games: theory and applications 1 (Room 85)

Energy II (Room 271)

Computational Nonlinear Optimization with Applications (Room 241)

Estimating the Cross Immunity Between Drifted Strains of Influenza A/H3N2
Junling Ma
abstract
Estimating the Cross Immunity Between Drifted Strains of Influenza A/H3N2
Junling Ma
To determine the cross-immunity between influenza strains, we design a novel statistical method which uses a theoretical model and clinical data on attack rates and vaccine efficacy among school children for two seasons after the 1968 A/H3N2 influenza pandemic. This model incorporates the distribution of susceptibility and the dependence of cross-immunity on the antigenic distance of drifted strains. We find that the cross-immunity between an influenza strain and the mutant that causes the next epidemic is 88%. Our method also gives estimates of the vaccine protection against the vaccinating strain and of the basic reproduction number of the 1968 pandemic influenza.

A game-theoretic model for emergency preparedness among NGOs
Myles Nahirniak
abstract
A game-theoretic model for emergency preparedness among NGOs
Myles Nahirniak
Natural disasters in the United States last year resulted in a cost of almost $100 billion. In this talk, we consider the issue of disaster relief efforts by Non-Governmental Organizations (NGOs). We develop an asymmetric game to describe the behavior of two NGOs competing to purchase supplies. The players' payoff matrices depend on their level of preparedness for a disaster, and failure to make adequate provisions results in a financial penalty. Replicator dynamics are introduced to investigate the time evolution of the players' optimal strategies, as well as their stability. Finally, we impose a shared constraint, in order to place an upper limit on the total available supplies, and examine the effect on the players' strategies.

Distribution Electricity Pricing under Uncertainty
Yury Dvorkin
abstract
Distribution Electricity Pricing under Uncertainty
Yury Dvorkin
Distribution locational marginal prices (DLMPs) facilitate the efficient operation of low-voltage electric power distribution systems. We propose an approach to internalize the stochasticity of renewable distributed energy resources (DERs) and the risk tolerance of the distribution system operator in DLMP computations. This is achieved by applying conic duality to a chance-constrained AC optimal power flow. We show that the resulting DLMPs consist of terms that itemize the prices for active and reactive power production, balancing regulation, and voltage support. Finally, we prove that the proposed DLMPs constitute a competitive equilibrium, which can be leveraged for designing a distribution electricity market, and show that imposing chance constraints on voltage limits distorts the equilibrium.

An Empirical Quantification of the Impact of Choice Constraints on Generalizations of the 0-1 Knapsack Problem using CPLEX®
Yun Lu, Francis Vasko
abstract
An Empirical Quantification of the Impact of Choice Constraints on Generalizations of the 0-1 Knapsack Problem using CPLEX®
Yun Lu, Francis Vasko
It has been well-known for some time that adding choice constraints to certain types of knapsack formulations improves the solution time for these problems when using integer programming solvers, but by how much? In this paper, using the integer programming option of CPLEX, we provide comprehensive empirical and analytical evidence of the impact of choice constraints on two important categories of knapsack problems. Specifically, we show using multidimensional knapsack problems (MKP) and multidemand multidimensional knapsack problems from Beasley's OR-Library that adding choice constraints reduces solution time by more than 99.9%. Additionally, using these same problem instances, we show that even if only some of the variables have choice constraints imposed on them, the CPLEX solution times are drastically reduced. These results provide motivation for operations research practitioners to check whether choice constraints are applicable when solving real-world problems involving generalizations of the 0-1 knapsack problem.
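As a toy illustration of why choice constraints shrink the search space (a self-contained sketch with made-up data, not the Beasley OR-Library instances or CPLEX itself), consider a tiny 0-1 knapsack where a "choice" constraint means selecting exactly one item per group:

```python
from itertools import product

# Hypothetical tiny 0-1 knapsack instance (values, weights, capacity
# are illustrative only).
values = [10, 13, 7, 9, 4, 6]
weights = [5, 6, 4, 5, 2, 3]
capacity = 12

def best_unconstrained():
    """Brute-force the plain 0-1 knapsack over all 2^n item subsets."""
    best = 0
    for x in product([0, 1], repeat=len(values)):
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
            best = max(best, sum(v * xi for v, xi in zip(values, x)))
    return best

# Choice constraints: pick exactly one item from each group.
groups = [[0, 1], [2, 3], [4, 5]]

def best_with_choice():
    """With one pick per group, the search space drops from 2^6 to 2*2*2."""
    best = 0
    for picks in product(*groups):
        if sum(weights[i] for i in picks) <= capacity:
            best = max(best, sum(values[i] for i in picks))
    return best

print(best_unconstrained(), best_with_choice())
```

Even in this toy case the choice constraints cut the number of candidate solutions from 64 to 8; branch-and-bound solvers benefit analogously from the tightened formulation.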

Novel compartmental models of infectious disease
Scott Greenhalgh
abstract
Novel compartmental models of infectious disease
Scott Greenhalgh
Many methodologies in disease modeling have proven invaluable in the evaluation of health interventions. Among these methodologies, one of the most fundamental is compartmental modeling. Compartmental models come in many different forms, with one of the most general characterizations arising from the description of disease dynamics with nonlinear Volterra integral equations. Despite this generality, the vast majority of disease modellers prefer the special case whereby the nonlinear Volterra integral equations are reduced to systems of differential equations through the traditional assumptions that 1) the infectiousness of a disease corresponds to incidence, and 2) the duration of infection follows either an exponential distribution or an Erlang distribution. However, these assumptions are not the only ones that simplify nonlinear Volterra integral equations in this way.
In this talk, we demonstrate how a biologically more accurate description of the infectiousness of a disease combines with nonlinear Volterra integral equations to yield a novel class of differential equation compartmental models, and then illustrate the novelty of this approach with several examples.
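The traditional special case referred to above can be sketched in a few lines (my own illustration with arbitrary parameters, not the authors' models): assuming an exponentially distributed infectious period with mean 1/gamma collapses the integral formulation into the classical SIR ordinary differential equations, integrated here with forward Euler:

```python
# Minimal SIR sketch: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
# dR/dt = gamma*I, with S + I + R = 1 (proportions of the population).
beta, gamma = 0.4, 0.2   # illustrative rates; R0 = beta/gamma = 2
S, I, R = 0.99, 0.01, 0.0
dt, T = 0.1, 200.0

for _ in range(int(T / dt)):
    new_inf = beta * S * I   # incidence term (assumption 1 above)
    new_rec = gamma * I      # exponential recovery (assumption 2 above)
    S += dt * (-new_inf)
    I += dt * (new_inf - new_rec)
    R += dt * new_rec

# Population is conserved and the epidemic burns out by t = 200.
print(round(S + I + R, 6), round(R, 3))
```

Replacing the exponential recovery term with a different duration-of-infection assumption is exactly where the more general Volterra formulation comes in.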

Generalized Nash games and replicator dynamics
Monica Cojocaru
abstract
Generalized Nash games and replicator dynamics
Monica Cojocaru
In this talk I plan to introduce an evolutionary generalized Nash game (eGN). Generalized Nash games were introduced in the 1950s and represent models of non-cooperative behaviour among players whose strategy sets and payoff functions both depend on the strategy choices of other players.
Among these games, a specific class is represented by evolutionary games, which consist of populations where individuals play many times, against many different opponents, with each interaction contributing relatively little to the total reward. Given strategies {1, ..., n}, an individual of type i is one using strategy i, and xi is the frequency of type i individuals in the population. Thus the vector x = (x1, ..., xn) in the unit simplex is the state of the population. Interaction between players of different types can be described by linear or nonlinear payoffs. One known dynamic evolution of such a game is described by replicator dynamics. However, given constraints imposed on the strategy sets of players (upper limits on resources, for instance), the classic replicator dynamics are no longer appropriate.
In these cases we show that we can reinterpret the game dynamics of an eGN differently. The new dynamics and its relation to evolutionary steady states are investigated.
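For reference, the classical (unconstrained) replicator dynamics mentioned above can be sketched in a few lines; the payoff matrix here is an illustrative dominance game, not one from the talk:

```python
import numpy as np

# Replicator dynamics: dx_i/dt = x_i * ((A x)_i - x^T A x),
# where x lives on the unit simplex, integrated with forward Euler.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])   # row player's payoffs; strategy 2 dominates
x = np.array([0.9, 0.1])     # initial frequencies of the two types
dt = 0.01

for _ in range(5000):
    fitness = A @ x          # payoff of each type against the population
    avg = x @ fitness        # population-average payoff
    x = x + dt * x * (fitness - avg)

# The dominated strategy dies out; x converges toward (0, 1).
print(np.round(x, 3))
```

Note that the update exactly preserves sum(x) = 1, so the state stays on the simplex; adding resource constraints on the strategies, as in the talk, is precisely what breaks this classical picture.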

Distributionally Robust Chance Constrained Optimal Power Flow Assuming Unimodal Distributions with Misspecified Modes
Bowen Li
abstract
Distributionally Robust Chance Constrained Optimal Power Flow Assuming Unimodal Distributions with Misspecified Modes
Bowen Li
Chance-constrained optimal power flow (CC-OPF) formulations have been proposed to minimize operational costs while controlling the risk arising from uncertainties like renewable generation and load consumption. To solve CC-OPF, we often need access to the (true) joint probability distribution of all uncertainties, which is rarely known in practice. A solution based on a biased estimate of the distribution can result in poor reliability. To overcome this challenge, recent work has explored distributionally robust chance constraints, in which the chance constraints are satisfied over a family of distributions called the ambiguity set. Commonly, ambiguity sets are based only on moment information (e.g., mean and covariance) of the random variables; however, specifying additional characteristics of the random variables reduces conservatism and cost. Here, we consider ambiguity sets that additionally incorporate unimodality information. In practice, it is difficult to estimate the mode location from the data, and so we allow it to be potentially misspecified. We formulate the problem and derive a separation-based algorithm to solve it efficiently. Finally, we evaluate the performance of the proposed approach on a modified IEEE 30-bus network with wind uncertainty and compare it with other distributionally robust approaches. We find that a misspecified mode significantly affects the reliability of the solution and that the proposed model demonstrates a good tradeoff between cost and reliability.

Optimization Problems in Space Engineering
Giorgio Fasano
abstract
Optimization Problems in Space Engineering
Giorgio Fasano
Space engineering has, since the very beginning, represented a field where optimization methodologies were inevitable. In the earliest studies, the primary concern was the viability of the mission to be accomplished. Optimization therefore generally focused on mission analysis aspects, with specific attention to technical feasibility and mission safety. Space engineering projects typically required the analysis and optimization of trajectories and fuel consumption. As time has passed, a number of further issues, related, inter alia, to logistics and systems engineering aspects, have become increasingly important.
Among the various optimization challenges relevant to this context, object packing in the presence of balancing conditions, as well as additional requirements, is of great relevance. This wide class of problems is notorious for being NP-hard. Some packing scenarios arising in space applications are illustrated, pointing out the specificity of the instances to cope with. An overall global optimization heuristic approach is outlined, introducing a tailored modeling philosophy, as opposed to a purely algorithmic one. The orthogonal packing of three-dimensional “tetris-like” items within a convex polyhedral domain is briefly discussed. Additional constraints are also investigated, including balancing requirements.
The general spacecraft control dispatch problem is proposed as a further optimization challenge in space applications. A dedicated controller has the task of determining the overall control action to achieve the desired system attitude. A number of thrusters are available to exert the overall forces and torques, as required. Their positions and orientations on the external surface of the spacecraft are usually critical, since different layouts may yield different performance in terms of energy usage throughout the entire mission. Once in orbit, the requested spacecraft control has to be dispatched through the thrusters, in compliance with given operational constraints, with the aim of minimizing the overall propellant consumption. The relevant mathematical models and the overall heuristic approach adopted are overviewed.

Using a realistic SEIR model to assess the burden of vaccine-preventable diseases in China and design optimal mitigation strategies
John Glasser
abstract
Using a realistic SEIR model to assess the burden of vaccine-preventable diseases in China and design optimal mitigation strategies
John Glasser
Mathematical modelers often make simplifying assumptions, but only rarely ascertain the consequences. We developed an age-stratified population model with births, deaths, aging and – because non-random mixing exacerbates the effect of heterogeneity on reproduction numbers (JTB 2015; 386:177-87) – realistic mixing between age groups. Using this model, we assessed the burden of congenital rubella syndrome (CRS) and evaluated the impact of demographic details on optimal vaccination strategies for disease elimination. Our estimate of the burden of CRS is 5 times that of Vynnycky et al. (PLoS ONE 2016; 11(3): e0149160), and the 2014 serological profile suggests that it will dramatically increase absent timely immunization of susceptible adolescents. As is typical of developed countries, China's mortality schedule is type I. The impact on the optimal strategy of assuming type II mortality is modest, but our approach will be useful when and wherever vaccine availability is limited.

On Decomposition Methods for Generalized Nash Equilibrium Problems
Tangi Migot
abstract
On Decomposition Methods for Generalized Nash Equilibrium Problems
Tangi Migot
Generalized Nash equilibrium problems (GNEPs) are a potent modeling tool that has developed considerably in recent decades. Much of this development has centered on applying variational methods to the so-called GNSC, a subset of GNEPs in which each player has the same constraint set. One popular approach to solving the GNSC is to use the apparent separability of each player to build a decomposition method. Such methods have the benefit of being easily implementable and can be parallelized. Our aim in this talk is to show an extension of decomposition methods to GNEPs that do not necessarily have shared constraints.

Risk-Sensitive Economic Dispatch: Theory and Algorithms
Subhonmesh Bose
abstract
Risk-Sensitive Economic Dispatch: Theory and Algorithms
Subhonmesh Bose
In economic dispatch problems with uncertain supply or network conditions, an inherent tradeoff arises between power procurement costs and reliability of power delivery. In this talk, we explore risk-sensitive economic dispatch problem formulations, where risk is modeled via the conditional value at risk (CVaR) measure. Such formulations allow a system operator to explore the cost-reliability tradeoff. We provide customized algorithms to solve these risk-sensitive problems: a critical region exploration algorithm to guard against possible line failures, and an online stochastic primal-dual subgradient method to dispatch against uncertain wind.
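As a small numerical illustration of the CVaR measure itself (synthetic scenario costs, not the talk's dispatch formulation): for an empirical distribution, CVaR at level alpha is simply the average cost over the worst (1 - alpha) fraction of scenarios, sitting above the value at risk (VaR) threshold:

```python
import numpy as np

# Synthetic procurement-cost scenarios (illustrative only).
rng = np.random.default_rng(0)
costs = rng.normal(loc=100.0, scale=10.0, size=10_000)
alpha = 0.95

var = np.quantile(costs, alpha)       # value at risk: 95th-percentile cost
cvar = costs[costs >= var].mean()     # CVaR: mean of the worst 5% of scenarios

# CVaR >= VaR always; minimizing CVaR penalizes the whole loss tail,
# not just the threshold, which is why it suits reliability-aware dispatch.
print(round(var, 1), round(cvar, 1))
```

This tail-average form is equivalent to the Rockafellar-Uryasev minimization formula, which is what makes CVaR tractable inside convex dispatch problems.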

A General Lagrange Multipliers Based Approach for Object Packing Applications
Janos D. Pinter
abstract
A General Lagrange Multipliers Based Approach for Object Packing Applications
Janos D. Pinter
We have extended our previous work on packing circles, ellipses and generalized ellipses (ovals) in regular convex polygons. Now we can pack ellipses, ovals and smooth polygons into convex and non-convex regions formed by multiple ellipses and smooth polygons. The smooth polygons are generated by aggregating the linear constraints that define the polygon into a single constraint, using a smoothed maximum function. Our general method also allows packing non-convex sets which are composites of ellipses and smooth polygons. The objective function in such problems defines the size of the "container" region being packed. We cast the non-overlap condition for all pairs of packed objects as a minimization problem and use the method of Lagrange multipliers to generate the non-overlap constraints. We present a concise summary of our approach, followed by a selection of packing challenges solved numerically.
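The constraint-aggregation idea can be sketched as follows (my own minimal illustration using a log-sum-exp smoothed maximum; the authors' exact smoothing function may differ): a polygon given by linear inequalities a_i·x <= b_i collapses into the single smooth constraint smax_i(a_i·x - b_i) <= 0:

```python
import numpy as np

def smooth_max(g, mu=50.0):
    """Log-sum-exp smoothed maximum; larger mu -> closer to the true max."""
    g = np.asarray(g)
    m = g.max()  # shift by the max for numerical stability
    return m + np.log(np.exp(mu * (g - m)).sum()) / mu

# Unit square as four half-planes: x <= 1, -x <= 0, y <= 1, -y <= 0.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

def inside(p, mu=50.0):
    """Single smooth membership test replacing four linear constraints."""
    return smooth_max(A @ p - b, mu) <= 0.0

print(inside(np.array([0.5, 0.5])), inside(np.array([1.5, 0.5])))
```

Since log-sum-exp overestimates the true maximum by at most log(m)/mu for m constraints, the smoothed region is slightly smaller than the exact polygon; increasing mu tightens the approximation at the cost of a stiffer constraint.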


11:45 
12:00 
Coffee break

12:00 
13:00 
Plenary talk
Risk-Sensitive Designs, Robustness, and Stochastic Games
Tamer Basar
abstract
Risk-Sensitive Designs, Robustness, and Stochastic Games
Tamer Basar
I will talk about the relationship between risk-averse designs based on exponential loss functions, with or without an additional unknown (adversarial) term, and a class of stochastic games. This leads to a robustness interpretation for risk-averse decision rules in the general context, through a stochastic dissipation inequality. I will show, in particular, the equivalence between risk-averse linear filter designs and saddle-point solutions of a particular stochastic differential game with asymmetric information for the players. One of the byproducts of this analysis is that risk-averse filters for linear signal-measurement models are robust, through a stochastic dissipation inequality, to unmodeled perturbations in both the signal and the measurement processes. Extensions to nonlinear models, to problems where there are multiple decision makers with only partially conflicting goals and their relationship with mean-field games, and to problems where there is an element of deception will also be discussed.


13:00 
14:00 
Lunch

14:00 
15:30 
Parallel technical sessions
Predictive disease modelling 2 (Room 91)

Games: theory and applications 2 (Room 85)

Energy III (Room 271)

Advances in Numerical Optimization 1 (Room 241)

Fitting seasonal influenza epidemic curves to surveillance: Application to the French Sentinelles network
Edward Thommes
abstract
Fitting seasonal influenza epidemic curves to surveillance: Application to the French Sentinelles network
Edward Thommes
Predicting the course of an influenza season is a longstanding challenge in biomathematical modeling, and the subject of an extensive body of research. Significant success has been achieved in fitting disease transmission models to surveillance data, both retrospectively and prospectively. However, choosing a level of model sophistication appropriate to the availability of data remains a tricky balancing act. Here, we report preliminary results from a particularly simple model. We begin by testing the ability of the model to accurately extract epidemic parameters from synthetic data, using both a simple Monte Carlo sweep and an optimization approach. We demonstrate the ability of the model to simultaneously recover the onset time, effective reproduction number, and underreporting factor. We then apply our model to retrospective French surveillance data at both the national and regional level, obtained from the Réseau Sentinelles (Sentinel Network) site, http://www.sentiweb.fr/.
Funding statement: ET, AC, LC, JL, TS are employees of Sanofi Pasteur. AA, SA, MG, JH, ZM, YX and JW have received funding from Sanofi Pasteur through a research grant.

A real-time optimization with warm-start of multi-period AC optimal power flows
Youngdae Kim
abstract
A real-time optimization with warm-start of multi-period AC optimal power flows
Youngdae Kim
We present a real-time optimization strategy combined with warm-start for a rolling horizon method applied to multi-period AC optimal power flow problems (MPACOPFs). In each horizon, ACOPFs are temporally interlinked via ramping constraints, and we assume that each horizon needs to be solved every few seconds or minutes. An approximate tracking scheme combined with two warm-start methods will be described. Our scheme closely follows the solution path consisting of strongly regular points. Theoretical results bounding the tracking error to the second order of the parameter changes of a single time period will be presented. Experimental results over networks of various sizes, up to 9k buses, will be given, showing the fast computation time of our method while maintaining good solution quality, thus making it well suited to a real-time environment.

Computing and Calibrating Prices for Locational Variability in Power Generation and Load
Bernard Lesieutre
abstract
Computing and Calibrating Prices for Locational Variability in Power Generation and Load
Bernard Lesieutre
Fluctuations in variable power generation (e.g. wind and solar) and load are met in real time using automatic generation control (AGC). The resources that respond to AGC control are chosen using a competitive market. In typical power systems, the costs of providing this power tracking service are shared among users, but are not weighted more heavily toward those causing the variations. We cast the problem of procuring AGC resources as a chance-constrained optimization problem and calculate a locational price of variability. In this paper we consider how this locational price of variability might be calibrated to aid in computing charges for variability and in making payments to AGC resources. This approach will allocate higher prices for power fluctuations at locations where it is difficult to accommodate variations, and lower prices at locations where it is easier to do so.

Improving an optimization problem by redatuming via a TRAC approach
Franck Assous
abstract
Improving an optimization problem by redatuming via a TRAC approach
Franck Assous
Parameter estimation plays a crucial role in imaging whether in the medical field or in seismic exploration, and remains an active subject of research, with many applications such as tumor detection or stroke prevention in the case of medical imaging. In geophysics, seismic imaging is used as a tool for exploring subsoil for oil, gas or other deposits.
From the mathematical point of view, parameter estimation can be written as a PDE-constrained optimization problem, which tries to minimize the misfit between the recorded data and the reconstruction obtained from an estimated parameter. At each step of the optimization process, this estimate is updated to get closer to the original parameter to be recovered.
In this context, it is well-known that the closer one is to the location of the parameter, the easier the mathematical, and so the numerical, problem is to solve. For this reason, the literature on this subject is large, since any method that virtually moves the recording boundary closer to the parameter area can be useful. Classical approaches generally solve least-squares problems based on paraxial approximations.
In this spirit, we propose here a novel method that works directly in the time-dependent domain and uses the full wave equation. Note that it can be extended to other propagation problems, such as elastodynamics (elastic wave equation) or electromagnetism (Maxwell's equations). It basically combines an optimization method with the time-reversed absorbing condition (TRAC) method we introduced several years ago, which couples time-reversal techniques and absorbing boundary conditions. Mainly, we aim at reducing the size of the computational domain: in that sense, this can be related to redatuming methods, which reduce the size of the optimization problem by virtually moving the recording boundary. In our talk, we will describe our approach and illustrate its efficiency on numerical examples.

Impact of influenza vaccinemodified susceptibility and infectivity on the outcomes of immunization
Kyeongah Nah
abstract
Impact of influenza vaccinemodified susceptibility and infectivity on the outcomes of immunization
Kyeongah Nah
To help inform vaccine manufacturers and individuals at high risk of developing serious flu-related complications, we develop compartmental models to assess the impact of vaccinating the population with different types of vaccines. In this talk, I will present the balance between vaccine-modified susceptibility, infectivity and recovery needed to prevent an influenza outbreak, or to mitigate the health outcomes of the outbreak, using an SIRV-type disease transmission model. We will also observe the impact of an influenza vaccination program on the infection risk of vaccinated and non-vaccinated individuals.

Decision making of the population under media effects and spread of Influenza
Safia Athar
abstract
Decision making of the population under media effects and spread of Influenza
Safia Athar
Media plays a vital role in shaping the decision making of the population in an epidemic. Influenza, a disease that affects every age group and causes mortality in many cases, is always a concern for public health. The role of mass-media reports on influenza was studied by J. Heffernan et al., who employed a stochastic agent-based model to quantify the effect of mass-media reports on the variability in crucial public health measurements. In [M. Cojocaru et al.], the authors used replicator dynamics to study the players' optimal strategies over time.
In this research, we study the decision making of the population under the effects of media reports and how their decisions contribute to the control of an influenza epidemic. For this purpose, we use replicator equations to quantify the decision making of the population under the effects of disease and mass-media reports. We adapted the model given by J. Heffernan et al. and modified it by adding sub-compartments to the susceptible and vaccinated compartments. We study the movement of the population between these sub-compartments under media effects, in the presence of the risk of infection, and how these movements contribute to the incidence rate.

Stability and robustness of feedback-based optimization for the distribution grid
Marcello Colombino
abstract
Stability and robustness of feedback-based optimization for the distribution grid
Marcello Colombino
Feedback-based online optimization algorithms have gained traction in recent years because of their simple implementation, their ability to reject disturbances in real time, and their increased robustness to model mismatch. While the robustness properties have been observed both in simulation and experimental results, the theoretical analysis in the literature is mostly limited to nominal conditions. In this work, we propose a framework to systematically assess the robust stability of feedback-based online optimization algorithms. We leverage tools from monotone operator theory, variational inequalities and classical robust control to obtain tractable numerical tests that guarantee robust convergence properties of online algorithms in feedback with a physical system, even in the presence of disturbances and model uncertainty. The results are illustrated via an academic example and a case study of a power distribution system.

Modeling Hessian-vector products in nonlinear optimization
Lili Song
abstract
Modeling Hessian-vector products in nonlinear optimization
Lili Song
In this paper, we suggest two ways of calculating quadratic models for unconstrained smooth nonlinear optimization when Hessian-vector products are available. The main idea is to interpolate the objective function using a quadratic on a set of points around the current one, while concurrently using the curvature information from products of the Hessian with appropriate vectors, possibly defined by the interpolating points. These enriched interpolation conditions then form an affine space of model or inverse model Hessians, from which a particular one can be computed once an equilibrium or least-secant principle is defined.
A first approach consists of recovering the Hessian matrix satisfying the enriched interpolation conditions, from which an approximate Newton step can then be computed. In a second approach, we pose the recovery problem in the space of inverse model Hessians and calculate an approximate Newton direction without explicitly forming the inverse model Hessian. These techniques can lead to a significant reduction in the overall number of Hessian-vector products compared to the inexact Newton method, although their expensive linear algebra makes them applicable only to problems with a small number of variables.
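For context, a Hessian-vector product can be supplied without ever forming the Hessian; a common sketch (not the authors' interpolation scheme) uses a forward finite difference of the gradient, shown here on the two-dimensional Rosenbrock function:

```python
import numpy as np

def f(x):
    """Rosenbrock test function: 100*(y - x^2)^2 + (1 - x)^2."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    """Analytic gradient of the Rosenbrock function."""
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def hessvec(x, v, eps=1e-6):
    """Approximate H(x) v via (grad(x + eps*v) - grad(x)) / eps."""
    return (grad(x + eps * v) - grad(x)) / eps

x = np.array([1.2, 1.0])
v = np.array([1.0, 0.0])

# Exact Hessian for verification: H11 = 1200*x^2 - 400*y + 2, H12 = -400*x.
exact_H = np.array([[1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                    [-400.0 * x[0], 200.0]])
print(np.allclose(hessvec(x, v), exact_H @ v, rtol=1e-4))
```

Each such matrix-free product is exactly the kind of curvature information the enriched interpolation conditions above consume.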

Novel efficient algorithm for risk-set sampling in cohort studies of large administrative health databases
Salah Mahmud, Christiaan Righolt
abstract
Novel efficient algorithm for risk-set sampling in cohort studies of large administrative health databases
Salah Mahmud, Christiaan Righolt
Large ($N > 10^6$) cohort studies can be efficiently analyzed with minimal loss of statistical power by sampling a smaller risk set from the study population. The risk set consists of all events of interest that occurred over a specified period of time (observation time) and an appropriate set of randomly selected controls individually matched to each event — typically on length of observation time and other important confounding (lurking) variables. It can be shown that the resulting association statistics estimated by fitting conditional logistic regression models to the matched sample are equivalent to the corresponding estimates using the entire cohort with loss of power not exceeding 10% for most scenarios when the sampling ratio is larger than 4.
However, existing algorithms for risk-set sampling are computationally intensive and are often impractical when both the number of events and the cohort size are large, which is usually the case in cohort studies of common conditions based on large administrative health databases. They are also not suitable for matching on time-varying variables.
We developed a new algorithm, presently implemented in SAS, to efficiently match on both time-varying and time-independent variables in large databases. Improved efficiency is obtained by combining a new random sampling technique with a flexible matching procedure based on a user-defined cost function. The new sampling technique also simplifies matching on time-varying variables. We show, using simulated datasets, that the new algorithm produces valid statistical estimates, and compare its time, memory and CPU performance and resource utilization to those of the most efficient available algorithm (based on SAS's hash tables).
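To fix ideas, here is a toy sketch of risk-set sampling in Python (my own illustration with simulated data, not the authors' SAS algorithm or cost function): for each case, up to k controls are drawn at random from subjects still under observation at the case's event time, matched here on a single confounder:

```python
import random

# Simulated cohort: each subject has a sex, an observation end time
# (event time for cases, censoring time otherwise), and a case flag.
random.seed(1)
cohort = [{"id": i,
           "sex": random.choice("MF"),
           "event_time": random.uniform(0.0, 10.0),
           "case": random.random() < 0.05}
          for i in range(1000)]

def risk_set_sample(cohort, k=4):
    """For each case, sample up to k matched controls from its risk set."""
    matched = []
    for case in (s for s in cohort if s["case"]):
        # Risk set: same sex, still under observation at the event time,
        # and not the case itself.
        risk_set = [s for s in cohort
                    if s["id"] != case["id"]
                    and s["sex"] == case["sex"]
                    and s["event_time"] >= case["event_time"]]
        matched.append((case, random.sample(risk_set, min(k, len(risk_set)))))
    return matched

strata = risk_set_sample(cohort)
print(len(strata), all(len(ctrls) <= 4 for _, ctrls in strata))
```

The naive scan above is O(cases × cohort), which is exactly what becomes impractical at N > 10^6; the abstract's contribution is avoiding that cost while also handling time-varying matching variables.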

Detecting Organ Failure in Motor Vehicle Trauma Patients: A Machine Learning Approach
Neil Deshmukh
abstract
Detecting Organ Failure in Motor Vehicle Trauma Patients: A Machine Learning Approach
Neil Deshmukh
Motor vehicle accidents are prevalent throughout the U.S.; over five million crashes occur each year. Typically, patients are appropriately diagnosed only after being transported to nearby medical centers, which currently takes about 715 minutes on average. This project aims to expedite this diagnostic process by training neural networks on Intensive Care Unit (ICU) data to automate the identification of injuries caused by motor vehicle accidents.
Data was aggregated from the Beth Israel Deaconess Medical Center in Boston, Massachusetts via the Medical Information Mart for Intensive Care III (MIMICIII) database, for patients' electrocardiogram (ECG) and respiratory rate reports. Natural Language Processing was used to isolate the patients of interest and classify them based on the area of injury, i.e., any of eight combinations of the three vital organs: brain, heart, and lungs. Upon isolation of the waveform data, noise was filtered via normalization, Butterworth and forwardbackward filter application, and Fast Fourier transformation. The trained Artificial Neural Network (ANN) contained 23 dense layers with the number of neurons decreasing per two layers with ReLU activation and 10\% dropout. Multiclass activation was used for the final layer, in order to allow training for detection of multiple areas of trauma. The trained Convolutional Neural Network (CNN) had 12 convolutions, utilizing batch normalization, six pooling, one flatten, and two dense layers. Both models used AdamOptimizer.
The CNN and ANN produced F1 scores of 0.82 and 0.56, respectively, suggesting the models could perform accurate early detection of the injury area and expedite hospital treatment of trauma patients.
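The forward-backward filtering step mentioned above can be sketched in a few lines. In practice one would use a Butterworth design with a standard zero-phase routine (e.g., SciPy's `filtfilt`); the pure-Python sketch below substitutes a simple first-order low-pass filter to show only the forward-backward idea, under which the phase shifts of the two passes cancel.

```python
def lowpass(x, alpha=0.3):
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    y, prev = [], x[0]
    for v in x:
        prev = alpha * v + (1 - alpha) * prev
        y.append(prev)
    return y

def forward_backward(x, alpha=0.3):
    """Run the filter forward, then over the reversed output, so the
    phase lag of the first pass is cancelled by the second (zero-phase)."""
    forward = lowpass(x, alpha)
    return lowpass(forward[::-1], alpha)[::-1]

signal = [0, 1, 0, -1, 0, 1, 0, -1, 0]   # toy waveform
smoothed = forward_backward(signal)
```

Zero-phase filtering matters for waveform data like ECG because a one-pass filter would shift the timing of features such as QRS peaks.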

Algorithms for Optimal Design and Operation of Networked Microgrids
Harsha Nagarajan
abstract
Algorithms for Optimal Design and Operation of Networked Microgrids
Harsha Nagarajan
In recent years, microgrids, i.e., disconnected distribution systems, have received increasing interest from power system utilities as a way to support the economic and resiliency posture of their systems. The economics of long-distance transmission lines prevent many remote communities from connecting to bulk transmission systems, and these communities rely on off-grid microgrid technology. Furthermore, communities that are connected to the bulk transmission system are investigating microgrid technologies that will support their ability to disconnect and operate independently during extreme events. In each of these cases, it is important to develop methodologies that support the capability to design and operate microgrids in the absence of transmission over long periods of time. Unfortunately, such planning problems tend to be computationally difficult to solve, and those that are straightforward to solve often lack the modeling fidelity that inspires confidence in the results. To address these issues, we first develop a high-fidelity model for the design and operation of a microgrid that includes component efficiencies, component operating limits, battery modeling, unit commitment, capacity expansion, and power flow physics; the resulting model is a mixed-integer quadratically-constrained quadratic program (MIQCQP). We then develop an iterative algorithm, referred to as the Model Predictive Control (MPC) algorithm, that allows us to solve the resulting MIQCQP. We show, through extensive computational experiments, that the MPC-based method can scale to problems with very long planning horizons and provide high-quality solutions that lie within 5% of optimal.

Adaptive randomized rounding in the big parsimony problem
Sangho Shim
abstract
Adaptive randomized rounding in the big parsimony problem
Sangho Shim
A phylogenetic tree is a binary tree in which each node represents a sequence of states and all input sequences are represented at the leaf nodes. Given state sequences of the same length, the big parsimony problem constructs the most parsimonious phylogenetic tree along with a maximum-parsimony labeling of the internal nodes. The big parsimony problem is known to be NP-hard. We describe randomized rounding methods that allow us to obtain good solutions. Our first method starts with a fractional optimal solution to the LP relaxation of an integer linear programming formulation of the big parsimony problem and repeats randomized rounding based on this fractional solution without ever changing it; we refer to this as fixed randomized rounding. Solutions obtained using the fixed randomized rounding approach are superior to the best solutions obtained using branch-and-bound with Gurobi and can be obtained more quickly. We then describe an adaptive randomized rounding approach in which the underlying fractional solution changes based on the best integer solution observed so far; it produces solutions superior to those of the fixed randomized rounding approach.
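The fixed-randomized-rounding loop described above has a generic shape that is easy to sketch. The toy below applies it to a tiny vertex cover instance rather than the parsimony formulation (which would require the full ILP); the instance, trial count, and interfaces are illustrative assumptions, not the authors' code.

```python
import random

def randomized_rounding(x_frac, feasible, objective, trials=200, seed=1):
    """Fixed randomized rounding: repeatedly set variable i to 1 with
    probability x_frac[i] (its LP value) and keep the best feasible
    integer solution seen.  The fractional solution is never updated
    (contrast with the adaptive variant described in the abstract)."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(trials):
        x = [1 if rng.random() < p else 0 for p in x_frac]
        if feasible(x):
            val = objective(x)
            if val < best_val:
                best, best_val = x, val
    return best, best_val

# Toy instance: vertex cover on a triangle; the LP optimum is (0.5, 0.5, 0.5).
edges = [(0, 1), (1, 2), (0, 2)]
feasible = lambda x: all(x[u] or x[v] for u, v in edges)
objective = lambda x: sum(x)
sol, val = randomized_rounding([0.5, 0.5, 0.5], feasible, objective)
```

Each trial is an independent Bernoulli rounding of the same fractional point, so the loop is trivially parallel and its cost is dominated by the feasibility check.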


15:30 
16:00 
Coffee break

16:00 
17:30 
Parallel technical sessions
Health, Data and Optimization (Room 91)

Advances in Numerical Optimization 3 (Room 85)

Energy IV (Room 271)

Optimization and OR (Room 241)

The role of pneumonia/influenza in hospital readmissions: Burden and predictive factors via machine learning
Secil Sozuer
abstract
The role of pneumonia/influenza in hospital readmissions: Burden and predictive factors via machine learning
Secil Sozuer
The Hospital Readmissions Reduction Program (HRRP) was established by the 2010 Affordable Care Act. This program requires the Centers for Medicare and Medicaid Services to reduce reimbursements to hospitals with excessive readmissions. HRRP defines these as an admission for any cause ("all-cause readmission") occurring 30 days or less after an initial ("index") admission falling into one of several specified categories. Due to this program, readmissions are considered a quality benchmark for health systems. Here, we discuss our work and methodology on characterizing the contribution of pneumonia and influenza (P&I) to the burden of readmissions among a population of Medicare Advantage patients aged 65 and over. For the index admission, we apply the same inclusion/exclusion criteria for codes and diseases as the Centers for Medicare and Medicaid Services. We calculate the probability of P&I readmission after an index admission in the following HRRP categories: acute myocardial infarction (AMI), 2.0%; congestive heart failure (CHF), 3.0%; chronic obstructive pulmonary disease (COPD), 3.8%; type II diabetes, 1.6%. For comparison, the overall probability of readmission (all-cause readmission within 30 days of an all-cause index admission) is 13%, and the probability of P&I readmission following an all-cause index admission is 1.5%.
We will also present preliminary work on the construction of machine learning algorithms for identifying individuals at risk of P&I readmission. Predictive performance is measured by the area under the receiver operating characteristic curve (AUC-ROC).
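The AUC-ROC metric used above has a simple probabilistic reading: it is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal sketch of that computation (with made-up scores, purely for illustration):

```python
def auc_roc(scores, labels):
    """Empirical AUC-ROC: the fraction of (positive, negative) pairs
    in which the positive is scored higher; ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_roc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])   # 0.75
```

This pairwise form is exactly the area under the ROC curve, and it makes clear why AUC-ROC is insensitive to the class balance of the evaluation set.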
Funding statement: EWT and AC are employees of Sanofi Pasteur. SS is paid by Sanofi Pasteur through an internship program.

Regularized Robust Optimization for Two-Stage Stochastic Programming
Mingsong Ye
abstract
Regularized Robust Optimization for Two-Stage Stochastic Programming
Mingsong Ye
Two-stage stochastic optimization problems with parameter uncertainty are ubiquitous in many applications; coordinating distributed energy resources with intermittent capacities is one example of this class of problems. Robust optimization has recently become a popular approach to dealing with uncertainty in optimization: a feasible solution with the best worst-case performance with respect to an uncertainty set is sought. The typical assumption is that the decision maker can accurately specify the uncertainty set. In this paper, we first analyze the stability and sensitivity of robust two-stage stochastic optimization problems under general perturbations of the uncertainty set with respect to the Hausdorff distance. We then present a regularized robust optimization approach based on the new concept of a regularized uncertainty set and establish the stability of the new approach. Solving the robust counterpart problem with the regularized uncertainty set is also discussed.

Enabling A Stochastic Wholesale Electricity Market Design
Yury Dvorkin
abstract
Enabling A Stochastic Wholesale Electricity Market Design
Yury Dvorkin
Efficiently accommodating uncertain renewable and demand-side resources in wholesale electricity markets is among the foremost priorities of market regulators in the US, UK, and EU nations. However, existing deterministic market designs fail to internalize the uncertainty, and their scenario-based stochastic extensions are limited in their ability to simultaneously maximize social welfare and guarantee non-confiscatory market outcomes both in expectation and per scenario. This paper proposes a chance-constrained stochastic market design that is capable of producing a robust competitive equilibrium and internalizing the uncertainty of renewable and demand-side resources in the price formation process. The equilibrium and resulting prices are obtained under different uncertainty assumptions, which requires using either linear (restrictive assumptions) or second-order conic (more general assumptions) duality in the price formation process. The usefulness of the proposed stochastic market design is demonstrated via a case study carried out on the 8-zone ISO New England testbed.

The Inmate Transportation Problem and its Application in the PA Department of Corrections
Anshul Sharma
abstract
The Inmate Transportation Problem and its Application in the PA Department of Corrections
Anshul Sharma
The Inmate Transportation Problem (ITP) is a common, complex problem in any correctional system. We develop a weighted multi-objective mixed-integer linear optimization (MILO) model for the ITP. The MILO model optimizes the transportation of inmates within a correctional system while respecting all legal restrictions and best business practices. We test the performance of the MILO model on real datasets from the Pennsylvania Department of Corrections (PADoC) and demonstrate that the inmate transportation process at the PADoC can be significantly improved by using operations research methodologies.

A further study on the trajectory sensitivity analysis of controlled prescription opioid epidemic dynamical models
Getachew Befekadu
abstract
A further study on the trajectory sensitivity analysis of controlled prescription opioid epidemic dynamical models
Getachew Befekadu
In the context of a mathematical model describing prescription opioid epidemics, we consider a general formalism for trajectory sensitivity analysis that complements time-domain simulation in the analysis of the nonlinear dynamic behavior of a controlled prescription opioid epidemic model. In particular, we linearize around a nonlinear, and possibly nonsmooth, trajectory and study the influence of parameter variations on the epidemic dynamics through estimated sensitivities with respect to additional controlling or intervening parameters, where large (or small) trajectory sensitivities generally indicate that these parameters have significant (or negligible) effects on the epidemic's dynamical behavior. The insights gained from such trajectory sensitivity studies are useful for analyzing the underlying influences on system dynamics, as well as for assessing the consequences of parameter uncertainties. Finally, as an illustrative example, we present simulation results that demonstrate the advantages and usefulness of the proposed trajectory sensitivity study, using literature-based parameters associated with a typical prescription opioid epidemic. (Joint work with Christian Emiyah and Kofi Nyarko, Department of Electrical & Computer Engineering, Morgan State University.)
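Trajectory sensitivity can be illustrated with a finite-difference sketch: simulate the model twice, once with the nominal parameter and once with a slightly perturbed one, and difference the trajectories. The toy dynamics, parameter values, and compartment names below are illustrative assumptions, not the model from the talk.

```python
def simulate(beta, x0=0.99, y0=0.01, dt=0.01, steps=1000):
    """Forward-Euler simulation of a toy two-compartment epidemic-style
    model: susceptible fraction x, affected fraction y, with a
    transmission-like rate beta and a recovery/treatment rate gamma
    (all values illustrative)."""
    gamma = 0.1
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        dx = -beta * x * y + gamma * y
        dy = beta * x * y - gamma * y
        x, y = x + dt * dx, y + dt * dy
        traj.append(y)
    return traj

def trajectory_sensitivity(beta, delta=1e-6):
    """Finite-difference estimate of d y(t) / d beta along the trajectory."""
    base, bumped = simulate(beta), simulate(beta + delta)
    return [(b2 - b1) / delta for b1, b2 in zip(base, bumped)]

sens = trajectory_sensitivity(0.4)
```

A uniformly large sensitivity profile would flag beta as an effective intervention target, which is exactly the screening use of trajectory sensitivities described in the abstract.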

Virtual Network Function placement optimization with Deep Reinforcement Learning
Ruben Solozabal
abstract
Virtual Network Function placement optimization with Deep Reinforcement Learning
Ruben Solozabal
Network Function Virtualization (NFV) introduces a new network architecture framework that evolves network functions, traditionally deployed on dedicated equipment, into software implementations that run on general-purpose hardware. One of the main challenges in deploying NFV is the optimal placement of demanded network services in the NFV infrastructure. Virtual network function placement and network embedding can be formulated as a mathematical optimization problem with a set of feasibility constraints that express the restrictions of the network infrastructure and the contracted services. This problem has been reported to be NP-hard; as a result, most of the optimization work in the area has focused on designing heuristic and metaheuristic algorithms. Nevertheless, in highly constrained problems such as this one, these methods tend to be ineffective. A promising alternative is the use of deep neural networks to model an optimization policy. The work presented here extends Neural Combinatorial Optimization theory by considering constraints in the definition of the problem. The resulting agent is able to learn placement decisions by exploring the NFV infrastructure, aiming to minimize overall power consumption. Conducted experiments demonstrate that the proposed strategy outperforms the Gecode solver when solutions must be obtained within a limited timeframe.

Coupling Artificial Neural Networks with Chance-Constrained Optimization for Voltage Regulation in Distribution Grids
Nikolaos Gatsis
abstract
Coupling Artificial Neural Networks with Chance-Constrained Optimization for Voltage Regulation in Distribution Grids
Nikolaos Gatsis
Electricity distribution grids are envisioned to host an increasing number of photovoltaic (PV) generators. PV units are equipped with inverters that can inject or absorb reactive power, a capability that can be crucial for enabling adequate voltage regulation. This talk deals with an optimal power flow (OPF) formulation that includes probabilistic specifications that nodal voltages remain within safe bounds. The chance constraints are approximated by the conditional value-at-risk (CVaR). The solution of the resulting OPF yields optimal inverter set points for a set of uncertainty realizations. These are subsequently treated as training scenarios for an artificial neural network (ANN) corresponding to each PV inverter. The objective of each ANN is to produce the reactive power set points corresponding to other realizations of the uncertainty, in real time and in a decentralized fashion. The overall design is tested on standard test feeders, and the voltage regulation capability is numerically analyzed.
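The CVaR surrogate mentioned above has a simple empirical form: the CVaR at level alpha is the average of the worst (1-alpha) fraction of outcomes, so bounding it bounds the tail that the chance constraint cares about (conservatively, and convexly). A minimal sketch of the empirical computation, with illustrative numbers:

```python
def cvar(samples, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) fraction of samples, where larger values are worse
    (e.g., voltage-bound violations)."""
    s = sorted(samples, reverse=True)
    k = max(1, int(round((1 - alpha) * len(s))))
    return sum(s[:k]) / k

# For outcomes 1..100, the 95%-CVaR averages the worst five: 96..100.
tail_risk = cvar(list(range(1, 101)), alpha=0.95)   # 98.0
```

In the OPF setting, requiring the CVaR of the voltage-violation magnitude to be non-positive implies the chance constraint holds at level alpha, which is what makes the approximation tractable inside an optimization model.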

On the (near) Optimality of Extended Formulations for Multiway Cut in Social Networks
Sangho Shim
abstract
On the (near) Optimality of Extended Formulations for Multiway Cut in Social Networks
Sangho Shim
In the multiway cut problem we are given an edge-weighted graph and a subset of the nodes called terminals, and we are asked for a minimum-weight set of edges that separates each terminal from all the others. When the number k of terminals is two, this is simply the min-cut/max-flow problem and can be solved in polynomial time; the problem is known to be NP-hard as soon as k = 3. Among practitioners, an integer programming formulation of the problem introduced by Chopra and Owen (Mathematical Programming 73 (1996) 7-30) is empirically known to be strong on social networks, which are usually tree-like and almost planar. (We refer to the formulation as EF2, following the authors.) EF2 is particularly strong when the edge weights are all equal, so we study the cardinality minimum multiway cut problem (i.e., all edge weights are 1). We explore the max flow in the cardinality minimum multiway cut problem and show that the cardinality EF2 on a wheel graph has a primal integer solution and a dual integer solution of the same value. We then consider a hub-spoke network of wheel graphs, constructed by adding to the wheel graphs the edges of a hub graph consisting of the hub nodes of the wheel graphs and edges connecting them. Assuming that every wheel has a terminal hub or a terminal node with three non-terminal neighbors, we show that if the hub graph is planar, the cardinality EF2 on the hub-spoke network of wheel graphs has a primal integer solution and a dual integer solution of the same value. An algorithm developed by Chrobak and Eppstein (Theoretical Computer Science 86 (1991) 243-266) is modified and used for the proof.

Optimal design of vaccination catch-up programs with incentives
Monica Cojocaru and Kevin Fatyas
abstract
Optimal design of vaccination catch-up programs with incentives
Monica Cojocaru and Kevin Fatyas
Vaccines work by stimulating the body's immune system to create an immune response to various illnesses. Immunization has yielded positive results overall and has proven to be a cost-effective approach to disease prevention that ultimately improves quality of life. For newer and relatively more expensive vaccines, the decision to include them in publicly funded immunization programs largely depends on demand for the product and on taxpayers' willingness to seek out these types of health benefits. We present a simple implementation scenario, once a recommendation for the introduction of a vaccine in a target population has been made: how best to invest public funds so as to achieve two concurrent goals, minimizing the cost of the program and maximizing the overall vaccination coverage for a given budget?

Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample
Martin Takac
abstract
Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample
Martin Takac
We present two sampled quasi-Newton methods for deep learning: sampled L-BFGS (S-LBFGS) and sampled LSR1 (S-LSR1). Contrary to the classical variants of these methods, which sequentially build Hessian or inverse-Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate at every iteration to produce these approximations. As a result, the approximations are constructed using more reliable (recent and local) information and do not depend on past iterate information that could be significantly stale. We also show that these methods can be efficiently implemented in a distributed computing environment.
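The "forget the past, just sample" idea can be sketched in a few lines: build the (s, y) curvature pairs from freshly sampled points around the current iterate rather than from past iterates, then apply the standard L-BFGS two-loop recursion. The sketch below is a toy illustration on a 2-D quadratic with assumed values; it is not the authors' S-LBFGS implementation, and S-LSR1 is omitted.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy objective f(x) = 0.5*(2*x0^2 + 10*x1^2) - 4*x0 - 10*x1,
# with gradient component a_i*x_i - b_i and minimizer (2, 1).
A, B = [2.0, 10.0], [4.0, 10.0]
def grad(x):
    return [a * xi - b for a, xi, b in zip(A, x, B)]

def sampled_pairs(x, m=2, radius=0.1, rng=random.Random(0)):
    """Sample points around the current iterate and form curvature pairs
    (s, y) = (z - x, grad(z) - grad(x)) -- fresh, local information only."""
    pairs = []
    for _ in range(m):
        z = [xi + radius * rng.gauss(0, 1) for xi in x]
        s = [zi - xi for zi, xi in zip(z, x)]
        y = [gz - gx for gz, gx in zip(grad(z), grad(x))]
        pairs.append((s, y))
    return pairs

def two_loop(g, pairs):
    """Standard L-BFGS two-loop recursion: apply the inverse-Hessian
    approximation defined by the (s, y) pairs to the vector g."""
    q, alphas = g[:], []
    for s, y in reversed(pairs):
        rho = 1.0 / dot(y, s)
        a = rho * dot(s, q)
        alphas.append((a, rho, s, y))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    s_last, y_last = pairs[-1]
    gamma = dot(s_last, y_last) / dot(y_last, y_last)
    r = [gamma * qi for qi in q]
    for a, rho, s, y in reversed(alphas):
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r

x = [0.0, 0.0]
pairs = sampled_pairs(x)                   # sampled, not historical, pairs
step = two_loop(grad(x), pairs)            # quasi-Newton direction
x = [xi - si for xi, si in zip(x, step)]   # one (undamped) sampled step
```

Because the pairs are regenerated at every iterate, the approximation never carries stale curvature from far-away points, which is the motivation stated in the abstract.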

Learning Methods for Distribution System State Estimation
Ahmed Zamzam
abstract
Learning Methods for Distribution System State Estimation
Ahmed Zamzam
Distribution system state estimation (DSSE) is a core task in the monitoring and control of distribution networks. Widely used Gauss-Newton approaches are not suitable for real-time estimation: they often require many iterations to obtain reasonable results and sometimes fail to converge. Learning-based approaches hold promise for accurate real-time estimation. This talk presents a data-driven approach to "learn to initialize," that is, to map the available measurements to a point in the neighborhood of the true latent states (network voltages), which is then used to initialize Gauss-Newton. In addition, a novel physics-aware learning model is presented in which the electrical network structure is used to parametrize a deep neural network. The proposed architecture reduces the number of trainable coefficients needed to realize the mapping from the measurements to the network state by exploiting the separability of the estimation problem. This is the first approach that leverages electrical laws and grid topology to design the neural network for DSSE. We also show that the developed approaches yield superior performance in terms of stability, accuracy, and runtime compared to conventional optimization-based solvers.

Simulation-Based Optimization of Dynamic Appointment Scheduling Problem with Patient Unpunctuality and Provider Lateness
Secil Sozuer
abstract
Simulation-Based Optimization of Dynamic Appointment Scheduling Problem with Patient Unpunctuality and Provider Lateness
Secil Sozuer
Healthcare providers are under growing pressure to improve efficiency due to an aging population and increasing expenditures. This research addresses a particular healthcare scheduling problem: dynamic and stochastic appointment scheduling with patient unpunctuality and provider lateness. The stochasticity stems from uncertain patient requests, uncertain service durations, patient unpunctuality, and the amount of provider lateness. The aim is to find the optimal scheduled start times for the patients in order to minimize the expected cost incurred from patient waiting time, server idle time, and server overtime. Using perturbation analysis for gradient estimation, a Sample Average Approximation (SAA) and a Stochastic Approximation (SA) algorithm are proposed. The structural properties of the sample-path cost function and the expected cost function are studied. Numerical experiments show the computational advantages of SAA and SA over the mathematical model.
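The SAA idea in the abstract replaces an expectation by an average over drawn scenarios and then optimizes that average. The sketch below does this for a deliberately tiny version of the problem: one patient, one provider, hypothetical lateness/unpunctuality distributions and unit costs, and grid search instead of a gradient-based method. None of these modeling choices come from the talk.

```python
import random

rng = random.Random(42)
# Scenario draws (hypothetical distributions), in minutes:
# provider lateness L (nonnegative) and patient unpunctuality U.
scenarios = [(max(0.0, rng.gauss(15, 5)), rng.gauss(0, 10))
             for _ in range(2000)]

C_WAIT, C_IDLE = 1.0, 2.0   # illustrative unit costs

def saa_cost(t):
    """Sample-average cost of scheduling the appointment at time t:
    the patient arrives at t + U, the provider is free at L."""
    total = 0.0
    for L, U in scenarios:
        arrival = t + U
        total += (C_WAIT * max(0.0, L - arrival)      # patient waits
                  + C_IDLE * max(0.0, arrival - L))   # provider idles
    return total / len(scenarios)

# SAA step: optimize the sample-average objective (grid search here).
best_t = min(range(0, 41), key=saa_cost)
```

With these costs the optimal start time trades a minute of expected idle time against half as much waiting, so it lands below the mean lateness, at roughly the 1/3 quantile of L - U.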


19:00 
21:00 
Student Social
