From: Lisa Nolder <[log in to unmask]>
Date: Mon, 7 Apr 2014 09:16:48 -0400
Notice and Invitation

Department of Systems Engineering and Operations Research
Volgenau School of Engineering, George Mason University

Keith W. DeGregory

Bachelor of Science, Systems Engineering, United States Military Academy, 1997
Master of Arts, Management, Webster University, 2002
Master of Science, Operations Research, Massachusetts Institute of Technology, 2007

An Approximate Dynamic Program for Allocating
Federal Air Marshals in Real-Time Under Uncertainty

Wednesday, April 30, 2014, 2:00 pm
Engineering Bldg., Room 2091

Rajesh Ganesan, Chair
Alexander Brodsky
Andrew Loerch
Lance Sherry


The Federal Air Marshal Service provides front-line security in homeland 
defense by protecting civil aviation from potential terrorist attacks. 
Unique challenges arise in maximizing the effective deployment of a 
limited number of air marshals to cover the risk posed by potential 
terrorists on nearly 30,000 daily domestic and international flights. 
Some of this risk is stochastic in nature (e.g., a last-minute ticket 
sale to a suspicious individual), and pre-scheduled air marshal 
deployments cannot respond to risk that emerges in near-real time. 
This dissertation proposes forming a quick reaction force specifically 
to address stochastic risk, together with a means of allocating that 
force in near-real time to optimize risk coverage.

The dynamic allocation of reactionary air marshals involves sequential 
decision making under uncertainty with limited lead time. This 
dissertation proposes an approximate dynamic program (ADP) to assist 
schedulers in dynamically allocating air marshals in near-real time. 
Approximate dynamic programming is a form of reinforcement learning 
that seeks optimal decisions by accounting for future impacts rather 
than optimizing only short-term rewards. The system is modeled as a 
Markov decision process. Because of the large number of variables and 
the complexity of the environment, explicitly storing every state and 
its value is not possible. A value function approximation scheme 
mitigates this scalability issue by removing the need to store state 
values. The study demonstrates that marshal allocation in near-real 
time is possible using an ADP methodology and yields improved coverage 
of stochastic risk over both the myopic approach and pre-scheduling.
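The approach described above can be sketched with a toy model. The following is a minimal illustration, not the dissertation's actual formulation: it assumes one candidate flight per decision epoch with a uniform random risk score, a small pool of reactionary marshals, and a linear value function approximation V(t, m) ~ theta[t] * m, where theta[t] estimates the marginal value of one remaining marshal at epoch t. Because only the theta vector is stored, no table of state values is needed. All names, parameters, and distributions here are illustrative assumptions.

```python
import random

T = 10        # decision epochs (one candidate flight per epoch; assumed)
M = 3         # reactionary air marshals in the quick reaction force (assumed)
ALPHA = 0.05  # smoothing step size for value updates

# Linear value function approximation: V(t, m) ~= theta[t] * m, where
# theta[t] is the estimated marginal value of one remaining marshal at
# epoch t.  This stands in for explicit storage of every state's value.
theta = [0.0] * (T + 1)  # theta[T] = 0: no value beyond the horizon

def state_value(t, m, r):
    """Approximate value at epoch t with m marshals left and risk r observed."""
    if m <= 0:
        return 0.0
    assign = r + theta[t + 1] * (m - 1)  # cover this flight now
    hold = theta[t + 1] * m              # keep all marshals for later flights
    return max(assign, hold)

def train(episodes=5000, seed=42):
    rng = random.Random(seed)
    for _ in range(episodes):
        m = M
        for t in range(T):
            r = rng.random()
            if m >= 1:
                # Sampled marginal value of one marshal; smooth it into theta.
                marginal = state_value(t, m, r) - state_value(t, m - 1, r)
                theta[t] = (1 - ALPHA) * theta[t] + ALPHA * marginal
            # Step forward under the current greedy policy.
            if m > 0 and r >= theta[t + 1]:
                m -= 1

def evaluate(policy, episodes=2000, seed=7):
    """Average risk covered per episode under the given assignment policy."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        m = M
        for t in range(T):
            r = rng.random()
            if m > 0 and policy(t, r):
                total += r
                m -= 1
    return total / episodes

train()
adp_avg = evaluate(lambda t, r: r >= theta[t + 1])  # assign only if risk beats future value
myopic_avg = evaluate(lambda t, r: True)            # spend marshals on the first flights seen
print(f"ADP coverage: {adp_avg:.3f}   myopic coverage: {myopic_avg:.3f}")
```

In this sketch the ADP policy holds marshals back when the approximate future value exceeds the current flight's risk, while the myopic policy spends them on the first flights it sees, which mirrors the coverage comparison reported in the abstract.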