Research -
Fusion and Inference in Surveillance Networks


Our overall goal is to design a multi-tier, multi-modal surveillance network and to develop the mathematical models and decentralized learning algorithms that permit the tracking of objects (and people) and the detection of anomalous events. In doing so, we will also need to study the theoretical stability of these distributed inference tools. Our proposed research addresses the following interrelated tasks:

  1. Sensor/Camera Management and Control
  2. Distributed and Delay-Tolerant Particle Filtering
  3. Gossip and Consensus Algorithms for Decentralized Inference

Sensor/Camera Management and Control

Sensor and camera management can be interpreted as a stochastic planning and control problem. In [C31, N2], we proposed novel, efficient reversible jump Markov chain Monte Carlo (MCMC) algorithms that are capable of addressing formidably complex control and active sensing problems. In [C29], we derived an exact EM algorithm for control problems with linear Gaussian dynamics that involve a reward function that can be modeled by a mixture of Gaussians. In [J8, BC1], we developed an alternative control approach based on response surfaces and showed that it works well on a diverse range of stochastic control and planning problems.

Distributed and Delay-Tolerant Particle Filtering

We proposed a new algorithm for efficient distributed particle filtering [C39]. In this algorithm, each participating sensor node approximates its local posterior using a Gaussian, and then a gossip algorithm is used to perform fusion and construct an approximation to the global posterior. We developed a new procedure for tracking targets when some measurements are delayed due to the wireless network [C40]. The procedure estimates the informativeness of delayed measurements and only processes those that have the potential to significantly change the filtering distribution.
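The fusion step in this scheme admits a compact sketch: in information (precision) form, the product of local Gaussian approximations reduces to network-wide averages, which is precisely the quantity a gossip stage computes. The minimal scalar example below is illustrative only; it treats the local posteriors as independent factors and ignores prior double-counting, which the algorithm in [C39] must handle.

```python
# Each node i approximates its local posterior by a Gaussian N(m_i, v_i);
# a scalar state keeps the example small. Treating the local Gaussians as
# independent factors, the fused global posterior is also Gaussian, with
#     precision P = sum_i 1/v_i,   mean = (sum_i m_i/v_i) / P.
# Both sums are n times a network-wide average (exactly what the gossip
# stage computes), so every node can recover the same global posterior
# from the gossiped averages.
means     = [1.00, 1.20, 0.80, 1.10, 0.95]   # local posterior means
variances = [0.50, 0.40, 0.60, 0.50, 0.45]   # local posterior variances
n = len(means)

precisions = [1.0 / v for v in variances]              # per-node contributions
infos      = [m / v for m, v in zip(means, variances)]

avg_prec = sum(precisions) / n   # fixed point of the gossip iteration
avg_info = sum(infos) / n

fused_var  = 1.0 / (n * avg_prec)   # global posterior variance
fused_mean = avg_info / avg_prec    # global posterior mean
```

Note that the fused variance is smaller than every local variance, reflecting the information gained by combining all sensors.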

Gossip and Consensus Algorithms for Decentralized Inference

Multiscale Gossip: We proposed a new algorithm for fast decentralized averaging based on performing gossip at multiple scales in a network and analyzed the convergence rates [C38].

Accelerated Distributed Averaging: We created an approach to accelerate distributed averaging through the use of a very small amount of local node memory [C34, J11]. The approach leads to much faster convergence rates; our theoretical analysis characterizes how the convergence rates depend on the network topology and size.

Greedy Gossip with Eavesdropping (GGE): We published work describing and theoretically characterizing a greedy gossip algorithm that for practical network architectures has a significantly reduced convergence time compared to previous techniques [C30, C33, J12].

Selective Gossip: We also identified a selective gossip algorithm that facilitates distributed non-linear approximation; we proved that the algorithm converges and demonstrated that it reduces communication requirements [C37].


  1. Sensor/Camera Management and Control
  2. Decentralized Particle Filters for Tracking Multiple Objects
  3. Gossip and Consensus Algorithms for Decentralized Inference

Sensor and Camera Management and Control

A primary task in surveillance networks is the allocation of resources. The decision of which sensors or cameras to activate is important when devices have energy constraints. If the sensors are adjustable or mobile, for example cameras with pan-tilt-zoom capability or sensors such as laser range finders mounted on UAVs, then there is the additional challenge of deciding where to focus attention and how to coordinate measurements.

Controlling the Sensors – Stochastic Control Approaches: Sensor management can be interpreted as a stochastic planning and control problem. A recently proposed formulation of this problem as one of parameter estimation for suitable artificial statistical models has led to the adoption of inference algorithms. At the algorithmic level, the focus has been on developing Expectation-Maximization (EM) algorithms. In [C8], we made the crucial observation that the stochastic control problem can be reinterpreted as one of trans-dimensional inference. With this new understanding, we were able to propose a novel reversible jump Markov chain Monte Carlo (MCMC) algorithm that is more efficient than its EM counterparts. Moreover, it enables us to carry out full Bayesian policy search, without the need for gradients and with a single Markov chain. The new approach involves sampling directly from a distribution that is proportional to the reward and, consequently, performs better than classic simulation methods in situations where high reward is a rare event.
One of the key theoretical problems underlying stochastic control is establishing the existence of, and approximating, optimal controls for non-linear diffusions. In [J9] we showed how to optimally control linear-quadratic non-linear diffusions with no time-discretization error and, for more complex problems where other methods do not work well, how to compute the optimal control with a controlled time-discretization error. We used a combination of sequential Monte Carlo (SMC) methods and exact simulation of diffusions. We illustrated our methods on both high- and low-dimensional problems, and established that our approach outperforms standard Monte Carlo methods.

Camera Control – Active Learning: A very important issue in the management of cameras is where to look in a scene in order to infer a good model of the location of targets. The camera cannot observe the entire scene, so it must move its focus, and it is important to plan the associated motions, in conjunction with solving the computer vision problems of object detection, tracking and recognition. We have addressed this problem in [C19], developing a sequential decision-making algorithm for gaze planning. In [C9], we developed a novel approach to address the more general, foundational problem of active learning in the presence of discrete observations.

Decentralized Particle Filters for Tracking Multiple Objects

In order to provide relevant information to a city simulator, we must determine how many objects are present in a scene and the nature of their motion. In the past we have developed strategies for tracking objects based on a single image sequence, but the problem is considerably more complex when dealing with multiple sensors and requires the development of new mathematical techniques. Devices must cooperate to identify individual objects and track their movement. Fusing together the information provided by multiple sensors is a challenging task. Transferring the raw data to a central inference site is unattractive due to the high energy cost of the communication. We are exploring distributed particle filtering techniques that exchange only a concise probabilistic representation of the local information state (how many objects there are and where they are located).

Distributed Particle Filtering: We have conducted a theoretical analysis of the stability of the leader node particle filter [C22]. The conference paper was one of three nominations for the Best Student Paper Award at the ISIF 2008 International Conference on Fusion. The leader node particle filter is a partially distributed approach to tracking in a sensor network, in which the node performing the particle filter computations (the leader node) changes over time. The primary advantage is that the position of the leader node can follow the target, improving the efficiency of data collection. When the leader node changes, the particle filter must be communicated to the new leader. Exchanging a complete representation can require many thousands of bits, so the filtering distribution is more coarsely approximated, either by transmitting only a subset of particles or by training a parametric model. The concern is that this approximation could lead to instability and eventually tracking failure. The major contribution of our work is the development of bounds on the error of the leader node particle filter. In contrast to previous bounds, which grow exponentially over time, the bounds indicate convergent behaviour, as commonly observed in simulations. In [C35, JS1] we extended this analysis to the more general problem of particle filtering with approximation steps (subsampling or parametric approximation) and provided tighter bounds.

Tracking Multiple Targets: Tracking multiple targets with uncertain target dynamics is a difficult problem, especially with nonlinear state and/or measurement equations. With multiple targets, representing the full posterior distribution over target states is not practical. The problem becomes even more complicated when the number of targets varies, in which case the dimensionality of the state space itself becomes a discrete random variable. The probability hypothesis density (PHD) filter, which propagates only the first-order statistical moment (the PHD) of the full target posterior, has been shown to be a computationally efficient solution to multitarget tracking problems with a varying number of targets. The integral of the PHD over any region of the state space gives the expected number of targets in that region. With maneuvering targets, detecting and tracking changes in the target motion model also become important. The target dynamic model uncertainty can be resolved by assuming multiple models for the possible motion modes and then combining the mode-dependent estimates in a manner similar to the one used in the interacting multiple model (IMM) estimator. In [J4], we proposed a multiple-model implementation of the PHD filter, which approximates the PHD by a set of weighted random samples propagated over time using sequential Monte Carlo (SMC) methods. The resulting filter can handle nonlinear, non-Gaussian dynamics with uncertain model parameters in multisensor multitarget tracking scenarios. In [J3], we addressed the problem of tracking multiple spawning targets with multiple finite-resolution sensors and presented a new algorithm for measurement-to-track association with possibly unresolved measurements.
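One property mentioned above translates directly into the SMC implementation: when the PHD is represented by weighted particles, the expected number of targets in a region is simply the sum of the weights of the particles falling in that region. A tiny hypothetical example (the particle positions and weights are made up for illustration):

```python
# Particle representation of a 1-D PHD: list of (position, weight) pairs.
# The integral of the PHD over a region becomes a sum of weights.
particles = [(0.5, 0.6), (1.2, 0.5), (3.8, 0.9), (4.1, 0.7), (9.0, 0.3)]

def expected_targets(particles, lo, hi):
    """Expected number of targets in [lo, hi]: sum of weights there."""
    return sum(w for x, w in particles if lo <= x <= hi)

n_total = expected_targets(particles, 0.0, 10.0)  # whole state space: 3.0
n_left  = expected_targets(particles, 0.0, 2.0)   # left region: 1.1
```

This is why the particle weights in a PHD filter need not sum to one: their total is the expected target count, not a probability.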

Model Uncertainty: In most solutions to target tracking, it is generally assumed that the state transition and measurement models are known a priori. However, there are situations where the model parameters or the model structure itself are not known a priori or are known only partially. In these scenarios, standard estimation algorithms like the Kalman filter and the extended Kalman filter (EKF), which assume perfect knowledge of the model parameters, are not accurate. In [J5], we addressed the nonlinear state estimation problem with possibly non-Gaussian process noise in the presence of a certain class of measurement model uncertainty. We showed that the problem can be considered as a special case of maximum-likelihood estimation with incomplete data and proposed an EM-type algorithm that casts the problem in a joint state estimation and model parameter identification framework. The proposed procedure, called the EM-PF (expectation-maximization particle filter) algorithm, was used to solve a highly nonlinear bearing-only tracking problem, where the model structure is assumed unknown a priori. We demonstrated that the algorithm is capable of modeling the observations and accurately tracking the state vector.

Radio-Frequency (RF) Tomography: We are exploring methods for localization and tracking of objects using measurements of signal strength in wireless networks. RF tomography methods are imaging algorithms that take measurements of received signal strength (RSS) on the wireless links connecting many nodes and reconstruct a map of signal attenuation. We have conducted initial work in [C28, C32], where we used sparse models to improve estimation performance.

Gossip and Consensus Algorithms for Decentralized Inference

Gossip and consensus algorithms are robust decentralized computational methods for networked systems. Nodes iteratively perform local updates based on information received from their immediate neighbors. As the number of iterations grows, all nodes in the network converge to the same value or function. Gossip algorithms can be useful for decentralized control and flocking of UAVs since they are resilient to time-varying topologies and communication errors.
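As a concrete illustration of the local-update idea, here is a minimal pairwise randomized gossip sketch; this is a generic textbook scheme, and the topology, seed, and iteration count are arbitrary choices rather than settings from our work:

```python
import random

def randomized_gossip(values, neighbors, n_iters, seed=1):
    """Pairwise randomized gossip: at each iteration a random node wakes
    up, picks a random neighbor, and the pair replace their values with
    the pairwise average. The sum (hence the average) is preserved at
    every step, and on any connected topology all values converge to the
    network-wide mean."""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(n_iters):
        i = rng.randrange(len(vals))
        j = rng.choice(neighbors[i])
        vals[i] = vals[j] = 0.5 * (vals[i] + vals[j])
    return vals

# Six sensor nodes on a ring, each holding one local measurement.
n = 6
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
measurements = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]

result = randomized_gossip(measurements, ring, n_iters=20000)
# Every entry of result is (numerically) the global average of measurements.
```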

Greedy Gossip with Eavesdropping (GGE): We have developed a new algorithm for reaching consensus through local gossip-type communication in a wireless surveillance network [C20, C24]. Our new algorithm, Greedy Gossip with Eavesdropping (GGE), is a randomized gossip algorithm that exploits the broadcast nature of wireless communications to converge rapidly on grid-like network topologies without requiring that nodes know their geographic locations. When a node decides to gossip, rather than choosing one of its neighbors randomly, it greedily chooses to gossip with the neighbor whose values are most different from its own. We have proved that GGE converges to the average consensus on connected network topologies and outperforms standard randomized gossip in its speed of convergence. We have characterized the rate of convergence in terms of a topology-dependent constant analogous to the second-largest-eigenvalue characterization for previous randomized gossip algorithms [C30, C33, J12]. Simulations demonstrate that the convergence rate of GGE is superior to that of existing average consensus algorithms such as geographic gossip.
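The greedy neighbor-selection rule at the heart of GGE can be sketched as follows. The topology and parameters are illustrative, and the cached neighbor values that eavesdropping provides in a real wireless deployment are simulated here by direct access to the value array:

```python
import random

def gge(values, neighbors, n_iters, seed=2):
    """Greedy Gossip with Eavesdropping (sketch): because wireless
    transmissions are broadcast, each node overhears and caches its
    neighbors' current values. When node i wakes up, it gossips with
    the neighbor whose value differs most from its own, instead of a
    randomly chosen one."""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(n_iters):
        i = rng.randrange(len(vals))
        # Greedy choice: the neighbor with the largest disagreement.
        j = max(neighbors[i], key=lambda k: abs(vals[k] - vals[i]))
        vals[i] = vals[j] = 0.5 * (vals[i] + vals[j])
    return vals

# A small grid-like (circulant) topology with 10 nodes.
n = 10
topo = {i: [(i - 1) % n, (i + 1) % n, (i - 2) % n, (i + 2) % n]
        for i in range(n)}
data = [float(i) for i in range(n)]   # network average: 4.5
out = gge(data, topo, n_iters=4000)
```

Averaging with the most-different neighbor maximizes the per-iteration decrease of the disagreement, which is the source of the speedup over uniformly random neighbor selection.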

Accelerated Consensus: We have published an approach to accelerate local, linear iterative network algorithms that asymptotically achieve distributed average consensus [J7, C18]. Our work focuses on the class of algorithms in which each node initializes its “state value” to the local measurement and then, at each iteration, updates this state value by adding a weighted sum of its own and its neighbors’ state values. Provided the weight matrix satisfies certain convergence conditions, the state values asymptotically converge to the average of the measurements, but the convergence is generally slow, impeding the practical application of these algorithms. In order to improve the rate of convergence, we propose a novel method where each node employs a linear predictor to predict future node values. The local update then becomes a convex (weighted) sum of the original consensus update and the prediction; convergence is faster because redundant states are bypassed. The method is linear and poses a small computational burden. We conducted a theoretical analysis of this algorithm in [C34, J11] and provided a characterization of its convergence behaviour, which demonstrated that the improvement over standard consensus algorithms scales with the size of the network.
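One simple momentum-style instance of predictor-based acceleration is sketched below; the exact predictor and mixing weights used in [J7, C18] differ, so this is only meant to convey why mixing the consensus update with a linear extrapolation from past states speeds convergence. The weight matrix is the Metropolis matrix of a 5-node path graph:

```python
def matvec(W, x):
    """Dense matrix-vector product on plain lists."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

# Metropolis weight matrix for the 5-node path graph 0-1-2-3-4.
W = [[2/3, 1/3, 0,   0,   0  ],
     [1/3, 1/3, 1/3, 0,   0  ],
     [0,   1/3, 1/3, 1/3, 0  ],
     [0,   0,   1/3, 1/3, 1/3],
     [0,   0,   0,   1/3, 2/3]]
x0 = [10.0, 2.0, 7.0, 1.0, 5.0]   # local measurements, average 5.0
T, gamma = 60, 0.3                # gamma: predictor mixing weight (assumed)

# Standard linear consensus: x(t+1) = W x(t).
x = list(x0)
for _ in range(T):
    x = matvec(W, x)
std_err = max(abs(v - 5.0) for v in x)

# Accelerated variant: mix the consensus update with a linear
# extrapolation from the previous state. Because W is doubly stochastic,
# the update (1 + gamma) W x(t) - gamma x(t-1) still preserves the average.
x_prev, x = list(x0), matvec(W, x0)
for _ in range(T - 1):
    wx = matvec(W, x)
    x_prev, x = x, [(1 + gamma) * wxi - gamma * xpi
                    for wxi, xpi in zip(wx, x_prev)]
acc_err = max(abs(v - 5.0) for v in x)
# acc_err is orders of magnitude smaller than std_err after the same
# number of weight-matrix multiplications.
```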


  1. Tracking of Multiple Objects in Video Sequences
  2. Anomaly Detection using Minimum Volume Sets
  3. Controlling the Cameras: Stochastic Control Approaches
  4. Calibration: Consensus Algorithms and Bootstrapping

Tracking of Multiple Objects in Video Sequences

We have experimented with a particle filtering algorithm for velocity estimation in image sequences. In particular, we considered the situation where there is little prior knowledge, so we cannot parameterize the exact appearance of the tracked objects or provide a good initialization to the motion model. We introduced a feature extraction mechanism based on a modified Hough transform, which strives to identify the pixels that belong to moving objects. We described a modified algorithm for particle filtering that uses not only the traditional "tracking" particles, but also injects "search" particles, which enable the detection of new objects that enter the scene. We treated the velocity of each object as an unknown parameter and estimated this parameter through an evidence maximization procedure. Results based on synthetically generated data and low-resolution camera footage indicate that the modified particle filter performs well. It detects moving objects soon after they enter the scene and assigns tracking resources to them. The velocity of moving objects is estimated on-line, and the positions and sizes of objects are determined by identifying spikes in the posterior density. In future work, we will explore the incorporation of a feedback loop in the filter, which will allow it to learn the areas in the scene where motion occurs. Relevant publications: T1.
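The search-particle mechanism can be conveyed with a minimal 1-D sketch. The scene size, noise levels, and search fraction below are hypothetical, and the feature-extraction and evidence-maximization components of the full algorithm are omitted:

```python
import math
import random

rng = random.Random(3)
SCENE = (0.0, 100.0)                  # 1-D field of view (illustrative)
N, SEARCH_FRAC, SIGMA = 300, 0.1, 2.0

def step(particles, z):
    """One bootstrap-filter step with injected search particles."""
    # Propagate "tracking" particles with a random-walk motion model.
    moved = [p + rng.gauss(0.0, 1.0) for p in particles]
    # Replace a fraction of particles with "search" particles spread over
    # the scene, so that newly entering objects can be detected.
    for _ in range(int(SEARCH_FRAC * len(moved))):
        moved[rng.randrange(len(moved))] = rng.uniform(*SCENE)
    # Weight by a Gaussian measurement likelihood and resample.
    w = [math.exp(-0.5 * ((p - z) / SIGMA) ** 2) for p in moved]
    if sum(w) == 0.0:                 # degenerate case: keep all particles
        w = [1.0] * len(moved)
    return rng.choices(moved, weights=w, k=len(moved))

# An object enters at position 70 and drifts right at 0.5 units per frame.
particles = [rng.uniform(*SCENE) for _ in range(N)]
for t in range(30):
    z = (70.0 + 0.5 * t) + rng.gauss(0.0, SIGMA)   # noisy observation
    particles = step(particles, z)

estimate = sum(particles) / len(particles)   # near the true position 84.5
```

The uniform search particles keep a small portion of the probability mass spread over the scene, so an object appearing anywhere is quickly picked up by the weighting step; the remaining tracking particles then concentrate on it.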

Anomaly Detection using Minimum Volume Sets

We have developed an approach for sparse on-line kernel density estimation that can be used for anomaly detection. The technique achieves good accuracy while minimizing computational and storage requirements, and it can be implemented in a distributed fashion. The methodology is particularly appropriate for application to non-stationary data sources, when the underlying probability distribution generating the data is slowly evolving over time, and it becomes necessary to track "normality" rather than estimate it from a block of data. We have applied the approach to determine its effectiveness in detecting traffic jams in sequences of camera images of highways. Relevant publications: C2.
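A minimal sketch of the idea, with exponential forgetting for non-stationarity and a kernel-merging rule to enforce sparsity, is given below; the merging heuristic, threshold, and parameters are illustrative assumptions rather than the published method:

```python
import math
import random

class SparseOnlineKDE:
    """Sketch of sparse on-line kernel density estimation with
    exponential forgetting. Sparsity is enforced by merging the two
    closest kernel centers whenever the budget is exceeded."""

    def __init__(self, bandwidth=0.5, max_kernels=25, decay=0.995):
        self.h, self.max_k, self.decay = bandwidth, max_kernels, decay
        self.centers, self.weights = [], []

    def density(self, x):
        if not self.centers:
            return 0.0
        norm = sum(self.weights) * math.sqrt(2.0 * math.pi) * self.h
        return sum(w * math.exp(-0.5 * ((x - c) / self.h) ** 2)
                   for c, w in zip(self.centers, self.weights)) / norm

    def update(self, x):
        # Down-weight old data so the model tracks a drifting source.
        self.weights = [w * self.decay for w in self.weights]
        self.centers.append(x)
        self.weights.append(1.0)
        if len(self.centers) > self.max_k:
            self._merge_closest()      # enforce the sparsity budget

    def _merge_closest(self):
        # Replace the two closest centers by their weighted mean.
        n = len(self.centers)
        i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda p: abs(self.centers[p[0]] - self.centers[p[1]]))
        wi, wj = self.weights[i], self.weights[j]
        self.centers[i] = (wi * self.centers[i] + wj * self.centers[j]) / (wi + wj)
        self.weights[i] = wi + wj
        del self.centers[j]
        del self.weights[j]

# Train on "normal" data; flag points of low estimated density as anomalies.
rng = random.Random(4)
kde = SparseOnlineKDE()
for _ in range(500):
    kde.update(rng.gauss(0.0, 1.0))

THRESHOLD = 0.01   # density below this is declared anomalous (assumed)
def is_anomaly(x):
    return kde.density(x) < THRESHOLD
```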

Controlling the Cameras: Stochastic Control Approaches

We have developed a new approach to solving infinite-horizon stochastic control problems. It consists of reinterpreting the control problem as an artificial and non-standard statistical inference problem. This approach allows us to reuse, in the framework of control, all the computational tools developed for inference. In particular, for linear Gaussian models with non-quadratic rewards, we have shown that it is possible to easily find a locally optimal controller using the celebrated Expectation-Maximization algorithm. For general non-linear, non-Gaussian dynamic models, we have developed a trans-dimensional Markov chain Monte Carlo algorithm to solve the control problem. These algorithms have been tested on toy examples and have demonstrated performance equivalent to state-of-the-art approaches. We are currently working on more realistic applications. Relevant publications: C7, J1.

We have also been working on the problem of optimal observer trajectory planning for partially observed models using receding horizon control. The problem consists of dynamically optimizing the locations of sensors so as to maximize, according to a relevant metric, the tracking ability of the system. We have developed a methodology combining the particle filter with two-time-scale stochastic approximation. We are currently extending this methodology in several directions. First, we want to follow a Bayesian approach and bypass the use of a stochastic gradient algorithm. Second, we want to address the case of multiple sensors: our approach is currently applicable to such scenarios but scales relatively poorly. Relevant publications: C3, C5, C8.

Calibration: Consensus Algorithms and Bootstrapping

Distributed average consensus is the task of calculating the average of a set of measurements made at different locations through the exchange of local messages. The goal is to avoid the need for complicated networks with routing protocols and specialized topologies, while ensuring that the final average is available at every node. This can be an important task in camera networks, forming the foundation for localization and calibration approaches. We are currently exploring two important aspects of distributed consensus. The first involves quantized communication: when the measured data is high-dimensional or voluminous, or there are severe bandwidth or energy constraints, it becomes necessary to quantize the exchanged data values to reduce the communication overhead (this is the case in camera networks). We have developed an algorithm for quantized consensus based on probabilistic quantization. Our algorithm has the attractive feature of guaranteed convergence to a consensus point (one of the quantization values). Due to its probabilistic nature, the algorithm does not always converge to the same value, but the expectation of the consensus value is equal to the original average. We have characterized the convergence rate and the variance of the error after consensus is achieved.

In our other research into consensus, we have investigated approaches to accelerate the rate of convergence through bypassing redundant states. We achieve this by incorporating a linear predictive step in the algorithm; our approach uses only local information but results in a significant performance improvement. Relevant publications: C4, J7, N1, T2.
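The probabilistic quantization idea can be illustrated in a few lines: a value is rounded up or down to a neighboring quantization level with probabilities chosen so the quantizer is unbiased, which is the property that keeps the expected consensus value equal to the true average. The levels and inputs below are arbitrary illustrative choices:

```python
import random

def prob_quantize(x, levels, rng):
    """Quantize x to one of its two bracketing levels, choosing the
    upper level with probability proportional to x's position in the
    interval, so that E[q(x)] = x exactly (an unbiased quantizer)."""
    for lo, hi in zip(levels, levels[1:]):
        if lo <= x <= hi:
            p_up = (x - lo) / (hi - lo)
            return hi if rng.random() < p_up else lo
    raise ValueError("x outside the quantizer range")

rng = random.Random(5)
levels = [0.0, 1.0, 2.0, 3.0, 4.0]
samples = [prob_quantize(2.3, levels, rng) for _ in range(20000)]
mean_q = sum(samples) / len(samples)
# Every sample is 2.0 or 3.0, yet mean_q is close to 2.3: the quantizer
# is unbiased even though each individual message is coarsely quantized.
```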

We have also developed a procedure for "bootstrapping" a particle filter that learns the observation model while tracking is performed. This is important in situations where the precise nature of the sensor devices is unknown, or where there is the potential for calibration errors to develop over time. Relevant publications: C1.