mprime Seminar Series: Abstracts

Abstracts for the Talks in the MITACS Seminar Series on Camera Networks and Decentralized Processing

Trends in Information Fusion

Abstract: Information fusion has largely been pushed and researched by defense organizations seeking to collect, organize, and exploit data. As the world becomes flat and the internet drives communication, the commercial world and defense organizations are converging in their applications of information fusion. For example, defense applications and operations now take place in urban environments, collect data on human behavior and culture for maintaining health, and deliver information for inter-country coordination. This talk will explore recent trends in information fusion, with examples drawn from image fusion for target recognition and medical applications, tracking in urban environments, and the use of culture for decision support.

 

Modelling self-organizing networks with a hidden metric

Abstract: Over the last decade, there has been intense research interest in modelling real-world self-organizing networks. The most famous such network is the web graph, whose nodes represent web pages, and edges correspond to links between pages. Current models for self-organizing networks mainly aim to reproduce a number of graph-theoretical properties found in real-world networks, such as power law degree distributions or the small world property. On the other hand, experimental and heuristic treatments of real-world networks operate under the tacit assumption that the network is a visible manifestation of an underlying hidden reality. For example, it is commonly assumed that communities in a social network can be recognized as densely linked subgraphs, or that web pages with many common neighbours contain related topics. Such assumptions imply that there is an a priori community structure or relatedness measure of the nodes, which is reflected by the link structure of the network. A common method to represent relatedness of objects is by an embedding in a metric space, so that related objects are placed close together, and communities are represented by clusters of points. After an introduction to self-organizing networks, I will introduce new models where the nodes correspond to points in space, and the stochastic process forming the network is influenced by the position of the nodes in the space. This is joint work with Bill Aiello, Colin Cooper, Jeannette Janssen, and Pawel Pralat.
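
To give a concrete feel for graphs driven by a hidden metric, here is a small illustrative sketch (not the specific model of the talk; every parameter choice below is an assumption): nodes are random points in the unit square, and a link between two nodes appears with a probability that decays with their distance, so that clusters of nearby points become densely linked.

```python
import random
import math

def spatial_graph(n=200, base_prob=0.9, decay=10.0, seed=0):
    """Toy spatial random graph: nodes are points in the unit square,
    and the probability of a link decays with Euclidean distance."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if rng.random() < base_prob * math.exp(-decay * d):
                edges.append((i, j))
    return points, edges

points, edges = spatial_graph()
print(f"{len(points)} nodes, {len(edges)} edges")
```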

 

Invariant-based Face Recognition

Abstract: After a brief review of recent striking applications of algebra to engineering and computer science, the currently significant problem of face recognition is addressed. We introduce a new approach to obtaining invariants of Lie groups adapted to this problem and describe its success in implementations.

 

Newtrax Wireless Sensor Network (WSN) solutions for underground mines

Abstract: Underground mines are harsh industrial environments with unique telecom requirements and constraints. Voice, data and location services can improve productivity and safety, but deploying a reliable telecom infrastructure in a continuously expanding network of underground tunnels is an expensive endeavor: tunnels are waveguides that constrain RF propagation, power is not readily available and skilled labor on site is in short supply. This presentation will provide an overview of the various telecom systems deployed underground, their limitations and the complementarity of Newtrax WSN solutions.

 

Spectral Modeling and the Geometry of Color Perception

Abstract: In this talk I will introduce a statistical model of spectra, and describe an approach for characterizing the space of such models. Using tools from differential geometry we can determine the metric of the space, allowing distances and similarity measures to be specified. I will show how good matches to empirical color appearance data can be obtained with this theoretical approach.

 

The Expected Auxiliary Variable Method for Bayesian Computation

Abstract: The expected auxiliary variable method is a general framework for Monte Carlo simulation in situations where quantities of interest are intractable and prevent the implementation of classical methods. The method finds application where marginal computations are of interest, where transdimensional move design is difficult in model selection setups, or when the normalising constant of a particular distribution is unknown but required for exact computations. I will present several examples of applications of this principle.

 

Information-Driven Inference under Resource Constraints

Abstract: In many inference problems resource constraints preclude the utilization of all available measurements. Consequently, active approaches seek to choose the subset of measurements that maximizes the expected information gain for an underlying inference task. By way of example, state estimation in distributed sensor networks presents a fundamental trade-off between the value of information contained in a distributed set of measurements versus the resources expended to acquire them, fuse them into a model of uncertainty, and then transmit the resulting model. Approximate approaches have been proposed that treat a subset of these issues; however, the approaches are indirect and usually consider at most one or two future time steps. I will discuss a method which enables long time-horizon sensor planning in the context of state estimation with a distributed sensor network. The approach integrates the value of information discounted by resource expenditures over a rolling time horizon. Simulation results demonstrate that the resulting algorithm can provide similar estimation performance to that of greedy and myopic methods for a fraction of the resource expenditures.

Additionally, determining the optimal subset of measurements in such cases is known to have exponential complexity. I will present performance guarantees for tractable greedy measurement selection algorithms in sequential estimation problems. It is shown that optimal choices can yield no better than twice the performance of greedy choices for information-theoretic measures such as mutual information and Fisher information. The bound can be shown to be sharp. Additional on-line computable bounds, often tighter in practice, are presented as well.

This is joint work with Jason L. Williams.
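
To make the flavour of greedy, information-driven measurement selection concrete, the sketch below (my own illustration under a linear-Gaussian assumption, not the algorithm of the talk) greedily picks k scalar sensors to maximize mutual information with the state; the gain of each candidate is the log-determinant reduction of the posterior covariance, and submodularity of mutual information is what underlies near-optimality guarantees of the kind described above. The prior, sensor matrix, and noise levels are assumptions for illustration.

```python
import numpy as np

def greedy_sensor_selection(P0, H, R, k):
    """Greedily choose k rows of H (sensors) to maximize mutual information
    with the state, for a linear-Gaussian model y_i = H[i] @ x + noise(R[i]).
    The MI gain of sensor i given posterior covariance P is
    0.5 * log(1 + H[i] @ P @ H[i] / R[i])."""
    P = P0.copy()
    chosen = []
    remaining = set(range(H.shape[0]))
    for _ in range(k):
        gains = {i: 0.5 * np.log1p(H[i] @ P @ H[i] / R[i]) for i in remaining}
        best = max(gains, key=gains.get)
        chosen.append(best)
        remaining.remove(best)
        h = H[best]
        # Rank-one Kalman covariance update for the selected measurement.
        P = P - np.outer(P @ h, P @ h) / (h @ P @ h + R[best])
    return chosen, P

rng = np.random.default_rng(0)
P0 = np.eye(4)                     # prior covariance of a 4-dimensional state
H = rng.normal(size=(10, 4))       # 10 candidate scalar sensors (assumed)
R = np.full(10, 0.5)               # measurement noise variances (assumed)
sel, P = greedy_sensor_selection(P0, H, R, k=3)
print("selected sensors:", sel)
```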

 

Learning Time-Intensity Profiles and Event Detection in Sensor Data

Abstract: Time series of count data are generated in many applications of sensor networks, including vehicular traffic monitoring, security logs, and situational monitoring. These data are accumulated both over time and at spatially distributed sensors, and can be used to estimate the regular, repeated patterns of behavior for prediction and planning problems, and to detect deviations from these patterns (unusual behavior) for real-time situational assessment and response.

Since these data measure aggregate behavior of people, they typically appear non-homogeneous and periodic on a number of scales (daily, weekly, etc.). Additionally, the data are often corrupted by a number of factors, including sensor noise or even outright failure, and periods of unusual underlying activity (traffic accidents, scheduled or unscheduled events, or changes in overall behavior) in the observed process.

We describe a probabilistic model for learning, in an unsupervised fashion, the patterns of typical behavior while detecting and segmenting periods of anomalous or unusual behavior at the sensor. By combining these processes through sensor collaboration, we can construct spatial models for real-time event detection and estimates of occupancy in the region. We evaluate our model on two data sets, monitoring access to a campus building and freeway vehicle count measurements, and show that it is both more accurate and robust than a nonprobabilistic counterpart.
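
As a hedged, much-simplified sketch of this kind of count model (not the authors' probabilistic model), one can fit a periodic Poisson rate to the counts and flag time bins whose counts are extremely unlikely under the fitted rate. The period, test level, and toy data below are assumptions.

```python
import numpy as np
from scipy.stats import poisson

def fit_periodic_rate(counts, period):
    """Estimate a periodic Poisson rate (e.g. one value per bin of the cycle)
    by averaging counts that fall in the same bin of the cycle."""
    counts = np.asarray(counts, dtype=float)
    return np.array([counts[i::period].mean() for i in range(period)])

def flag_anomalies(counts, rate, period, alpha=1e-3):
    """Flag bins whose count lies in the extreme tail of the fitted Poisson
    rate for that bin (two-sided test at level alpha)."""
    flags = []
    for t, c in enumerate(counts):
        lam = max(rate[t % period], 1e-6)
        p_low, p_high = poisson.cdf(c, lam), poisson.sf(c - 1, lam)
        flags.append(min(p_low, p_high) < alpha)
    return np.array(flags)

# Toy example: a 7-bin "weekly" cycle with one injected unusual event.
rng = np.random.default_rng(1)
true_rate = np.array([5, 20, 22, 21, 23, 19, 8])   # assumed daily pattern
counts = rng.poisson(np.tile(true_rate, 8))        # 8 cycles of data
counts[17] += 60                                   # injected unusual event
rate = fit_periodic_rate(counts, period=7)
print("anomalous bins:", np.where(flag_anomalies(counts, rate, 7))[0])
```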

 

Reputation-referral systems in peer-to-peer social networks

Abstract: We begin by motivating and explaining the growth of applications over unstructured peer-to-peer social networks, including but not limited to file-sharing. Such large-scale distributed overlays revisit the basic networking problems of addressing, name resolution and routing. Moreover, many require a certain amount of cooperation and are particularly vulnerable to peers with selfish or malicious intent. Reputation-referral systems are nascent frameworks for incentive and security engineering in these contexts. How these systems perform, particularly how they can deal with the threats of sybil identities and false referrals, will be discussed.

Also, we will explain how traditional hierarchies built on highly reliable (super) peers or network "infrastructure" nodes can help reputation-referral systems evolve and scale. In addition to file-sharing systems, application to the problem of voice-over-IP (VoIP) anti-spam will be described.

 

Efficient Distributed Optimization of Team Detection Networks

Abstract: A promising feature of emerging wireless sensor networks is the opportunity for each spatially distributed node to measure the state of its local environment and transmit only information relevant to effective global decision-making. An equally important design objective, as a result of each node's finite battery power, is for measurement processing to satisfy explicit constraints on, or perhaps make selective use of, the network's algorithm-related resources.

We formulate this multi-objective design problem within the established team-theoretic Bayesian detection model, which applies when the dominant resource constraints arise from a communication medium with low-rate, unreliable links (e.g., due to uncoded interference, packet erasures). A modest generalization of existing theory establishes when necessary optimality conditions reduce to a convergent iterative algorithm to be executed "offline" (i.e., before measurements are processed "online" subject to the resource constraints). Even so, this offline algorithm has in general exponential complexity in the number of nodes, and its distributed implementation assumes the network is complete (i.e., every node will communicate directly with ALL other nodes in the network).

We identify conditions under which the offline algorithm admits an efficient "message-passing" interpretation, featuring linear complexity in the number of nodes and a natural distributed implementation. Experiments with a simulated network of elementary detectors employ the offline message-passing algorithm to characterize the achievable tradeoffs between global detection performance and network-wide online communication. Our results also expose a design tradeoff between constraining in-network processing to conserve algorithmic resources (per online measurement) and then having to consume resources (per offline organization) to maintain detection performance. (Joint work with Professor Alan S. Willsky, MIT-LIDS.)
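
For readers unfamiliar with the setting, the following minimal sketch illustrates decentralized detection in its most basic form only (it is not the offline message-passing algorithm of the talk): each node thresholds its local likelihood ratio, transmits a single bit, and a fusion rule weights the bits by the nodes' detection and false-alarm probabilities. The Gaussian observation model and thresholds are assumptions.

```python
import numpy as np
from scipy.stats import norm

def simulate_decentralized_detection(n_nodes=5, mean_shift=1.0, tau=0.5,
                                      n_trials=20000, seed=0):
    """Each node observes x ~ N(0,1) under H0 or N(mean_shift,1) under H1,
    sends the one-bit decision u = 1{x > tau}, and a fusion center combines
    the bits with a Chair-Varshney-style weighted vote."""
    rng = np.random.default_rng(seed)
    pf = norm.sf(tau)                     # per-node false-alarm probability
    pd = norm.sf(tau - mean_shift)        # per-node detection probability
    w1 = np.log(pd / pf)                  # weight when a node reports 1
    w0 = np.log((1 - pd) / (1 - pf))      # weight when a node reports 0

    def fuse(x):
        u = x > tau
        return np.where(u, w1, w0).sum(axis=1) > 0   # threshold for equal priors

    x0 = rng.normal(0.0, 1.0, size=(n_trials, n_nodes))
    x1 = rng.normal(mean_shift, 1.0, size=(n_trials, n_nodes))
    return fuse(x0).mean(), fuse(x1).mean()          # (false alarm, detection)

pfa, pdet = simulate_decentralized_detection()
print(f"fused false-alarm rate {pfa:.3f}, detection rate {pdet:.3f}")
```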

 

Inference on network radio channel fading signals: Radio tomographic imaging

Abstract: This talk discusses an application in which the wireless sensor network itself serves as the sensor, performing RF measurements of the physical environment. We use signal processing of radio channel signal strength measurements to turn the radio channel shadowing 'problem' to our advantage. We discuss radio tomography, analogous to x-ray computed tomography (CT) but using standard commercial RF bands, which uses measured changes in signal strength on many links in a peer-to-peer network to estimate an image of the motion (e.g., of people) in an environment. Radio tomography has some particular advantages compared to, for example, video surveillance cameras or through-wall UWB radar systems. However, signal strength is highly noisy and difficult to model, introducing challenges to tomographic image reconstruction compared to CT systems.

We present statistical models which can be used to characterize the change in signal strength as a function of object position and the 'noise' in the fading measurements. Next, we present algorithms for image estimation and object tracking, including experimental results from a wireless sensor network deployed for the purpose of radio tomographic imaging.
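
A hedged sketch of the reconstruction idea (my own toy version, with assumed geometry, link weighting, and regularization): model the vector of link attenuation changes as y ≈ W x + n, where x is a pixel image of motion-induced shadowing and row l of W spreads link l's attenuation over pixels near the line between its two nodes, then recover x by regularized least squares.

```python
import numpy as np

def weight_matrix(nodes, links, grid, width=0.15):
    """Build W: W[l, p] > 0 if pixel p lies near the segment of link l.
    The crude segment-distance weighting here is an assumed design choice."""
    W = np.zeros((len(links), len(grid)))
    for l, (i, j) in enumerate(links):
        a, b = nodes[i], nodes[j]
        ab = b - a
        for p, c in enumerate(grid):
            t = np.clip(np.dot(c - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            if np.linalg.norm(c - (a + t * ab)) < width:
                W[l, p] = 1.0 / max(np.linalg.norm(ab), 1e-6)
    return W

# Circular deployment: 12 nodes on a circle, all pairwise links, 20x20 pixel grid.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
nodes = 0.5 + 0.45 * np.c_[np.cos(theta), np.sin(theta)]
links = [(i, j) for i in range(12) for j in range(i + 1, 12)]
xs = np.linspace(0, 1, 20)
grid = np.array([(u, v) for u in xs for v in xs])

W = weight_matrix(nodes, links, grid)
x_true = np.zeros(len(grid)); x_true[210] = 1.0        # one "person" pixel
y = W @ x_true + 0.05 * np.random.default_rng(2).normal(size=len(links))

# Tikhonov-regularized least squares: x_hat = argmin ||y - W x||^2 + lam ||x||^2
lam = 0.5
x_hat = np.linalg.solve(W.T @ W + lam * np.eye(len(grid)), W.T @ y)
print("estimated peak pixel:", int(np.argmax(x_hat)), "true pixel:", 210)
```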

 

Wireless Powering for Low-Power Distributed Sensors

Abstract: This talk will overview the research at the University of Colorado in the area of far-field RF wireless powering. The design details of microwave-frequency antennas integrated with rectifiers will be presented, followed by a description of a number of sensor applications. In particular, some detail will be given for a wirelessly powered piezo-electric tomographic sensor array for health monitoring of aircraft wings. The more difficult problem of mobile low-power batteryless wireless sensors will then be discussed, with results from hardware demonstrations that require on the order of 10 uW/cm^2 of incident power density to collect data and transmit it up to a 10 m distance. Finally, some related research in microwave circuits, e.g. design of high-efficiency transmitters, will be overviewed.

 

Virtual Vision for Smart Camera Sensor Network Research

Abstract: A timely challenge for computer vision researchers is to develop smart camera sensor networks capable of performing visual surveillance tasks autonomously, or at least with minimal human intervention. Motivated by the difficulties of carrying out camera network research on a large scale in the real world, we propose a new paradigm for camera network research, called Virtual Vision. Virtual Vision prescribes visually and behaviorally realistic virtual environments for the design of intelligent surveillance systems and the meaningful experimentation with such systems. Within this paradigm, we demonstrate two smart camera networks comprising static and active simulated video surveillance cameras that provide perceptive coverage of a large virtual public space, a train station populated by autonomously self-animating virtual pedestrians. The readily reconfigurable virtual cameras generate synthetic video feeds that emulate those generated by real surveillance cameras monitoring public spaces. Novel multicamera control strategies enable the camera nodes to collaborate both in tracking pedestrians of interest that move across the FOVs of different cameras and in capturing close-up videos of pedestrians as they travel through designated areas.

 

Sensing, Calibrating, and Tracking with Reproducing Kernel Hilbert Spaces

Abstract: The area of machine perception tackles two important tasks: building and training plausible generative models, and inferring quantities of interest from these models given sensor measurements. Unfortunately, for all but the simplest generative models, both of these tasks are computationally intractable, often involving NP-hard optimization problems. Ironically, one way to overcome this computational intractability is to rely on more powerful and expressive models. The penalty we pay for their computational tractability is that they are prone to overfitting, and thus require much more data to train.

The amount of data required to learn functions in Reproducing Kernel Hilbert Spaces (or equivalently, Gaussian Processes, Splines, or Radial Basis Functions) can be drastically reduced by introducing weak domain knowledge. I present a semi-supervised algorithm that can learn to transform time series given very few labeled examples, and show that it can be used to track articulated or nonrigid bodies in videos. I also present an unsupervised nonlinear system identification method that can track RFID tags and moving targets in uncalibrated ad-hoc sensor networks using no labeled training data at all. These algorithms are extremely fast, and unlike similar existing techniques, do not suffer from local minima.
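
As background for readers, here is a minimal sketch of learning a function in an RKHS (plain kernel ridge regression with an RBF kernel, not the semi-supervised or system-identification algorithms of the talk); the kernel, lengthscale, regularizer, and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def kernel_ridge_fit(X, y, lam=1e-2):
    """Closed-form RKHS regression: alpha = (K + lam*I)^{-1} y."""
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(Xq, X, alpha):
    """Predictions f(x) = sum_i alpha_i k(x, x_i)."""
    return rbf_kernel(Xq, X) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(30, 1))                  # a few labeled examples
y = np.sin(4 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=30)
alpha = kernel_ridge_fit(X, y)
Xq = np.linspace(0, 1, 5)[:, None]
print(np.round(kernel_ridge_predict(Xq, X, alpha), 2))
```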

I will also briefly discuss ongoing work at the Intel lab on speeding up the training of kernel machines (such as SVMs) by several orders of magnitude, and on real-time object instance recognition on handheld devices.

 

Monte-Carlo Simulation for Pose Estimation in Mobile Sensor Networks

Abstract: The problem of pose estimation (localization) is central in mobile robot applications. In the deployment of sensor networks, pose estimation is also important, especially as the number of nodes increases. Finally, in multi-robot systems, also called mobile sensor networks, the ability to localize is fundamental. I am going to present the application of particle filters (PF), a Monte-Carlo simulation technique, for increasing the accuracy of localization in the presence of uncertainty. In particular, I will introduce the use of a PF for the localization of a moving robot and compare it to more classical approaches such as the extended Kalman filter (EKF). We will then consider experimental results from a pose-accuracy study of multiple robots using a PF and compare them with the results from a theoretical analysis using an extended Kalman filter estimator. Finally, the use of a particle filter for the pose estimation of cameras in a sensor network with the assistance of a mobile robot is going to be discussed. As camera sensor networks become increasingly prevalent, it is very important to provide automated solutions for the calibration and pose estimation of these cameras.

Furthermore, collaboration between service robots and sensor networks is very important, as the combined resources would increase the effectiveness of any deployed system.
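
A minimal bootstrap particle-filter sketch for 2-D localization (a toy stand-in for the systems discussed in the talk, with an assumed motion model and a single range-only beacon): propagate particles through the noisy motion model, weight them by the measurement likelihood, and resample.

```python
import numpy as np

def particle_filter_step(particles, control, z_range, beacon,
                         motion_std=0.1, meas_std=0.2, rng=None):
    """One bootstrap particle-filter update for 2-D position estimation.
    particles: (N, 2) positions; control: commanded displacement;
    z_range: measured distance to a beacon at a known position."""
    if rng is None:
        rng = np.random.default_rng()
    # Predict: apply the control with motion noise.
    particles = particles + control + rng.normal(0, motion_std, particles.shape)
    # Weight: Gaussian likelihood of the range measurement.
    pred_range = np.linalg.norm(particles - beacon, axis=1)
    w = np.exp(-0.5 * ((z_range - pred_range) / meas_std) ** 2)
    w /= w.sum()
    # Resample (multinomial) and return the new particle set and the mean estimate.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]
    return particles, particles.mean(axis=0)

rng = np.random.default_rng(4)
beacon = np.array([5.0, 0.0])
true_pos = np.array([0.0, 0.0])
particles = rng.uniform(-1, 1, size=(1000, 2))
for _ in range(20):
    control = np.array([0.2, 0.1])
    true_pos = true_pos + control
    z = np.linalg.norm(true_pos - beacon) + rng.normal(0, 0.2)
    particles, est = particle_filter_step(particles, control, z, beacon, rng=rng)
print("true:", np.round(true_pos, 2), "estimate:", np.round(est, 2))
```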

 

Real-time, Safe and Distributed Motion Planning for Teams of Vehicles with Second-Order Dynamics

Abstract: Realistic robotic applications pose a number of important algorithmic challenges. Robots need to plan collision-free motions in real-time and move in partially known environments. This work further addresses certain challenges involving tasks performed by teams of robotic vehicles. Specifically, it is assumed that each vehicle's motion is governed by second-order dynamics. This consideration raises safety concerns regarding collisions, as vehicles cannot change their velocity instantaneously. Moreover, vehicles have limited communication range and thus coordination must be done in a distributed fashion. In this regard, an asynchronous message-passing algorithm (inspired by belief propagation) is employed to coordinate motions so that collision avoidance between vehicles is guaranteed and vehicles retain a communication network if desired. Furthermore, each vehicle generates motions that respect the underlying motion dynamics using a sampling-based planner. Experiments on a distributed simulator built on a cluster of processors confirm the safety properties of the approach in applications such as coordinated exploration. Additionally, experimental data reveal that the distributed coordination algorithm has better scalability properties when compared against typical priority-based schemes.

 

Updating Content Over Mobile Social Networks: Models, Algorithms, Open Problems

Abstract: The proliferation of mobile phones and other portable devices with increased storage and local wireless communication capability creates a new communication network environment. In such mobile networks, users can take advantage of local contacts to exchange content in a peer-to-peer fashion, potentially improving the coverage and capacity offered by an infrastructure to delay-tolerant applications. However, to live up to this promise, such an approach needs to cope with the rate at which online content is updated and accessed today. We study the peer-to-peer dissemination of dynamic content (e.g., news, blogs, mutable documents) over the mobile networks created by users' encounters. To this effect, models and measurement results are presented aiming at assessing the scalability, the effect of the social topology of contacts between users, as well as the injection algorithm used for such content dissemination. We prove that these applications remain scalable under general topological conditions, that large networks approach a deterministic regime that can be entirely characterized, and that the optimal injection strategy is obtained by solving a convex optimization problem. This survey is also an occasion to discuss some related open problems. (Joint work with S. Ioannidis, N. Ristanovic, J.-Y. Le Boudec, L. Massoulie)

 

Analysis of loopy belief propagation: walk-sums and linear programming approaches

Abstract: Loopy belief propagation and its variants have emerged as a powerful yet enigmatic algorithmic framework that revolutionized a number of applied fields: from error control coding, to computer vision, computational biology, and combinatorial optimization. Belief propagation has a solid foundation for problems which have a certain tree structure, but its success in graphs with loops is not fully understood even after a forceful barrage of research efforts from machine learning, statistical physics and theoretical computer science communities. In this talk we consider two specific versions of belief propagation: BP for Gaussian graphical models, and BP for the discrete maximum-weight matching problem. We describe the walk-sum framework which sheds much light on convergence and accuracy properties of Gaussian BP, and a linear-programming analysis of BP for the weighted matching problem. We also describe some applications of these algorithms in sensor networks and remote sensing.
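
A small sketch of the walk-sum picture for Gaussian models (my own illustration, with an assumed toy graph): writing the normalized information matrix as J = I - R, the model is walk-summable when the spectral radius of |R| is below one, in which case J^{-1} equals the convergent series sum_k R^k, whose terms collect the weights of walks on the graph.

```python
import numpy as np

def is_walk_summable(J):
    """Walk-summability test for a normalized Gaussian information matrix J
    (unit diagonal): write J = I - R and check spectral radius of |R| < 1."""
    R = np.eye(len(J)) - J
    return np.max(np.abs(np.linalg.eigvals(np.abs(R)))) < 1.0

def walk_sum_inverse(J, n_terms=200):
    """Approximate J^{-1} by the walk-sum series sum_k R^k
    (valid when the model is walk-summable)."""
    R = np.eye(len(J)) - J
    S, Rk = np.eye(len(J)), np.eye(len(J))
    for _ in range(n_terms):
        Rk = Rk @ R
        S = S + Rk
    return S

# A loopy 4-cycle with pairwise coupling 0.4 (J normalized to unit diagonal).
J = np.eye(4)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    J[i, j] = J[j, i] = -0.4
print("walk-summable:", is_walk_summable(J))
print("max error vs exact inverse:",
      np.abs(walk_sum_inverse(J) - np.linalg.inv(J)).max())
```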

 

Universal Reversible Debugger

Abstract: This talk presents URDB, a universal reversible debugger. URDB supports reversible debugging for gdb, MATLAB, python, and perl. The standard reversible commands (for example, reverse-next, reverse-step, reverse-continue, reverse-finish) are all provided. The novelty is two-fold: checkpointing and debugging of entire multi-process debugging sessions, and temporal search to automate certain types of debugging. In order to checkpoint entire debugging sessions, DMTCP (Distributed MultiThreaded CheckPointing) has been extended to checkpoint ptrace-based debuggers such as gdb. We believe this is the first time that anyone has been able to checkpoint a gdb session (both gdb and the target process).

The current showcase for temporal search is efficient expression watchpoints: the ability to execute a binary search through a process lifetime to see when an expression becomes bad. For example, a debugging session has discovered that a certain expression has become bad. The expression originally had a good value. One wishes to exhibit the bug. To do so, one must stop in the debugger just before the expression is about to change from good to bad. (There may be many such times when this happens, since the bug may express itself many times, but the user requires only to observe one occurrence of this bug.) As an example, perhaps a user-defined testing function has tested an internally represented lattice graph and discovered that a cycle exists. URDB then brings the user of the debugging session to a point in time when the user test function reports that there is no cycle, and the execution of the next statement will cause the test expression to become bad (report a cycle). In a computation of N statements, URDB needs to execute the test function only O(log N) times, whereas a traditional expression watchpoint implementation in a debugger such as gdb requires the test function to be executed O(N) times.

Current work in progress is developing the ability to reversibly execute, and hence reversibly debug, both MPI (Message Passing Interface for parallel computing) and X-Windows. The work reported here is joint with Kapil Arya, Tyler Denniston, Xin Dong, Artem Polyakov, Praveen S. Solanki, and Ana-Maria Visan.
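
The O(log N) temporal search can be illustrated with an ordinary binary search over execution time. The sketch below is a debugger-free stand-in: re-running a pure test function to a given step plays the role of restarting from a checkpoint, and it assumes for simplicity that once the expression goes bad it stays bad (URDB itself only needs to find one occurrence).

```python
def find_transition(test_expr_bad, n_steps):
    """Binary search for the first step at which a monotone 'badness' test
    flips from good (False) to bad (True). Calls the test O(log N) times,
    analogous to replaying a session to a chosen point in time."""
    lo, hi = 0, n_steps                  # invariant: good at lo, bad at hi
    assert not test_expr_bad(lo) and test_expr_bad(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test_expr_bad(mid):
            hi = mid
        else:
            lo = mid
    return hi                            # first step where the expression is bad

# Toy stand-in: the "expression" becomes bad from step 73218 onward.
calls = 0
def test(step):
    global calls
    calls += 1
    return step >= 73218

print("first bad step:", find_transition(test, n_steps=100000),
      "tests used:", calls)
```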

 

Network Science for Communications

Abstract: Network Science is a newly emerging scientific discipline that investigates common attributes and differences (static and dynamic) among diverse physical and engineered networks, such as social networks, biological networks, the Internet, and communication networks. Generally speaking, Network Science is the combination of Graph Theory, Control Theory, and cross-discipline applications. In this talk we focus on the application of Network Science in communication networks. One important aspect of data communication networks is the flow of traffic between different nodes. The notion of traffic does not enter explicitly into the basic concepts of Network Science, since the focus of Network Science has been more on social networks. We show one approach to explicitly account for the effect of traffic. We extend the definition of betweenness centrality and introduce traffic-aware betweenness (TAB). We investigate the properties of TAB and show that it is directly related to some important measures in data communication networks. More specifically, we show that the average network utilization is directly proportional to the average normalized traffic-aware betweenness, which is referred to as traffic-aware network criticality (TANC). We investigate the behavior of TANC and verify that TANC is a linear function of the point-to-point effective resistances of the graph. As a result, normalized TAB is a convex function of link weights and can be minimized using convex optimization techniques. We study the mathematical properties of the optimization problem and derive useful results to be used for network planning and traffic engineering purposes in the wired and wireless domains.
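
To illustrate the connection to effective resistance, the sketch below computes pairwise effective resistances from the pseudoinverse of a weighted graph Laplacian, using the standard identity r_ij = L+_ii + L+_jj - 2 L+_ij, and then forms a traffic-weighted average as a stand-in for a TANC-like quantity. The graph, link weights, and demand matrix are hypothetical, and the weighting is only my illustration of the idea, not the talk's exact definition.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Weighted graph Laplacian from a list of (i, j, weight) edges."""
    L = np.zeros((n, n))
    for i, j, w in weighted_edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def effective_resistances(L):
    """Pairwise effective resistances r_ij = L+_ii + L+_jj - 2 L+_ij,
    computed from the Moore-Penrose pseudoinverse of the Laplacian."""
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Small weighted graph (link weights play the role of capacities).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 2.0)]
L = laplacian(4, edges)
R = effective_resistances(L)

# A traffic-weighted criticality: average of point-to-point resistances
# weighted by a (hypothetical) traffic demand matrix T.
T = np.array([[0, 3, 1, 2], [3, 0, 1, 1], [1, 1, 0, 4], [2, 1, 4, 0]], float)
tanc_like = (T * R).sum() / T.sum()
print("traffic-weighted criticality:", round(tanc_like, 3))
```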

 

Stability and Distributed Power Control in MANETs with Outages and Retransmissions

Abstract: This talk investigates the effects of hop-by-hop packet loss and retransmissions via ARQ protocols within a Mobile Ad-hoc NETwork (MANET). Errors occur due to outages, and a success probability function is associated with each link, which can be controlled by power and rate allocation. We first derive the expression for the network's capacity region, where the success function plays a critical role. Properties of the latter, as well as of the related maximum goodput function, are presented and proved. A Network Utility Maximization (NUM) problem with stability constraints is further formulated, which decomposes into (a) the input rate control problem and (b) the scheduling problem. Under certain assumptions problem (b) is relaxed to a weighted sum maximization problem with a number of summands equal to the number of nodes. This further allows the formulation of a non-cooperative game where each node decides independently over its transmitting power through a chosen link. Use of supermodular game theory suggests a price-based algorithm that converges to a power allocation satisfying the necessary optimality conditions of (b). Implementation issues are considered so that minimum information exchange between interfering nodes is required. Simulations illustrate that the suggested algorithm yields near-optimal results.

 

Canonical Representations and their Application to Formal Verification and Synthesis

Abstract: This talk will provide a review of several canonical, graph-based representations used in formal verification and synthesis of digital systems. These representations, commonly known as "decision diagrams", include Binary Decision Diagrams (BDD), Binary Moment Diagrams (BMD), and Taylor Expansion Diagrams (TED). The emphasis of this talk is on TED, a recently invented canonical representation for dataflow and computation-intensive designs, used in digital signal processing and computer graphics applications. TED representation is based on a word-level rather than binary decomposition principle, and as such can represent designs on higher levels of abstraction. Several applications of TEDs are discussed, including: equivalence checking, high-level transformations, and high-level synthesis of designs specified on algorithmic and behavioral levels.

 

Signal Processing and Information Theory Perspectives for Collaborative Estimation over Networks

Abstract: Collaborative estimation refers to a scenario in which a collection of networked nodes making local observations exchange information with one another in hopes of better estimating a common underlying sequence of parameters. Of chief interest in this family of problems is the selection of the structure and contents of the messages to be passed between the nodes as well as the local processing necessary to form the local estimates. We will consider this broad collection of problems from several perspectives.

We begin from a signal processing/machine learning perspective by describing how sensor network communications can be organized in a manner such that the belief/expectation propagation method of statistical inference dictates an energy-efficient message-passing distributed estimation scheme yielding optimal Bayesian estimates. We demonstrate how such a sensor-network-tailored expectation propagation based distributed estimation algorithm can be used to estimate network channel gains with significant efficiency gains over independent estimation through the effective use of network-wide prior information.

Next, we will consider the collaborative estimation problem from a coding/information theoretic perspective. Of chief interest here is the effect of the limitations of the communications network on the performance of the estimation problem. We first determine the general structure that such codes for collaborative estimation must have, arguing via a series of examples that they should consist of multiple multicast descriptions in order to best interface with the capabilities of the network. The relationship between the collaborative estimation problem and an associated lossy source coding problem is discussed. The associated network lossy source coding problem is shown to have a hybrid nature between two classic problems in multi-terminal information theory: the CEO problem and the multiple descriptions problem. We derive inner and outer bounds for the rate distortion region by extending techniques from these two classic problems. The inner and outer bounds are shown to match in some special cases, yielding the exact rate distortion region and hence a characterization of the most efficient collaborative estimation schemes in these cases. The talk concludes with a long list of directions for future research.

 

On Fluid Solutions of Max-Weight Scheduling with Applications

Abstract: Since the seminal work of Tassiulas and Ephremides in 1992 on MaxWeight (MW) scheduling policies, there has been considerable effort in developing a deeper understanding of this class of online and stabilizing policies. In this talk we will start by analyzing the fluid model solutions. Concentrating on a one-hop network for ease of exposition, we show that MW-α policies yield unique fluid model solutions for α ≥ 1. We then use this result to prove a sample-path large deviations principle for the queue-length process under MW-α scheduling. With the LDP result available, we then analyze a particular tail asymptote.
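
For readers new to this policy class, here is a minimal MW-α sketch for a one-hop network with an assumed interference structure (an illustration of the scheduling rule itself, not of the fluid-model or large-deviations analysis): in each slot, activate the feasible set of links maximizing the sum of queue-length^α times service rate.

```python
import numpy as np

def maxweight_schedule(queues, rates, feasible_sets, alpha=1.0):
    """Pick the feasible activation set S maximizing
    sum_{i in S} q_i^alpha * mu_i (the MW-alpha rule for a one-hop network)."""
    best, best_w = None, -1.0
    for S in feasible_sets:
        w = sum(queues[i] ** alpha * rates[i] for i in S)
        if w > best_w:
            best, best_w = S, w
    return best

# Three links; the assumed interference allows at most one of links {0, 1} per slot.
feasible = [(0,), (1,), (2,), (0, 2), (1, 2)]
rates = np.array([1.0, 1.0, 1.0])
arrival_rates = np.array([0.4, 0.4, 0.7])   # inside the capacity region
rng = np.random.default_rng(5)
q = np.zeros(3)
for t in range(10000):
    q += rng.poisson(arrival_rates)               # arrivals
    S = maxweight_schedule(q, rates, feasible)    # MW decision
    for i in S:
        q[i] = max(q[i] - rates[i], 0.0)          # service
print("queue lengths after 10000 slots:", np.round(q, 1))
```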

 

Exploring the Structure of Online Social Networks: The Role of Positive and Negative Links

Abstract: The information we experience online comes to us continuously over time, assembled from many small pieces, and conveyed through our social networks. This merging of information, network structure, and flow over time requires new ways of reasoning about the large-scale behavior of information networks. I will discuss a set of approaches for tracking information as it travels and mutates in online networks. We show how to capture temporal patterns in the news over a daily time-scale -- in particular, the succession of story lines that evolve and compete for attention. I will also discuss a model to quantify the influence of individual media sites on the popularity of news stories and an algorithm for inferring latent information diffusion networks. This is joint work with Jaewon Yang, Seth Myers, Lars Backstrom, Manuel Gomez-Rodriguez and Jon Kleinberg.

 

Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

Abstract: The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent co-ordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks. Our approach includes both the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
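
A toy sketch in the spirit of distributed dual averaging, under assumed parameters (not the paper's exact algorithm or analysis): each node mixes its neighbours' dual variables through a doubly stochastic matrix, adds its local subgradient, and maps back to the primal with a step size decaying like 1/sqrt(t). Here the assumed local objectives are f_i(x) = |x - a_i|, whose sum is minimized at the median of the a_i.

```python
import numpy as np

def distributed_dual_averaging(a, P, n_iters=2000):
    """Each node i holds f_i(x) = |x - a[i]|; nodes exchange dual variables z
    over the doubly stochastic matrix P and take the projection step
    x_i(t) = -step(t) * z_i(t) (prox function psi(x) = x^2 / 2)."""
    n = len(a)
    z = np.zeros(n)
    x = np.zeros(n)
    x_avg = np.zeros(n)
    for t in range(1, n_iters + 1):
        g = np.sign(x - a)                 # subgradient of |x - a_i| at x_i
        z = P @ z + g                      # mix neighbours' duals, add gradient
        x = -z / np.sqrt(t)                # projection step, step ~ 1/sqrt(t)
        x_avg += (x - x_avg) / t           # running average of the iterates
    return x_avg

# 5 nodes on a ring; lazy uniform weights give a doubly stochastic P.
n = 5
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25
a = np.array([1.0, 2.0, 3.0, 10.0, 11.0])  # global optimum is median(a) = 3
print("node estimates:", np.round(distributed_dual_averaging(a, P), 2))
```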

 

VMTorrent: Quick and Scalable Cloud-based VM Deployments

Abstract: Users increasingly utilize virtual machine (VM) technology to migrate and replicate OS and software amongst networked hosts. Networked execution allows dozens, if not hundreds, of VM replicas to be distributed from a single networked storage location to a large set of physical machines. As these systems expand, network access becomes a major bottleneck, inhibiting their scalability. We have designed and implemented VMTorrent, a system capable of quickly and scalably distributing and executing VM images. VMTorrent’s custom front-end file server allows for VM quick-start, in which VMs can execute while their images traverse the network. To this is attached a P2P back-end supporting efficient scaling. Finally, VMTorrent's prefetching algorithms enable smooth streaming execution, balancing the needs of the local client and the swarm. Experiments demonstrate that VMTorrent enables performance 11X greater than the current state of the art and 30X that of a standard SAN/NAS for deployments of up to 100 peer machines.

 

A Primer on Various Approaches to Data Association

Abstract: Threading measurements (well, many call them “hits” or “plots”) from radar, sonar or imaging observations into a credible, smooth and reportable trajectory requires a filter. We’ll discuss those – Kalman, Unscented, particle, etc. – very briefly. But the main topic here arises because one cannot even begin to filter without knowing which hits come from which targets, and which hits are complete nonsense (clutter). When wrapped inside some scheme for such data-association, a filter becomes a tracker. This talk is intended to explain, at a fairly high level, the intuition behind some of the popular tracking algorithms.
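
As a concrete taste of the simplest data-association scheme usually covered in such a primer, global nearest neighbour, the sketch below gates track-to-hit distances and solves a 2-D assignment with SciPy's Hungarian-algorithm solver; the gate value, Euclidean cost, and toy scenario are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def global_nearest_neighbor(track_preds, hits, gate=3.0):
    """Associate each predicted track position with at most one hit by
    solving a 2-D assignment on gated Euclidean distances.
    Returns a dict track_index -> hit_index (unassigned tracks omitted)."""
    cost = np.linalg.norm(track_preds[:, None, :] - hits[None, :, :], axis=2)
    big = 1e6                                    # effectively forbids gated-out pairs
    cost = np.where(cost > gate, big, cost)
    rows, cols = linear_sum_assignment(cost)
    return {int(r): int(c) for r, c in zip(rows, cols) if cost[r, c] < big}

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])              # predicted positions
hits = np.array([[0.3, -0.2], [50.0, 50.0], [9.5, 10.4]])  # one clutter hit
print(global_nearest_neighbor(tracks, hits))               # {0: 0, 1: 2}
```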