The Kalman filter is useful for tracking many different kinds of moving objects. It was developed around 1960 by Rudolf Kálmán and famously used at NASA to estimate spacecraft trajectories. At its heart, the Kalman filter is a method of combining noisy (and possibly missing) measurements and predictions of the state of an object to achieve an estimate of its true current state. Kalman filters can be applied to many different types of linear dynamical systems, and the “state” here can refer to any measurable quantity, such as an object’s location, velocity, temperature, voltage, or a combination of these.

In a previous article, I showed how face detection can be performed in MATLAB using OpenCV. In this article, I will combine this face detector with a Kalman filter to build a simple face tracker that can track a face in a video.

If you are unfamiliar with Kalman filters, I suggest you first read up on how alpha-beta filters work. The alpha-beta filter is a simplified version of the Kalman filter that is much easier to understand, but it still applies many of the Kalman filter's core ideas.

## Face tracking without a Kalman filter

The OpenCV-based face detector can be applied to every frame to detect the location of the face. Because it may detect multiple faces, we need a method for matching a detected face in one frame to a face detected in the next frame, a combinatorial problem known as data association. The simplest method is the nearest-neighbour approach; some other methods can be found in this survey paper on object tracking. However, to greatly simplify the problem, the tracker implemented here is a single-face tracker, and it assumes there is always a face in the frame. This means that every detected face can be assumed to belong to the same person. If more than one face is detected, only the first is used. If no face is detected, a detection error is assumed. The MATLAB code below detects the face location in a sequence of images and writes the bounding box coordinates to a CSV file.
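Before moving on to the detector itself, the nearest-neighbour association idea can be sketched in a few lines. The snippet below is a NumPy illustration only (it is not part of the MATLAB tracker; the coordinates and the 50-pixel threshold are made up for the example):

```python
import numpy as np

def nearest_neighbour_associate(prev_centres, curr_centres, max_dist=50.0):
    """For each previously tracked centre, find the closest detection in the
    current frame; return -1 where nothing falls within max_dist."""
    assignments = []
    for p in prev_centres:
        d = np.linalg.norm(curr_centres - p, axis=1)  # Euclidean distances
        j = int(np.argmin(d))
        assignments.append(j if d[j] <= max_dist else -1)
    return assignments

# Two hypothetical face centres detected in consecutive frames
prev_frame = np.array([[100.0, 120.0], [300.0, 110.0]])
curr_frame = np.array([[305.0, 112.0], [102.0, 121.0]])
print(nearest_neighbour_associate(prev_frame, curr_frame))  # [1, 0]
```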

function detect_faces(imgDir, opencvPath, includePath, outputFilename)

% Load the OpenCV libraries
loadlibrary( ...
    fullfile(opencvPath, 'bin\cxcore100.dll'), @proto_cxcore);
loadlibrary( ...
    fullfile(opencvPath, 'bin\cv100.dll'), ...
    fullfile(opencvPath, 'cv\include\cv.h'), ...
    'alias', 'cv100', 'includepath', includePath);
loadlibrary( ...
    fullfile(opencvPath, 'bin\highgui100.dll'), ...
    fullfile(opencvPath, 'otherlibs\highgui\highgui.h'), ...
    'alias', 'highgui100', 'includepath', includePath);

% Load the face detection cascade (adjust the path to match your
% installation) and create memory storage
cvCascade = calllib('cv100', 'cvLoadHaarClassifierCascade', ...
    fullfile(opencvPath, 'data\haarcascades\haarcascade_frontalface_alt.xml'), ...
    libstruct('CvSize', struct('width', int16(2), 'height', int16(2))));
cvStorage = calllib('cxcore100', 'cvCreateMemStorage', 0);

% Get the list of images
imageFiles = dir(imgDir);
detections = struct;

h = waitbar(0, 'Performing face detection...'); % progress bar

% Open the output CSV file
fid = fopen(outputFilename, 'w');
fprintf(fid, 'filename,x1,y1,x2,y2\n');

for i = 1:numel(imageFiles)

if imageFiles(i).isdir; continue; end
imageFile = fullfile(imgDir, imageFiles(i).name);

% Load the input image
cvImage = calllib('highgui100', ...
    'cvLoadImage', imageFile, int16(1));
if ~cvImage.Value.nSize
    error('Could not load image: %s', imageFile);
end

% Perform face detection
cvSeq = calllib('cv100', 'cvHaarDetectObjects', ...
    cvImage, cvCascade, cvStorage, 1.1, 2, 0, ...
    libstruct('CvSize', struct('width', int16(40), 'height', int16(40))));

% Save the detected bounding box, if any (and if there are multiple
% detections, just use the first one)
detections(i).filename = imageFile;
if cvSeq.Value.total >= 1
    cvRect = calllib('cxcore100', ...
        'cvGetSeqElem', cvSeq, int16(0)); % first element (0-based index)
    fprintf(fid, '%s,%d,%d,%d,%d\n', imageFile, ...
        cvRect.Value.x, cvRect.Value.y, ...
        cvRect.Value.x + cvRect.Value.width, ...
        cvRect.Value.y + cvRect.Value.height);
else
    fprintf(fid, '%s,%d,%d,%d,%d\n', imageFile, 0, 0, 0, 0);
end

% Release image
calllib('cxcore100', 'cvReleaseImage', cvImage);
waitbar(i / numel(imageFiles), h);

end

% Release resources

fclose(fid);
close(h);
calllib('cxcore100', 'cvReleaseMemStorage', cvStorage);

end


We can then run our face detector and generate an output file, faces.csv, like this:

imgDir = 'images';
opencvPath = 'C:\Program Files\OpenCV';
includePath = fullfile(opencvPath, 'cxcore\include');
detect_faces(imgDir, opencvPath, includePath, 'faces.csv');


In the video below, I have run this script on the FGnet Talking Face database (which is free to download) and displayed the bounding boxes overlaid on the image sequence. You can download a copy of the faces.csv file that was used to generate the video from here.

The bounding box roughly follows the face, but its trajectory is quite noisy and the video contains numerous frames where the bounding box disappears because the face was not detected. The Kalman filter can be used to smooth this trajectory and estimate the location of the bounding box when the face detector fails.

## Kalman filtering: The gritty details

The Kalman filter is a recursive two-stage filter. At each iteration, it performs a predict step and an update step.

The predict step predicts the current location of the moving object based on previous observations. For instance, if an object is moving with constant acceleration, we can predict its current location, $\hat{\textbf{x}}_{t|t-1}$, from its previous estimate, $\hat{\textbf{x}}_{t-1|t-1}$, using the equations of motion.

The update step takes the measurement of the object’s current location (if available), $\textbf{z}_{t}$, and combines it with the predicted location to obtain an a posteriori estimate of the current location, $\hat{\textbf{x}}_{t|t}$.

The equations that govern the Kalman filter are given below (taken from the Wikipedia article):

1. Predict stage:
1. Predicted (a priori) state: $\hat{\textbf{x}}_{t|t-1} = \textbf{F}_{t}\hat{\textbf{x}}_{t-1|t-1} + \textbf{B}_{t} \textbf{u}_{t}$
2. Predicted (a priori) estimate covariance: $\textbf{P}_{t|t-1} = \textbf{F}_{t} \textbf{P}_{t-1|t-1} \textbf{F}_{t}^{T}+ \textbf{Q}_{t}$
2. Update stage:
1. Innovation or measurement residual: $\tilde{\textbf{y}}_t = \textbf{z}_t - \textbf{H}_t\hat{\textbf{x}}_{t|t-1}$
2. Innovation (or residual) covariance: $\textbf{S}_t = \textbf{H}_t \textbf{P}_{t|t-1} \textbf{H}_t^T + \textbf{R}_t$
3. Optimal Kalman gain: $\textbf{K}_t = \textbf{P}_{t|t-1} \textbf{H}_t^T \textbf{S}_t^{-1}$
4. Updated (a posteriori) state estimate: $\hat{\textbf{x}}_{t|t} = \hat{\textbf{x}}_{t|t-1} + \textbf{K}_t\tilde{\textbf{y}}_t$
5. Updated (a posteriori) estimate covariance: $\textbf{P}_{t|t} = (I - \textbf{K}_t \textbf{H}_t) \textbf{P}_{t|t-1}$

They can be difficult to understand at first, so let’s first take a look at what each of these variables is used for:

• ${\mathbf{x}_{t}}$ is the current state vector, as estimated by the Kalman filter, at time ${t}$.
• ${\mathbf{z}_{t}}$ is the measurement vector taken at time ${t}$.
• ${\mathbf{P}_{t}}$ measures the estimated accuracy of ${\mathbf{x}_{t}}$ at time ${t}$.
• ${\mathbf{F}}$ describes how the system moves (ideally) from one state to the next, i.e. how one state vector is projected to the next, assuming no noise (e.g. no acceleration)
• ${\mathbf{H}}$ defines the mapping from the state vector, ${\mathbf{x}_{t}}$, to the measurement vector, ${\mathbf{z}_{t}}$.
• ${\mathbf{Q}}$ and ${\mathbf{R}}$ define the Gaussian process and measurement noise, respectively, and characterise the variance of the system.
• ${\mathbf{B}}$ and ${\mathbf{u}}$ are the control-input parameters; they are only used in systems that have an external input, and can be ignored in the case of an object tracker.

Note that in a simple system, the current state ${\mathbf{x}_{t}}$ and the measurement ${\mathbf{z}_{t}}$ will contain the same set of state variables (only ${\mathbf{x}_{t}}$ will be a filtered version of ${\mathbf{z}_{t}}$) and ${\mathbf{H}}$ will be an identity matrix, but many real-world systems will include latent variables that are not directly measured. For example, if we are tracking the location of a car, we may be able to directly measure its location from a GPS device and its velocity from the speedometer, but not its acceleration.
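To make the latent-variable idea concrete, here is a toy NumPy example (a hypothetical two-variable state, not the face tracker's) in which the velocity never appears in the measurement:

```python
import numpy as np

# Toy system: state = [position, velocity], but only position is measured.
H = np.array([[1.0, 0.0]])    # measurement matrix drops the latent velocity
x = np.array([[5.0], [2.0]])  # position 5, velocity 2
z = H @ x                     # what a noise-free sensor would report
print(z)                      # [[5.]]: the velocity is hidden from z
```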

In the predict stage, the state of the system and its error covariance are transitioned using the defined transition matrix ${\mathbf{F}}$, and can be implemented in MATLAB as:

function [x,P] = kalman_predict(x,P,F,Q)
x = F*x; %predicted state
P = F*P*F' + Q; %predicted estimate covariance
end


In the update stage, we first calculate the difference between our predicted and measured states. We then calculate the Kalman gain matrix, ${\mathbf{K}}$, which is used to weight between our predicted and measured states and is adjusted based on a ratio of prediction error ${\mathbf{P}_{t}}$ to measurement noise ${\mathbf{S}_{t}}$.

Finally, the state vector and its error covariance are then updated with the measured state. It can be implemented in MATLAB as:

function [x,P] = kalman_update(x,P,z,H,R)
y = z - H*x; %measurement error/innovation
S = H*P*H' + R; %measurement error/innovation covariance
K = P*H'/S; %optimal Kalman gain (equivalent to P*H'*inv(S), but better conditioned)
x = x + K*y; %updated state estimate
P = (eye(size(x,1)) - K*H)*P; %updated estimate covariance
end
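For readers following along outside MATLAB, a direct NumPy translation of the two functions, with a one-step smoke test on a hypothetical 1D constant-position model, might look like this:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    x = F @ x            # predicted state
    P = F @ P @ F.T + Q  # predicted estimate covariance
    return x, P

def kalman_update(x, P, z, H, R):
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # optimal Kalman gain
    x = x + K @ y                         # updated state estimate
    P = (np.eye(x.shape[0]) - K @ H) @ P  # updated estimate covariance
    return x, P

# One predict/update cycle: equal trust in prediction (0) and measurement (2)
x = np.array([[0.0]]); P = np.array([[1.0]])
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[1.0]])
x, P = kalman_predict(x, P, F, Q)
x, P = kalman_update(x, P, np.array([[2.0]]), H, R)
print(x[0, 0])  # ~1.0: halfway between prediction and measurement
```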


Both the stages only update two variables: ${\mathbf{x}_{t}}$, the state variable, and ${\mathbf{P}_{t}}$, the prediction error covariance variable.

The two stages of the filter correspond to the state-space model typically used to model linear dynamical systems. The first stage solves the process equation:

$\displaystyle \mathbf{x}_{t+1}=\mathbf{F}\mathbf{x}_{t}+\mathbf{w}_{t}$

The process noise ${\mathbf{w}_{t}}$ is additive white Gaussian noise (AWGN) with zero mean and covariance defined by:

$\displaystyle E\left[\mathbf{w}_{t}\mathbf{w}_{t}^{T}\right]=\mathbf{Q}$

The second one is the measurement equation:

$\displaystyle \mathbf{z}_{t}=\mathbf{H}\mathbf{x}_{t}+\mathbf{v}_{t}$

The measurement noise ${\mathbf{v}_{t}}$ is also AWGN with zero mean and covariance defined by:

$\displaystyle E\left[\mathbf{v}_{t}\mathbf{v}_{t}^{T}\right]=\mathbf{R}$
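The two equations are easy to simulate. The following NumPy sketch (with an assumed constant-velocity model and made-up noise levels) generates synthetic states and measurements directly from the process and measurement equations:

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity transition
H = np.array([[1.0, 0.0]])              # position-only measurement
q_std, r_std = 0.05, 2.0                # assumed noise levels

x = np.array([[0.0], [1.0]])            # start at 0 with unit velocity
states, measurements = [], []
for t in range(50):
    x = F @ x + rng.normal(0.0, q_std, size=(2, 1))  # process equation
    z = H @ x + rng.normal(0.0, r_std, size=(1, 1))  # measurement equation
    states.append(x.copy())
    measurements.append(z.copy())

# Measurements scatter around the true positions with std close to r_std
errs = [float(z[0, 0] - s[0, 0]) for z, s in zip(measurements, states)]
print(round(float(np.std(errs)), 2))
```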

## Defining the system

In order to implement a Kalman filter, we have to define several variables that model the system. We have to choose the variables contained by ${\mathbf{x}_{t}}$ and ${\mathbf{z}_{t}}$, and also choose suitable values for ${\mathbf{F}}$, ${\mathbf{H}}$, ${\mathbf{Q}}$ and ${\mathbf{R}}$, as well as an initial value for ${\mathbf{P}_{t}}$.

We will define our measurement vector as:

$\displaystyle \mathbf{z}_{t}=\left[\begin{array}{cccc} x_{1,t} & y_{1,t} & x_{2,t} & y_{2,t}\end{array}\right]^{T}$

where $\left(x_{1,t},\, y_{1,t}\right)$ and $\left(x_{2,t},\, y_{2,t}\right)$ are the upper-left and lower-right corners of the bounding box around the detected face, respectively. This is simply the output from the Viola and Jones face detector.

A logical choice for our state vector is:

$\displaystyle \mathbf{x}_{t}=\left[\begin{array}{cccccc} x_{1,t} & y_{1,t} & x_{2,t} & y_{2,t} & dx_{t} & dy_{t}\end{array}\right]^{T}$

where ${dx_{t}}$ and ${dy_{t}}$ are the first-order derivatives. Other vectors are also possible; for example, some papers introduce a “scale” variable, which assumes that the bounding box maintains a fixed aspect ratio.

The transition matrix ${\mathbf{F}}$ defines the equations used to transition from one state vector ${\mathbf{x}_{t}}$ to the next, ${\mathbf{x}_{t+1}}$ (without taking into account any measurements, ${\mathbf{z}_{t}}$). It is plugged into the process equation:

$\displaystyle \mathbf{x}_{t+1}=\mathbf{F}\mathbf{x}_{t}+\mathbf{w}_{t}$

Let’s look at some basic equations describing motion:

\displaystyle \begin{aligned}x & =x_{0}+dx_{0}\cdot\Delta T+\frac{1}{2}d^{2}x\cdot\Delta T^{2}\\ dx & =dx_{0}+d^{2}x\cdot\Delta T\end{aligned}

We could express this system using the following recurrence:

\displaystyle \begin{aligned}x_{t+1} & =x_{t}+dx_{t}\cdot\Delta T+\frac{1}{2}d^{2}x_{t}\cdot\Delta T^{2}\\ dx_{t+1} & =dx_{t}+d^{2}x_{t}\cdot\Delta T\end{aligned}

These same equations can also be used to model the ${y_{t}}$ variables and their derivatives. Referring back to the process equation, we can thus model this system as:

$\displaystyle \left[\begin{array}{c} x_{1,t+1}\\ y_{1,t+1}\\ x_{2,t+1}\\ y_{2,t+1}\\ dx_{t+1}\\ dy_{t+1}\end{array}\right]=\left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\end{array}\right]\left[\begin{array}{c} x_{1,t}\\ y_{1,t}\\ x_{2,t}\\ y_{2,t}\\ dx_{t}\\ dy_{t}\end{array}\right]+\left[\begin{array}{c} d^{2}x_{t}\cdot\Delta T^{2}/2\\ d^{2}y_{t}\cdot\Delta T^{2}/2\\ d^{2}x_{t}\cdot\Delta T^{2}/2\\ d^{2}y_{t}\cdot\Delta T^{2}/2\\ d^{2}x_{t}\cdot\Delta T\\ d^{2}y_{t}\cdot\Delta T\end{array}\right]$
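Applying this transition matrix to a concrete state shows the intended behaviour. The following NumPy check (with made-up box coordinates and velocities, and the noise term omitted) shifts each corner by the current velocity:

```python
import numpy as np

# Transition matrix F from the process equation (Delta T = 1)
F = np.array([
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
], dtype=float)

# A box at (10, 20)-(50, 80) moving 2 px right and 3 px down per frame
x = np.array([10.0, 20.0, 50.0, 80.0, 2.0, 3.0])
x_next = F @ x
print(x_next)  # [12. 23. 52. 83.  2.  3.]: corners move, velocities persist
```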

The process noise matrix ${\mathbf{Q}}$ measures the variability of the input signal away from the “ideal” transitions defined in the transition matrix. Larger values in this matrix mean that the input signal has greater variance and the filter needs to be more adaptable. Smaller values result in a smoother output, but the filter is not as adaptable to large changes. This can be a little difficult to define, and may require some fine-tuning. Based on our definition of the process noise ${\mathbf{w}_{t}}$ above, our process noise matrix is defined as:

\displaystyle \begin{aligned}\mathbf{Q} & =\left[\begin{array}{cccccc} \Delta T^{4}/4 & 0 & 0 & 0 & \Delta T^{3}/2 & 0\\ 0 & \Delta T^{4}/4 & 0 & 0 & 0 & \Delta T^{3}/2\\ 0 & 0 & \Delta T^{4}/4 & 0 & \Delta T^{3}/2 & 0\\ 0 & 0 & 0 & \Delta T^{4}/4 & 0 & \Delta T^{3}/2\\ \Delta T^{3}/2 & 0 & \Delta T^{3}/2 & 0 & \Delta T^{2} & 0\\ 0 & \Delta T^{3}/2 & 0 & \Delta T^{3}/2 & 0 & \Delta T^{2}\end{array}\right]\times a^{2}\\ & =\left[\begin{array}{cccccc} 1/4 & 0 & 0 & 0 & 1/2 & 0\\ 0 & 1/4 & 0 & 0 & 0 & 1/2\\ 0 & 0 & 1/4 & 0 & 1/2 & 0\\ 0 & 0 & 0 & 1/4 & 0 & 1/2\\ 1/2 & 0 & 1/2 & 0 & 1 & 0\\ 0 & 1/2 & 0 & 1/2 & 0 & 1\end{array}\right]\times10^{-2}\end{aligned}

where ${\Delta T=1}$ and ${a=d^{2}x_{t}=d^{2}y_{t}=0.1}$.
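As a sanity check, the scaled matrix can be reproduced numerically. A small NumPy sketch using the same ΔT and a values:

```python
import numpy as np

dt, a = 1.0, 0.1  # frame period and assumed acceleration magnitude
base = np.array([
    [dt**4/4, 0, 0, 0, dt**3/2, 0],
    [0, dt**4/4, 0, 0, 0, dt**3/2],
    [0, 0, dt**4/4, 0, dt**3/2, 0],
    [0, 0, 0, dt**4/4, 0, dt**3/2],
    [dt**3/2, 0, dt**3/2, 0, dt**2, 0],
    [0, dt**3/2, 0, dt**3/2, 0, dt**2],
])
Q = base * a**2  # with dt = 1 and a = 0.1 this is the 1e-2-scaled matrix
print(round(Q[0, 0], 6), round(Q[4, 4], 6))  # 0.0025 0.01
```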

The measurement matrix ${\mathbf{H}}$ maps between our measurement vector ${\mathbf{z}_{t}}$ and state vector ${\mathbf{x}_{t}}$. It is plugged in to the measurement equation:

$\displaystyle \mathbf{z}_{t}=\mathbf{H}\mathbf{x}_{t}+\mathbf{v}_{t}$

The variables ${x_{t}}$ and ${y_{t}}$ are mapped directly from ${\mathbf{z}_{t}}$ to ${\mathbf{x}_{t}}$, whereas the derivative variables are latent (hidden) variables and so are not directly measured and are not included in the mapping. This gives us the measurement matrix:

$\displaystyle \mathbf{H}=\left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\end{array}\right]$

The matrix ${\mathbf{R}}$ defines the error of the measuring device. For a physical instrument such as a speedometer or voltmeter, the measurement accuracy may be specified by the manufacturer. In the case of a face detector, we can determine the accuracy empirically. For instance, we may find that our Viola and Jones face detector detects faces to within 10 pixels of the actual face location 95% of the time. If we assume this error is Gaussian-distributed (which is a requirement of the Kalman filter), this corresponds to a standard deviation of roughly 6.5 pixels for each of the coordinates, so the vector of measurement noise standard deviations is:

$\displaystyle \sigma_{\mathbf{v}}=\left[\begin{array}{cccc} 6.5 & 6.5 & 6.5 & 6.5\end{array}\right]^{T}$

The errors are independent, so our covariance matrix is given by:

$\displaystyle \mathbf{R}=\left[\begin{array}{cccc} 6.5^{2} & 0 & 0 & 0\\ 0 & 6.5^{2} & 0 & 0\\ 0 & 0 & 6.5^{2} & 0\\ 0 & 0 & 0 & 6.5^{2}\end{array}\right]=\left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{array}\right] \times 42.25$

Decreasing the values in ${\mathbf{R}}$ means we are optimistically assuming our measurements are more accurate, so the filter performs less smoothing and the predicted signal will follow the observed signal more closely. Conversely, increasing ${\mathbf{R}}$ means we have less confidence in the accuracy of the measurements, so more smoothing is performed.
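This trade-off is visible directly in the gain equation. A minimal 1D NumPy illustration (with an arbitrary prediction uncertainty of 25):

```python
import numpy as np

P = np.array([[25.0]])  # current prediction uncertainty (arbitrary)
H = np.array([[1.0]])

def gain(R):
    """Optimal Kalman gain for a 1D system with measurement noise R."""
    S = H @ P @ H.T + R
    return float((P @ H.T @ np.linalg.inv(S))[0, 0])

print(gain(np.array([[1.0]])))     # near 1: trust the measurement
print(gain(np.array([[1000.0]])))  # near 0: mostly keep the prediction
```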

The estimate covariance matrix ${\mathbf{P}}$ is a measure of the estimated accuracy of ${\mathbf{x}_{t}}$ at time ${t}$. It is adjusted over time by the filter, so we only need to supply a reasonable initial value. If we know for certain the exact state variable at start-up, then we can initialise ${\mathbf{P}}$ to a matrix of all zeros. Otherwise, it should be initialised as a diagonal matrix with a large value along the diagonal:

$\displaystyle \mathbf{P}=\left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\end{array}\right]\times\epsilon$

where ${\epsilon\gg0}$. The filter will then prefer the information from the first few measurements over the information already in the model.
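The effect of a large initial ${\mathbf{P}}$ is easy to demonstrate: after the very first update, the state has essentially snapped to the first measurement. A 1D NumPy sketch (using the scalar version of the R value above and an arbitrary first measurement of 137):

```python
import numpy as np

# With a huge initial P, the first update snaps the state to the first
# measurement, regardless of the (arbitrary) initial guess.
x = np.array([[0.0]])    # arbitrary initial state
P = np.array([[1e4]])    # "we know nothing" prior
H = np.array([[1.0]])
R = np.array([[42.25]])  # per-coordinate measurement variance from above

z = np.array([[137.0]])  # hypothetical first measurement
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
print(x[0, 0])  # ~136.4: already within a pixel of the measurement
```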

## Implementing the face tracker

The following script implements the system we have defined above. It loads the face detection results from the CSV file, performs the Kalman filtering, and displays the filtered bounding boxes.

% read in the detected face locations
fid = fopen('faces.csv');
detections = textscan(fid, '%[^,] %d %d %d %d', 'delimiter', ',');
fclose(fid);

% define the filter
x = [ 0; 0; 0; 0; 0; 0 ];
F = [ 1 0 0 0 1 0 ; ...
0 1 0 0 0 1 ; ...
0 0 1 0 1 0 ; ...
0 0 0 1 0 1 ; ...
0 0 0 0 1 0 ; ...
0 0 0 0 0 1 ];
Q = [ 1/4  0   0   0  1/2  0  ; ...
0  1/4  0   0   0  1/2 ; ...
0   0  1/4  0  1/2  0  ; ...
0   0   0  1/4  0  1/2 ; ...
1/2  0  1/2  0   1   0  ; ...
0  1/2  0  1/2  0   1  ] * 1e-2;
H = [ 1 0 0 0 0 0 ; ...
0 1 0 0 0 0 ; ...
0 0 1 0 0 0 ; ...
0 0 0 1 0 0 ];
R = eye(4) * 42.25;
P = eye(6) * 1e4;

nsamps = numel(detections{1});
for n = 1:nsamps

% read the next detected face location
meas_x1 = detections{2}(n);
meas_y1 = detections{3}(n);
meas_x2 = detections{4}(n);
meas_y2 = detections{5}(n);
z = double([meas_x1; meas_y1; meas_x2; meas_y2]); % same ordering as H maps from the state

% step 1: predict
[x,P] = kalman_predict(x,P,F,Q);

% step 2: update (if measurement exists)
if all(z > 0)
[x,P] = kalman_update(x,P,z,H,R);
end

% draw a bounding box around the estimated face location
img = imread(detections{1}{n}); % reload the frame by its filename
imshow(img);
est_z = H*x;
est_x1 = est_z(1);
est_y1 = est_z(2);
est_x2 = est_z(3);
est_y2 = est_z(4);
if all(est_z > 0) && est_x2 > est_x1 && est_y2 > est_y1
rectangle('Position', [est_x1 est_y1 est_x2-est_x1 est_y2-est_y1], 'EdgeColor', 'g', 'LineWidth', 3);
end
drawnow;

end

The results of running this script are shown in the following video:

Clearly, the bounding box in this video is much smoother and follows the face more accurately than in the unfiltered version shown previously, and there are no longer any frames with missing detections.
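The same behaviour can be reproduced on synthetic data. The sketch below is a NumPy re-creation of the tracker loop on a hypothetical 1D constant-velocity target, with made-up noise levels and every 7th detection dropped; it confirms that the filtered estimate stays closer to the truth than the raw detections:

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # position + velocity model
H = np.array([[1.0, 0.0]])
Q = np.eye(2) * 1e-3
R = np.array([[25.0]])                  # measurement variance (std 5 px)
x = np.array([[0.0], [0.0]])
P = np.eye(2) * 1e4                     # uninformative initial covariance

filt_err, meas_err = [], []
for t in range(1, 101):
    truth = 2.0 * t                      # target moves 2 px per frame
    z = truth + rng.normal(0.0, 5.0)     # noisy detection
    detected = (t % 7 != 0)              # drop every 7th detection

    x = F @ x                            # predict
    P = F @ P @ F.T + Q
    if detected:                         # update only when a detection exists
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        meas_err.append(abs(z - truth))
    filt_err.append(abs(float(x[0, 0]) - truth))

print(np.mean(filt_err), np.mean(meas_err))  # filtered error is smaller
```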

## Closing remarks

In the future, I aim to write an article on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) (and the similar particle filter). These are both non-linear versions of the Kalman filter. Although face trackers are usually implemented using the linear Kalman filter, the non-linear versions have some other interesting applications in image and signal processing.

Learned about the Kalman filter from your blog… it is new to me.

hi alister,

Awesome work. I was wondering whether all of this can be applied in real time, i.e., plug in a USB camera and run the Haar classifier and the Kalman-filtered face tracking in real time?

If you think so, do you have any suggestion on how to implement that?

Thanks!
Xi

• Yes – Haar-like object detectors are very efficient and are well suited to real-time face detection. I’d recommend you implement it in pure C++ using OpenCV rather than mixing MATLAB code as I have done. OpenCV contains an easy to use Kalman filter (see cvKalmanCorrect and cvKalmanPredict) and some example code of how to stream video from a webcam.

Hello,

I am new in kalman filter development, so I apologize in advance for my questions.

To develop a multi-tracking system using the Kalman filter, do I need one Kalman filter for each object?
I would need a data association step to relate each detection to the right track. And maybe I have to define different motion models to track objects of different types (for example: pedestrians, trucks, bikes, …). Am I right?

• Yes, that’s correct. For the data association step, a simple approach is to associate the objects between frames that have the minimal Euclidean distance, with thresholds for detecting new objects entering the scene or old objects leaving the scene. For more robust data association, particularly in more complex scenarios where objects may occasionally occlude each other, take a look at the Multiple Hypothesis Tracker – I plan to write an article on this in the future.

hello,
I'm a student and really new to the Kalman filter.
Can you explain to me what you mean by “associate the objects between frames that have the minimal Euclidean distance”?
It's hard for me to understand.

• For each video frame, we are detecting the locations of all of the faces in the image. This means that to track individual faces, we need a method to determine whether face X detected in frame 1 is the same as face Y detected in frame 2. A very simple way to solve this problem is to say that if face X is within a certain distance of face Y, we assume they are the same face. Otherwise, we assume they are two different faces. This Wikipedia article gives a good description of the technique (it’s about radar tracking, but the concepts are the same as in face tracking).

do we need that “minimal Euclidean distance” for tracking multi-person for counting people?
thanks,

very cool, thanks for sharing, I’m also working on an optimized code for tracking 3D objects in noisy environment. I’ll share it here as soon as I publish it.

hi, I'm working on a project on the extraction of pulmonary fissures. I want to know: is it possible to detect them using the Kalman filter?
Will it be efficient?
If it is possible with Kalman filters, how would I do it?

• Sorry, I don’t have any experience with medical imaging so can’t really advise whether the Kalman filter is appropriate for your project.

Hi there.
Excellent example, thanks. I've downloaded and tested it a bit, and it seems to work. One thing is a bit odd to me: F is not used in the kalman_update function, although it is one of the inputs. Can you explain why?
Another thing: is there any reason to calculate [x1,y1,x2,y2] simultaneously? I've implemented a single-state Kalman filter (a simplified case of what you've described here, also described at Wikipedia) and used it four times. This way I can use it for any vector of inter-frame translation operators, such as [x, y, scale, rotation, occlusion ratio, etc.], without changing the Kalman filter model. Will be happy to hear your opinion on that.
Best regards,
Nikolay.

• Of course you’re correct, the F parameter is not used and can be removed from the kalman_update function – that’s a typo.

To your second question, if your state variables are independent, they don’t need to be included in the same Kalman filter. Using multiple filters is perfectly reasonable (and should generate the same results).

Hi,
One question ..
For Q, you have …”this gives us a variance of 6.5 pixels”
then in the covariance matrix calculation, the 6.5 is squared again.
I thought the variance is already squared, am I missing something ???
Thanks,
Tien

Hi Alister, I wanted to know how your code can be extended to work with continuous video streaming instead of saved images. If it is not possible to work with continuous video, could you explain the process of splitting videos into frames, since I am unable to work with videos other than the sample data files you have provided? Thank you and keep up the wonderful work.

• Take a look at this example of using OpenCV to capture live video from a camera. The cvQueryFrame function used in the example returns IplImage objects, so you should be able to just substitute the cvLoadImage function in my code with this function.

I installed the OpenCV 1.1pre version, the one you have written the code for. I have VS2008 and 2010 Express. In the initial step it returns the error: ??? Error using ==> loadlibrary at 279: Microsoft Visual C++ 2005 or 2008 is required to use this feature.

Error in ==> opencv_test_prototype at 4

Now, I do not know if we have to build opencv.sln via VS. I found in the wiki that we have to, but the steps mentioned there are incompatible with VS2010 and the earlier OpenCV 1.1. Please let me know how to resolve the issue. Thank you.

hi
I need particle filter Matlb code
can you help me?

Do we have to give the coordinates of the face for each and every frame? And until what value does the loop have to run?

I have a problem in the execution of the Kalman filter MATLAB script. It says: ??? Error using ==> imread at 315
File “images\franck_00000.jpg” does not exist.

Error in ==> kf at 47

Hi Alister,

you wrote a fantastic article, which was very useful for my master's thesis. I'd like to cite you, but I couldn't find your full name; can you post it to me please?

Thank you very much for the article and for your reply as well.

Ales

It was very useful for me, thank you a lot, but I have a little question about the way you reached the value of 6.5 for each of the coordinates. Would you please explain it a little more?

Thank you

hello Alister,
I'm really very new to the Kalman filter. Can you please help me understand a specific part of this article in OpenCV: the part that reads the detected face locations.
detections = textscan(fid, '%[^,] %d %d %d %d', 'delimiter', ',');
and
nsamps = numel(detections{1});
for n = 1:nsamps

% read the next detected face location
meas_x1 = detections{2}(n);
meas_x2 = detections{4}(n);
meas_y1 = detections{3}(n);
meas_y2 = detections{5}(n);
thank you.

Hello Alister,
Thank you for this interesting article, but I have a problem:
Can you tell me please how you calculated the coordinates in the measurement (images.csv) file, because it is my major problem.
Best regards,
hb2012

Good tutorial, but there is a slight problem. I was implementing this filter, but for some reason the K matrix elements were becoming Inf in MATLAB and then NaN. Also, the resulting tracking was worse than when face detection was applied directly to each frame.

Other than the Kalman filter, are there any other filters for object detection?

hi ramaraju,
there are other filters too for object tracking, like the particle filter

Hi, I've followed your code to track pedestrians. The Kalman filter works well except in cases where the tracked object changes its speed suddenly! How can I make the model more responsive to speed changes, or fix the acceleration value?

BR
Esteban

Sir, my name is Rakesh Mandal, a student at NIT Rourkela. I recently worked on robust Kalman filtering for time-delayed systems, and now I want to work on a real-time application of the Kalman filter. My guide showed me this blog. I am trying to run your program, but it shows the error: ??? Input argument “opencvPath” is undefined.

Error in ==> detect_faces at 9

How do I run this program? Please send me the exact code.

Very good guide on how the Kalman filter can be used together with MATLAB. This is one of the reasons that I chose to add this guide to my article http://www.intorobotics.com/how-to-detect-and-track-objects-using-matlab/