Some of you may have already noticed a new product in R2011a: Computer Vision System Toolbox. You may also have noticed that Video and Image Processing Blockset disappeared! These are not unrelated, as we took the blockset, added new computer vision algorithms, and changed the name to Computer Vision System Toolbox. Here’s a list of what’s new:
- extractFeatures function for creating an array of feature vectors (descriptors) based on interest points within an image
- matchFeatures function for finding the best matches between two arrays of feature vectors (descriptors)
- Visualization of epipolar geometry for stereo images using epipolarLine, isEpipoleInImage, and lineToBorderPoints functions
- estimateUncalibratedRectification function for calculating projective transformations to rectify stereo images
- Video segmentation based on Gaussian Mixture Models using ForegroundDetector System object
- YCbCr video format support for ToVideoDisplay block and DeployableVideoPlayer System object
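As a quick sketch of how the new feature functions above fit together, the snippet below detects interest points in two images, builds descriptors with extractFeatures, and pairs them with matchFeatures. This is a hedged illustration, not code from the post: the image files are demo images believed to ship with the toolbox, and the exact signatures should be checked against the shipping documentation.

```matlab
% Hedged sketch: detect, describe, and match interest points (R2011a-era API).
I1 = im2single(rgb2gray(imread('viprectification_deskLeft.png')));
I2 = im2single(rgb2gray(imread('viprectification_deskRight.png')));

% Detect interest points (Harris corners) in each image.
cornerDetector = vision.CornerDetector('Method', ...
    'Harris corner detection (Harris & Stephens)');
pts1 = step(cornerDetector, I1);
pts2 = step(cornerDetector, I2);

% Build a descriptor (feature vector) around each valid interest point.
[features1, validPts1] = extractFeatures(I1, pts1);
[features2, validPts2] = extractFeatures(I2, pts2);

% Find the best matches between the two descriptor arrays.
indexPairs = matchFeatures(features1, features2);
matched1 = validPts1(indexPairs(:, 1), :);
matched2 = validPts2(indexPairs(:, 2), :);
```

The matched point pairs are what functions such as estimateUncalibratedRectification consume when rectifying a stereo pair.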
Calling this product a toolbox also allows us to clarify and highlight the MATLAB capabilities in the product that we’ve had since R2010a. Some of these algorithms overlap with Image Processing Toolbox, but provide support for C code generation and fixed-point modeling. Others are unique to Computer Vision System Toolbox.
So, now we have one product for use in both MATLAB and Simulink that supports the design and simulation of computer vision and video processing systems. It contains MATLAB functions, MATLAB System objects, and Simulink blocks. You can learn more about these capabilities by looking at the documentation for Computer Vision System Toolbox.
Published with MATLAB® 7.12
9 Comments (Oldest to Newest)
How do these functions compare with OpenCV’s routines? For example, the optical flow algorithms. Does the toolbox provide any dense flow routines, like Black & Anandan optical flow?
The toolbox provides Horn-Schunck and Lucas-Kanade non-pyramidal methods. The Black-Anandan algorithm is not included, although you can find Black’s own MATLAB implementation of it here.
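The two methods mentioned in the reply are exposed through the vision.OpticalFlow System object. The sketch below is a hedged illustration assuming the R2011a-era property names and the demo video that ships with the toolbox; check the shipping documentation for the exact strings.

```matlab
% Hedged sketch: dense Horn-Schunck optical flow on a demo video.
videoReader = vision.VideoFileReader('viptraffic.avi');
toSingle    = vision.ImageDataTypeConverter;  % convert frames to single
toGray      = vision.ColorSpaceConverter('Conversion', 'RGB to intensity');
opticalFlow = vision.OpticalFlow('Method', 'Horn-Schunck', ...
    'OutputValue', 'Horizontal and vertical components in complex form');

while ~isDone(videoReader)
    rgbFrame  = step(videoReader);
    grayFrame = step(toGray, step(toSingle, rgbFrame));
    % Complex-valued flow field: real part = x component, imag part = y.
    flow = step(opticalFlow, grayFrame);
end
release(videoReader);
```

Switching 'Method' to 'Lucas-Kanade' selects the other non-pyramidal estimator.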
Which types of features? SIFT, SURF, FAST, other?
Ya—We include FAST, Harris, Shi and Tomasi (minimum eigenvalue). We are actively exploring other methods that might be included in future releases.
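The three detectors listed in the reply are selected through the Method property of vision.CornerDetector. The method strings below are assumed from the R2011a-era documentation and should be verified against the shipping help.

```matlab
% Hedged sketch: the three corner-detection methods named above.
I = im2single(imread('circuit.tif'));  % demo image from Image Processing Toolbox

harrisDetector = vision.CornerDetector('Method', ...
    'Harris corner detection (Harris & Stephens)');
fastDetector   = vision.CornerDetector('Method', ...
    'Local intensity comparison (Rosten & Drummond)');  % FAST
minEigDetector = vision.CornerDetector('Method', ...
    'Minimum eigenvalue (Shi & Tomasi)');

% Each step call returns an M-by-2 matrix of [x y] corner locations.
harrisPts = step(harrisDetector, I);
fastPts   = step(fastDetector, I);
minEigPts = step(minEigDetector, I);
```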
I am trying to use vision.ForegroundDetector and am having some problems. One thing I was not able to figure out is how exactly the k Gaussians are initialized. Currently I specify my own learning rate, the MinimumBackgroundRatio, NumGaussians (set to 3), and an initial variance.
When I begin stepping with this foreground detector, how are the Gaussians in the mixture model initialized? For the very first frame, is there just one Gaussian, with each subsequent frame adding another Gaussian (assuming the pixel value doesn’t match the existing Gaussians) until you build up k Gaussians? Is the very first Gaussian in the mixture initialized to the first value a pixel takes on?
Also, let’s say that I “train” my detector on 200 frames. Is there a way to save the detector’s state so that I can load it again later? Currently, when I try to save the handle to the foreground detector and load it again, it doesn’t keep the mixtures around.
Is there any interface to see the mixtures kept for each pixel at any time frame t? It is hard for me to figure out which parameters are best without being able to see the k Gaussians’ parameters for each pixel.
Any feedback you have would be appreciated.
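For readers following along, the configuration described in the question can be sketched as below. The property names are taken from the vision.ForegroundDetector API; the specific numeric values are placeholders, not recommendations, and the object's internal mixture state is not exposed through public properties.

```matlab
% Hedged sketch of the GMM foreground detector setup described above.
% The numeric values are illustrative placeholders only.
fgDetector = vision.ForegroundDetector( ...
    'NumGaussians',           3, ...
    'LearningRate',           0.005, ...
    'MinimumBackgroundRatio', 0.7, ...
    'InitialVariance',        30*30);

videoReader = vision.VideoFileReader('viptraffic.avi');  % demo video
while ~isDone(videoReader)
    frame  = step(videoReader);
    fgMask = step(fgDetector, frame);  % logical mask: true = foreground
end
release(videoReader);
```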
Mike—I’d like to suggest that you contact MathWorks support for detailed product support questions.
Does this toolbox provide any routines for tracking good features, like “cvGoodFeaturesToTrack” in OpenCV?
A note from MathWorks technical marketing on the previous comment:
The “cvGoodFeaturesToTrack” function in OpenCV finds corners using the method proposed by Shi and Tomasi. We provide similar capabilities in “vision.CornerDetector” through the “Minimum eigenvalue” method. In addition, the toolbox also supports SURF feature detection with the function “detectSURFFeatures”, which we recently added in release 2011b. We are actively looking at approaches to feature tracking and appreciate any input. Thanks!
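The two capabilities mentioned in the reply can be sketched as follows. This is a hedged illustration: detectSURFFeatures is documented as shipping in R2011b, and the method string for vision.CornerDetector is assumed from the documentation of that era.

```matlab
% Hedged sketch: Shi-Tomasi corners (the "good features to track"
% criterion) and SURF interest points.
I = im2single(imread('cameraman.tif'));  % demo image, already grayscale

% Minimum-eigenvalue (Shi & Tomasi) corner detection.
detector = vision.CornerDetector('Method', ...
    'Minimum eigenvalue (Shi & Tomasi)');
corners = step(detector, I);

% SURF interest points (available from R2011b).
surfPoints = detectSURFFeatures(I);          % returns a SURFPoints object
strongest  = surfPoints.selectStrongest(50); % keep the 50 strongest
```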
How can I get the code used in Computer Vision System Toolbox, e.g., the Harris corner detector, extractFeatures, and matchFeatures?