With the very first version of the Image Processing Toolbox, released more than 22 years ago, you could convert a gray-scale image to binary using the function im2bw.
I = imread('rice.png');
imshow(I)
title('Original gray-scale image')

bw = im2bw(I);
imshow(bw)
title('Binary image')
You can think of this as the most fundamental form of image segmentation: separating pixels into two categories (foreground and background).
Aside from the introduction of graythresh in the mid-1990s, this area of the Image Processing Toolbox has stayed quietly unchanged. Now, suddenly, the latest release (R2016a) has introduced an overhaul of binarization. Take a look at the release notes:
imbinarize, otsuthresh, and adaptthresh: Threshold images using global and locally adaptive thresholds
The toolbox includes the new function imbinarize, which converts grayscale images to binary images using either a global threshold or a locally adaptive threshold. The toolbox also includes two new functions, otsuthresh and adaptthresh, that provide a way to determine the threshold needed to convert a grayscale image into a binary image.
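To get a feel for the new interfaces before the detailed posts arrive, here is a quick sketch based on the documented signatures (the histogram bin count below is just an illustrative choice):

```matlab
I = imread('rice.png');

% Global threshold (Otsu's method, the default)
bw_global = imbinarize(I);

% Locally adaptive threshold, helpful with nonuniform illumination
bw_adaptive = imbinarize(I, 'adaptive');

% Or compute a threshold explicitly and pass it in.
% otsuthresh works on histogram counts, so you control the binning.
counts = imhist(I, 64);    % 64 bins is an arbitrary choice for illustration
T = otsuthresh(counts);    % T is a normalized value in [0, 1]
bw = imbinarize(I, T);
```

Because otsuthresh takes histogram counts rather than an image, you can threshold based on a histogram computed any way you like, then apply it with imbinarize.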
What's up with this? Why were new functions needed?
I want to take advantage of this functionality update to dive into the details of image binarization in a short series of posts. Here's what I have in mind:
- The state of image binarization in the Image Processing Toolbox prior to R2016a. How did it work? What user pains motivated the redesign?
- How binarization works in R2016a
- Otsu's method for computing a global threshold
- Bradley's method for computing an adaptive threshold
Most of this I haven't written yet. If there is something in particular you'd like to know, tell me in the comments, and I'll try to work it in.
Published with MATLAB® R2016a
6 Comments
Steve, I am really looking forward to your new posts about binarization. I would like to understand the details of Otsu’s and Bradley’s methods.
Can’t wait for your series of posts, Steve. I’ve been unhappy with MATLAB’s way of dealing with binarization for so long that I’m really curious to see what has really changed.
Going into the details of both methods is a good way to point out their limitations, but if it suits you, I’d also be glad to know whether you ever considered direct implementation (as functions) of automated thresholding methods (entropy methods, inter-class variance, …).
Sure, one can find quite a lot of those on the MATLAB File Exchange, but I rather think that the Image Processing Toolbox should propose its own set of functions for automated thresholding. I often use ImageJ or Fiji for quick “prototyping,” and I find their Automated Thresholding plugin very handy in that it gives me a quick overview of what could work with a given type of image.
Pierre—Thanks for your input. By “direct implementation of automated threshold methods,” do you mean something different than a function such as graythresh, which implements Otsu’s method?
I’m eager to read this upcoming series! Looking at graythresh as it stands in 2016a, it appears to be basically a wrapper for otsuthresh (and otsuthresh was essentially cut-and-pasted from the old graythresh). I am curious why someone would ever want to use otsuthresh over graythresh now. Is it only for the case where someone wants a different quantization of the image histogram?
I’d like to see those articles written. However, I rarely use those functions. They seem to work well only for high-contrast, bimodal images. One thing I’d like to see you add is the triangle threshold. In my experience it’s useful far more often than Otsu. Why? It’s great for skewed histograms. When doing particle sizing or getting distributions of almost anything you can imagine, you rarely have bimodal Gaussian humps. It’s more common to have a skewed unimodal histogram, like a log-normal or Rayleigh/Cauchy-looking distribution. Just look at almost any particle size distribution of areas, circularities (which I’d like to see added to regionprops), perimeters, aspect ratios, or whatever. The triangle threshold works great for these skewed unimodal distributions.

I know of a published case where they used locally adaptive Otsu along with Sobel edge detection to enhance and flatten handwriting on a varying background. I have code for that if you want. So, like your rice example, locally adaptive thresholds are good. Usually what I did was try to flatten the background (for example with adapthisteq) and then use a global threshold. I’ve never heard of the Bradley method, so I’d like to learn what kinds of images it excels at.

Another thing I don’t like about the built-in functions is that they always work with images in the normalized range 0-1, when in the real world virtually no one has those (unless they make them because they have to in order to use your functions). It would be nice if you had a version that worked with gray levels directly. Perhaps you could pass in a ‘GrayLevel’ or ‘Normalized’ option to the functions.
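For readers curious about the triangle threshold mentioned above, here is a rough sketch of one common formulation: draw a line from the histogram peak to the far end of the tail, and place the threshold at the bin farthest from that line. This is my own illustrative helper, not a toolbox function, and it assumes the histogram’s long tail lies to the right of the peak:

```matlab
function T = triangle_thresh(I)
% TRIANGLE_THRESH  Illustrative sketch of the triangle thresholding method.
% Assumes the long tail of the histogram is to the right of the peak.
    nbins = 256;
    counts = imhist(I, nbins);
    [peakVal, peakIdx] = max(counts);
    % Last nonzero bin marks the end of the tail.
    endIdx = find(counts > 0, 1, 'last');
    % Line from the histogram peak to the tail end.
    x1 = peakIdx; y1 = peakVal;
    x2 = endIdx;  y2 = counts(endIdx);
    % Perpendicular distance from each bin (b, counts(b)) to that line.
    b = (peakIdx:endIdx)';
    d = abs((y2 - y1)*b - (x2 - x1)*counts(b) + x2*y1 - y2*x1) ...
        / hypot(x2 - x1, y2 - y1);
    % Threshold at the bin of maximum distance, normalized to [0, 1].
    [~, k] = max(d);
    T = (b(k) - 1) / (nbins - 1);
end
```

Since the returned threshold is normalized to [0, 1], it can be passed directly to im2bw (or, in R2016a, to imbinarize) as the threshold argument.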
Steve — Yes, I meant something other than graythresh and Otsu’s method, which, as Mark mentioned, works well for high-contrast images. There are methods other than Otsu’s that may work better on lower-contrast images (again, I fully agree with Mark’s comment on skewed unimodal histograms). I’m definitely not a specialist here, but I’d love to see other thresholding functions appear in MATLAB (entropy, triangle, moments, for instance).
Then again, I’ll have to try out the new adaptthresh function to see how it works on my usual “not that much contrast” images.