Back in R2016b, we introduced tall arrays to facilitate, among other things, processing arbitrarily large datasets. This works nicely for tables or timetables, for example, and works in conjunction with datastores-- repositories for collections of data that are too large to fit in memory.
I'd like to use my POTW pulpit this week to highlight a similar new capability in the image processing domain. In R2021a, we released the blockedImage object: an image made from discrete blocks. As in the case of tall(), blockedImage() facilitates processing images or volumes too large to fit into memory. With a blocked image, in conjunction with blockedImageDatastore(), you can process arbitrarily large images without running out of memory.
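To make that concrete, here's a minimal sketch of the basic workflow. The file name is a placeholder; any large TIFF works the same way, and the exact behavior depends on your Image Processing Toolbox version:

```matlab
% Create a blocked image backed by a file on disk; pixels are read lazily,
% one block at a time, rather than all at once.
bim = blockedImage("myHugeImage.tif");   % placeholder file name

% Process every block. The function receives a struct whose Data field
% holds the current block's pixels; the result is itself a blockedImage.
bgray = apply(bim, @(bs) im2gray(bs.Data));

% If the result fits in memory, gather() assembles it into an ordinary array.
% grayPixels = gather(bgray);
```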
Sure, we've had blockproc() to facilitate blockwise processing of images since 2009. So what's the big deal?
blockedImage() has several notable advantages over blockproc():
- blockedImage has native support for multiresolution datasets. You can easily make correspondences between image regions in a pyramidal dataset (e.g., "Where is this feature at the finest resolution level?"). That's possible because blockedImage supports real-world spatial units.
- Conditional processing is much easier with blockedImage. While blockproc works on every block no matter what, blockedImage can skip blocks by using the "BlockLocationSet" parameter of blockedImageDatastore. This can be really useful for class balancing during training, for example.
- You can specify blocks with blockedImage that don't strictly partition the data. You can have overlapping blocks. You can have blocks with gaps between them.
- It's more natural to work with non-image results, such as computing a histogram for each block.
- It's a lot easier to work with 3D--and ND!--data.
- blockedImage's adapters are much nicer to work with... or not to work with at all. They're totally optional!
- The apply() syntax is also nicer to work with in blockedImage.
- In conjunction with blockedImageDatastore, blockedImage provides an easy way to prepare datasets for training machine learning models.
- You can create arrays of blockedImages, and do batch processing of images, blockwise!
- A "crop" method of blockedImage lets you create virtual cropped subimages--with no data copy. This is memory efficient, and can be useful if, for example, your image contains multiple logical resolutions or entities--like multiple tissue samples on a single slide image.
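To illustrate the conditional-processing point above, here's a hedged sketch of skipping uninteresting blocks with selectBlockLocations and the "BlockLocationSet" parameter. The file name, intensity threshold, and block size are all placeholders:

```matlab
bim = blockedImage("slide.tif");                 % placeholder file name

% Build a coarse logical mask of the regions worth processing -- say,
% tissue (dark) versus background (bright) in a stained slide.
bmask = apply(bim, @(bs) im2gray(bs.Data) < 200);

% Keep only blocks where at least half the pixels fall inside the mask.
bls = selectBlockLocations(bim, ...
    "Masks", bmask, "InclusionThreshold", 0.5, "BlockSize", [224 224]);

% The datastore then visits just those blocks -- handy for class balancing.
bimds = blockedImageDatastore(bim, "BlockLocationSet", bls);
while hasdata(bimds)
    blk = read(bimds);    % one selected block at a time
end
```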
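And for the non-image-results point: apply() isn't limited to returning pixel tiles. A sketch computing a per-block intensity histogram (again with a placeholder file name):

```matlab
bim = blockedImage("myHugeImage.tif");   % placeholder file name

% Each output "block" here is a 256-by-1 vector of bin counts rather than
% an image tile; the result is a blockedImage whose blocks hold those counts.
bhist = apply(bim, @(bs) imhist(im2gray(bs.Data)));
```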
However you choose to use them, processing images too large to fit in memory has never been easier, more powerful, or more flexible! Your legacy code that uses blockproc() will continue to work, but for your newer analyses--especially if you are training deep learning models--I encourage you to give blockedImage() a try!
As always, I welcome your thoughts and comments.