Actin Microfilaments Detector.
Our task was to design and implement a software application that can process sets of images of biological structures within a cell, discriminate and classify the various types of structures, and present the results in a flexible, meaningful and open format.
The user has to be able to adjust the engine parameters on the fly in order to maximize accuracy; as such, the algorithms had to be very computationally efficient.
While one typical application was known at the start of the process, the software had to be designed to be flexible and modular in order to allow reuse in other projects.
The application was implemented as an ImageJ plug-in written in Java.
It consists of 4 logical modules: user interface, image preprocessor, classification engine and reporting module.
The author hopes that only the classification engine will have to be changed when switching to new projects; the image preprocessor should remain unchanged. Minor changes may be needed in the user interface and the reporting module, since information may need to be displayed and reported in ways that cannot be fully anticipated.
The image preprocessing algorithms reuse code written in (2) and are based on research published in (1) and (3).
We used ImageJ's capabilities for importing various types of image time series and for the visual representation of the results.
The results can be saved to a file in a tabular format which can later be imported into Excel for statistical interpretation and further processing.
The user has considerable flexibility in choosing the report content.
Image preprocessing consists of four steps:
1. Image normalization
- we determine the global minimum and maximum of all intensity values occurring in the series of images (Imin, Imax)
- all pixel intensities are then normalized as (i - Imin) / (Imax - Imin)
- the result is a matrix of floating-point intensity values ranging between 0 and 1
- this approach preserves intensity variations across frames (timepoints)
- however, it does not protect against fluctuations in background intensity across frames; we may need to correct this by comparing a background ROI against the global average and the frame average
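The normalization step above can be sketched in Java as follows (the class name and array layout are illustrative, not the plug-in's actual code):

```java
// Minimal sketch of the normalization step, assuming the series is held
// as an array of 2-D float frames. Scales every pixel to [0, 1] using the
// GLOBAL minimum and maximum, so intensity variations across frames survive.
public class SeriesNormalizer {
    public static float[][][] normalize(float[][][] frames) {
        float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
        // first pass: global Imin / Imax over the whole series
        for (float[][] frame : frames)
            for (float[] row : frame)
                for (float v : row) {
                    if (v < min) min = v;
                    if (v > max) max = v;
                }
        float range = max - min;        // assumes max > min
        // second pass: (i - Imin) / (Imax - Imin)
        float[][][] out = new float[frames.length][][];
        for (int t = 0; t < frames.length; t++) {
            out[t] = new float[frames[t].length][];
            for (int y = 0; y < frames[t].length; y++) {
                out[t][y] = new float[frames[t][y].length];
                for (int x = 0; x < frames[t][y].length; x++)
                    out[t][y][x] = (frames[t][y][x] - min) / range;
            }
        }
        return out;
    }
}
```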
2. Image restoration
The goal here is to perform corrections for imperfections in the image.
We deal with 2 types of errors:
- modulations of the background intensity due to non-uniform illumination
- discretization noise from the digital camera
Corrections are easy to perform because the particles are much smaller than the background variations and much larger than the discretization noise.
We can either filter in the frequency domain or convolve in the spatial domain with a kernel that models the 2 types of perturbation.
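As an illustration of the spatial-domain variant, the sketch below combines a small mean filter (to suppress single-pixel camera noise) with subtraction of a wide local mean (to flatten slow background modulation). The window radii and names are assumptions, not the plug-in's actual kernel:

```java
// Illustrative band-pass restoration: smoothed signal minus local
// background, clamped at zero. noiseR should be smaller than the
// particle radius; bgR larger than it.
public class Restorer {
    static float localMean(float[][] img, int cx, int cy, int r) {
        float sum = 0; int n = 0;
        for (int y = Math.max(0, cy - r); y <= Math.min(img.length - 1, cy + r); y++)
            for (int x = Math.max(0, cx - r); x <= Math.min(img[0].length - 1, cx + r); x++) {
                sum += img[y][x]; n++;
            }
        return sum / n;
    }

    public static float[][] restore(float[][] img, int noiseR, int bgR) {
        float[][] out = new float[img.length][img[0].length];
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++)
                out[y][x] = Math.max(0f,
                        localMean(img, x, y, noiseR) - localMean(img, x, y, bgR));
        return out;
    }
}
```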
3. Estimation of the particle locations
This is done by finding local intensity maxima in the image.
A pixel is taken as the approximate location of a particle if no brighter pixel exists within a radius w.
From all the candidates we keep the brightest n% (usually 0.2-1%). Note that the intensity percentile is computed individually for each frame, which protects against drifting (caused by bleaching, for example).
Criticism: this approach cannot reject noise - for instance, a pixel much brighter than it should be, located in an otherwise bright neighbourhood.
Solution: handled in the next processing step.
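The candidate-selection rule above can be sketched as follows (the per-frame percentile filter is omitted for brevity; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// A pixel is kept as a candidate if no brighter pixel lies within
// Euclidean radius w of it.
public class MaximaFinder {
    public static List<int[]> findMaxima(float[][] img, int w) {
        List<int[]> maxima = new ArrayList<>();
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++) {
                boolean isMax = true;
                for (int dy = -w; dy <= w && isMax; dy++)
                    for (int dx = -w; dx <= w && isMax; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny < 0 || nx < 0 || ny >= img.length || nx >= img[0].length)
                            continue;
                        // restrict the square window to a circle of radius w
                        if (dx * dx + dy * dy <= w * w && img[ny][nx] > img[y][x])
                            isMax = false;
                    }
                if (isMax) maxima.add(new int[] { x, y });
            }
        return maxima;
    }
}
```

Note that on perfectly flat regions this sketch reports every pixel as a maximum; the percentile filter described above is what discards such low-intensity candidates.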
4. Refining the particle location
Now that we have the brightest pixel of a particle candidate, what we really want is to find its center of mass.
This will reduce the standard deviation of the position measurement.
We assume that the found local maximum (white pixel in the sample images) is near the true geometric center of the particle.
We compute the center of mass (magenta pixel) of the pixels within a w radius around our candidate (red circle).
If the distance between the candidate and the computed center is greater than 0.5 pixel, we move the position of our candidate 1 pixel towards the computed center and repeat the process.
- Particle refining algorithm (step 1 of 2):
- Particle refining algorithm (step 2 of 2):
At this point we consider particle detection complete.
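The refinement loop described above can be sketched as follows (illustrative, not the plug-in's exact code):

```java
// Iteratively move the candidate toward the intensity-weighted center of
// mass inside radius w, one pixel at a time, until they agree within
// half a pixel; returns the refined {x, y} as doubles.
public class Refiner {
    public static double[] refine(float[][] img, int x, int y, int w) {
        while (true) {
            double sum = 0, cx = 0, cy = 0;
            for (int dy = -w; dy <= w; dy++)
                for (int dx = -w; dx <= w; dx++) {
                    int ny = y + dy, nx = x + dx;
                    if (ny < 0 || nx < 0 || ny >= img.length || nx >= img[0].length)
                        continue;
                    if (dx * dx + dy * dy > w * w) continue;
                    sum += img[ny][nx];
                    cx += img[ny][nx] * nx;
                    cy += img[ny][nx] * ny;
                }
            cx /= sum;  // assumes nonzero intensity within the window
            cy /= sum;
            if (Math.hypot(cx - x, cy - y) <= 0.5)
                return new double[] { cx, cy };
            // shift the candidate one pixel toward the center of mass
            int sx = cx - x > 0.5 ? 1 : cx - x < -0.5 ? -1 : 0;
            int sy = cy - y > 0.5 ? 1 : cy - y < -0.5 ? -1 : 0;
            if (sx == 0 && sy == 0) return new double[] { cx, cy }; // diagonal corner case
            x += sx; y += sy;
        }
    }
}
```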
The classification engines act as expert systems: we use preexisting knowledge about the image content to discriminate between the entities present and to improve the discrimination accuracy.
The application has been used in 2 research projects so far, and 2 different classification engines were developed, one for each.
Two-channel classification engine.
The first project was designed to help find relationships between the size and motility of actin comets (formations known as "tails") and a triggering factor present on the cell surface at the "growing" end of the comet (formations named "heads").
The "heads" information was presented on the red channel, the "tails" information (actin) on the green.
The algorithm individually identifies the heads and tails candidates in each channel and tries to correlate them based on proximity.
- Original image - merged red and green channels:
- Results (heads, tails, matched pairs) drawn over the green channel image:
The original image is on the left; the heads are represented in red while the actin is represented in green. The right image shows the results of the algorithm superimposed over the green (actin) channel.
- the matched heads and tails are represented in the results channel as red (heads) and yellow (tails) circles; visual inspection shows that the match is quite accurate in this case
- some heads have no associated tail (represented with small green circles in the result channel)
- there is a large blob of "non-comet" actin in the middle of the image ("orphan" blue circles in the result channel)
- the heads are not co-localized with the tails (visible in the original image as well - co-localization would have implied yellow, not red). The distance between the head and tail centers of mass (light blue line) may be used to predict the near-term particle trajectory and velocity.
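The proximity-based correlation of the two channels can be illustrated with a simple greedy nearest-neighbour pairing; the engine's actual matching rules may differ, and all names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Pair each head with the nearest unmatched tail within a cutoff
// distance; heads and tails are given as {x, y} coordinate pairs.
public class Matcher {
    public static Map<Integer, Integer> match(double[][] heads, double[][] tails,
                                              double maxDist) {
        Map<Integer, Integer> pairs = new HashMap<>();
        boolean[] used = new boolean[tails.length];
        for (int h = 0; h < heads.length; h++) {
            int best = -1;
            double bestD = maxDist;
            for (int t = 0; t < tails.length; t++) {
                if (used[t]) continue;
                double d = Math.hypot(heads[h][0] - tails[t][0],
                                      heads[h][1] - tails[t][1]);
                if (d <= bestD) { bestD = d; best = t; }
            }
            if (best >= 0) { used[best] = true; pairs.put(h, best); }
        }
        return pairs;  // heads absent from the map are "orphans"
    }
}
```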
The user can modify some of the parameters used within the algorithm in order to maximize accuracy; the results of these changes are displayed in real time.
The various available reports show the position and intensity of the particles, the paired heads and tails together with the distance between them, and the false hits (orphaned heads or tails).
One-channel classification engine.
The second engine was designed to identify and discriminate between 3 different formations within a cell: long actin fibers (part of the cytoskeleton), actin comets and circular blobs of actin. Only one channel of information is available.
We named this algorithm "adaptive thresholding of boundary intensity".
We sample the pixel intensity on a circle surrounding the particle and use the resulting distribution to categorize particles.
- Plot of brightness along the particle's circumference:
We sample in 10-degree increments. The threshold fluctuates from one particle to another, being based on the min/max intensity sampled for that particle.
The results are adjusted based on the distances between the min/max intensity on the boundary and the particle's maximum intensity - hence the "adaptive" part of the name.
- Identified particles represented over the original image:
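The boundary sampling can be sketched as below; bilinear interpolation and all names are assumptions, and the classification rules applied to the resulting profile are not reproduced here:

```java
// Sample the intensity on a circle of radius r around (cx, cy) in
// 10-degree steps, interpolating bilinearly between pixels.
public class BoundarySampler {
    public static double[] sampleCircle(float[][] img, double cx, double cy, double r) {
        double[] profile = new double[36];          // 360 / 10 samples
        for (int i = 0; i < 36; i++) {
            double a = Math.toRadians(i * 10);
            profile[i] = bilinear(img, cx + r * Math.cos(a), cy + r * Math.sin(a));
        }
        return profile;
    }

    // bilinear interpolation; assumes (x, y) is at least one pixel
    // away from the right and bottom image borders
    static double bilinear(float[][] img, double x, double y) {
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        double fx = x - x0, fy = y - y0;
        return img[y0][x0] * (1 - fx) * (1 - fy) + img[y0][x0 + 1] * fx * (1 - fy)
             + img[y0 + 1][x0] * (1 - fx) * fy + img[y0 + 1][x0 + 1] * fx * fy;
    }
}
```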
A more general engine is in the final stages of development; it computes spatial moments (averages of intensity weighted by some function of distance) of the particles present in the image and tries to find clusters in the resulting distributions which could be used for particle discrimination.
Some of the variables we measure are: intensity (m0), variance (m2), symmetry, kurtosis, orientation, eccentricity.
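As an illustration, m0 and m2 could be computed as follows, using the common definitions (m0 = total intensity, m2 = intensity-weighted variance of squared distance from the center); the engine under development may use different definitions:

```java
// Spatial moments of the pixels within radius w of the particle at
// (px, py): returns { m0, m2 }.
public class Moments {
    public static double[] compute(float[][] img, int px, int py, int w) {
        double m0 = 0, m2 = 0;
        for (int dy = -w; dy <= w; dy++)
            for (int dx = -w; dx <= w; dx++) {
                int ny = py + dy, nx = px + dx;
                if (ny < 0 || nx < 0 || ny >= img.length || nx >= img[0].length)
                    continue;
                if (dx * dx + dy * dy > w * w) continue;
                m0 += img[ny][nx];                       // total intensity
                m2 += img[ny][nx] * (dx * dx + dy * dy); // distance-weighted
            }
        return new double[] { m0, m0 > 0 ? m2 / m0 : 0 };
    }
}
```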
1. In the beginning.
This tutorial describes the steps needed for the 2-phase analysis - the most complex situation. For the single-channel case, skip step 3.
Place the files belonging to the 2 channels in 2 different directories, for example red and green. Please remember that the images must all be the same size for batch processing.
You also need an equal number of images in both channels.
2. Detect heads:
2.1 Do File / Import / Image sequence..., navigate to the heads dir and open the 1st image in the series.
2.2. Press OK in the Sequence Options dialog.
2.3. The 1st image opens; at this point you may want to use the Magnifying glass tool to enlarge it - usually 150% to 300% should be enough.
2.4. Open the plugin from the menu: Plugins / Head detector / Head detector
2.5. The Select mode dialog opens. Depending on the version, it may look slightly different. Check the 1st checkbox (One channel detection). Do not check the last 2 boxes (debug). Choose whatever you see fit from the middle group (at this time we recommend choosing "show shape" regardless of the rest, if that option is present). Press OK.
2.6. The detector dialog box opens. Choose a radius and a threshold. The default values may be a good place to start. Press Preview to see the results. Do not press OK or cancel!
If you're unhappy with the results, modify radius and threshold until you get the results you desire.
2.7. You may want to use the navigation scrollbar to see how the other images in the series look.
2.8 Press Save to save a text file with the results. Choose a new directory for this.
3. Detect tails:
3.1 Do File / Import / Image sequence..., navigate to the tails dir and open the 1st image in the series.
3.2. Press OK in the Sequence Options dialog.
3.3. The 1st image of the series opens; at this point you may want to use the Magnifying glass tool to enlarge it. We recommend using the same magnification as in the previous step.
3.4. Open the plugin from the menu: Plugins / Head detector / Head detector
3.5. The Select mode dialog opens. Depending on the version, it may look slightly different. Check the 2nd checkbox (Identify and match tails). Do not check the last 2 boxes (debug). Choose whatever you see fit from the middle group (Show shape is not to be used at this point). Press OK.
Navigate to the folder with the heads results and open the 1st file.
3.6. The detector dialog box opens. Choose a radius and a threshold. Good choices are a slightly larger radius than the one used for heads and a significantly larger threshold (2x the one used for heads recommended).
Press Preview to see the results. Do not press OK or cancel!
If you're unhappy with the results, modify radius and threshold until you get the results you desire.
3.7. You may want to use the navigation scrollbar to see how the other images in the series look.
3.8 Press Save to save a text file with the results. Choose a new directory for this.
4. Use the results.
It may be useful to import the results files in Excel for further processing.
Report file formats
Here is the report file format for the red channel:
Header (1 row per frame):
- "frame" keyword, frame number, global min, global max
Body (repeated sequence, 1 row per particle):
- x-coord, y-coord, intensity, m2, m3x, m3y, m4x, m4y, orientation, eccentricity, type
- frame number - note that images are numbered starting with 1 while frames start with 0
- global min, max - numbers used internally when absolute values are needed
- x-coord, y-coord - particle coordinates
- intensity - particle intensity, the sum of the intensities of all pixels within the circle
- m2 - variance
- m3x, m3y, m4x, m4y, orientation, eccentricity - these are part of an ongoing project, do not use them
- type - the shape of the particle, can be:
- comet (elongated asymmetrical object; note that the beginning and the end of a fiber, if well defined, may be detected as comet)
- fiber (long symmetrical object)
- circular object
- unknown (the engine cannot decide on the shape of the object, most likely different formations partially overlap)
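For further processing outside Excel, a report file in the layout described above could be parsed as sketched below. This assumes whitespace-separated values; the actual delimiter and field order should be checked against a real report file:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Group the particle rows of a report file under their frame number,
// keyed by the "frame" header lines.
public class ReportReader {
    public static Map<Integer, List<String[]>> parse(List<String> lines) {
        Map<Integer, List<String[]>> frames = new LinkedHashMap<>();
        List<String[]> current = null;
        for (String line : lines) {
            String[] tok = line.trim().split("\\s+");
            if (tok.length == 0 || tok[0].isEmpty()) continue;   // blank line
            if (tok[0].equalsIgnoreCase("frame")) {
                current = new ArrayList<>();
                frames.put(Integer.parseInt(tok[1]), current);   // frame number
            } else if (current != null) {
                current.add(tok);                                // one particle row
            }
        }
        return frames;
    }
}
```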
This plug-in requires a working version of ImageJ 1.38 or later.
Download the plug-in archive from here
After downloading, extract it to the plugins folder of your ImageJ installation. It should appear in the plug-in list the next time you run ImageJ.
1. Sbalzarini, I.F., Koumoutsakos, P., 2005. Feature point tracking and trajectory analysis for video imaging in cell biology. J. Struct. Biol. 151, 182–195. (Institute of Computational Science, ETH Zurich, Switzerland.)
2. Levy, G. Original ImageJ algorithm. Computational Biophysics Lab, ETH Zurich.
3. Crocker, J.C., Grier, D.G., 1996. Methods of digital video microscopy for colloidal studies. J. Coll. Interface Sci. 179, 298–310.
- 10 Jul 2008