## Tuesday, March 3, 2015

### Multiscale localisation in 3D and confocal image deconvolution: Howto

Following last week's impulse, here is a tutorial about the subtleties of multiscale particle localisation from confocal 3D images.

For reference, this has been published as

A novel particle tracking method with individual particle size measurement and its application to ordering in glassy hard sphere colloids.
Leocmach, M., & Tanaka, H.
Soft Matter (2013), 9, 1447–1457.
doi:10.1039/c2sm27107a
Also on arXiv.
The corresponding code is available at Sourceforge, including the IPython notebook corresponding to the present tutorial.

# Particle localisation in a confocal image

First, import the necessary libraries to load and display images.

import numpy as np
from scipy.ndimage import gaussian_filter
from matplotlib import pyplot as plt
from colloids import track
%matplotlib inline


We also need to define functions to quickly display tracking results.

def draw_circles(xs, ys, rs, **kwargs):
    for x, y, r in zip(xs, ys, rs):
        plt.gca().add_patch(plt.Circle((x, y), radius=r, **kwargs))

def display_cuts(imf, centers, X=30, Y=25, Z=30):
    """Draw three orthogonal cuts with corresponding centers"""
    plt.subplot(1,3,1)
    draw_circles(centers[:,0], centers[:,1], centers[:,-2], facecolor='none', edgecolor='g')
    plt.imshow(imf[Z], 'hot', vmin=0, vmax=255)
    plt.subplot(1,3,2)
    draw_circles(centers[:,0], centers[:,2], centers[:,-2], facecolor='none', edgecolor='g')
    plt.imshow(imf[:,Y], 'hot', vmin=0, vmax=255)
    plt.subplot(1,3,3)
    draw_circles(centers[:,1], centers[:,2], centers[:,-2], facecolor='none', edgecolor='g')
    plt.imshow(imf[:,:,X], 'hot', vmin=0, vmax=255)


## Raw confocal image

Load the test data, which is only a detail of a full 3D stack. Display different cuts.

Note that raw confocal images are heavily distorted in the $z$ direction. We actually have 4 particles in the image: two small ones on top of each other, one on the side, and a slightly larger one below and to the side.

imf = np.fromfile('../../../multiscale/test_input/Z_elong.raw', dtype=np.uint8).reshape((55,50,50))
plt.subplot(1,3,1).imshow(imf[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imf[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imf[:,:,30], 'hot',vmin=0,vmax=255);


Create a finder object of the same shape as the image. The same interface works seamlessly with 3D images.

finder = track.MultiscaleBlobFinder(imf.shape, Octave0=False, nbOctaves=4)


Feed the image to the tracker with default tracking parameters. The output has five columns: $(x,y,z,r,i)$, where $r$ is a first approximation of the radius of the particle (see below) and $i$ is a measure of the brightness of the particle.

centers = finder(imf)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min())
display_cuts(imf, centers)


(2, 5)
smallest particle detected has a radius of 4.40672 px


This is a pretty bad job: only two particles are detected. The top and side small particles are missed.

Let us try disabling the overlap removal.

centers = finder(imf, removeOverlap=False)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min())
display_cuts(imf, centers)


(4, 5)
smallest particle detected has a radius of 4.40672 px



Now we have all the particles.

We have two effects at play here that fool the overlap removal:

- The radii guessed by the algorithm are too large.
- The particles aligned along $z$ are localised too close together.

Both effects are due to the elongation in $z$.

## Simple deconvolution of a confocal image

The image $y$ acquired by a microscope can be expressed as

$y = x \star h + \epsilon,$

where $\star$ is the convolution operator, $x$ is the perfect image, $h$ is the point spread function (PSF) of the microscope, and $\epsilon$ is noise independent of both $x$ and $h$. The process of estimating $x$ from $y$ and some theoretical or measured expression of $h$ is called deconvolution. Deconvolution in the presence of noise is a difficult problem. Fortunately, here we do not need to reconstruct the original image, but only our first Gaussian-blurred version of it. Indeed, after a reasonable amount of blur in three dimensions, the noise can be neglected and we thus obtain:

$y_0 \approx G_0 \star h,$

or in Fourier space

$\mathcal{F}[y_0] = \mathcal{F}[G_0] \times \mathcal{F}[h].$

Once $\mathcal{F}[h]$ is known, the deconvolution reduces to a simple division in Fourier space.
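To make this Fourier-space division concrete, here is a minimal 1D sketch with a toy signal and a Gaussian PSF of my own choosing (an illustration, not the library's implementation):

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20] = x[40] = 1.0                       # two point-like "particles"
# Gaussian PSF, rolled so it is centred on index 0 for circular convolution
t = np.arange(n)
h = np.exp(-0.5 * ((t - n//2) / 1.5)**2)
h /= h.sum()
h = np.roll(h, -n//2)
# y = x convolved with h (no noise in this toy example)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
# deconvolution: a simple division in Fourier space
x_rec = np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(h)))
print(np.allclose(x_rec, x, atol=1e-6))  # → True
```

With real data, the noise term $\epsilon$ makes this naive division blow up at high frequencies, which is why only the already blurred image is deconvolved in practice.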

### Measuring the kernel

$\mathcal{F}[h]$ should be measured in an isotropic system and in the same conditions (magnification, number of pixels, indices of refraction, etc.) as the experiment of interest.

Since a 3D image large enough to be isotropic is too big to share in an example, I just use the image of interest. Don't do this at home.

imiso = np.fromfile('../../../multiscale/test_input/Z_elong.raw', dtype=np.uint8).reshape((55,50,50))
kernel = track.get_deconv_kernel(imiso)
plt.plot(kernel);


The kernel variable here is actually $1/\mathcal{F}[h]$.

### Kernel test

What not to do: deconvolve the unblurred image.

imd = track.deconvolve(imf, kernel)
plt.subplot(1,3,1).imshow(imd[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imd[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imd[:,:,30], 'hot',vmin=0,vmax=255);


Noisy, isn't it? Yet we have saturated the colour scale between 0 and 255; otherwise the artifacts would be even more apparent.

What the program will actually do: deconvolve the blurred image.

imb = gaussian_filter(imf.astype(float), 1.6)
plt.subplot(1,3,1).imshow(imb[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imb[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imb[:,:,30], 'hot',vmin=0,vmax=255);


imbd = track.deconvolve(imb, kernel)
plt.subplot(1,3,1).imshow(imbd[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imbd[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imbd[:,:,30], 'hot',vmin=0,vmax=255);


The way the noise is amplified by deconvolution is obvious when we compare the histograms of the 4 versions of the image.

for i, (l, im) in enumerate(zip(['raw', 'blurred', 'deconv', 'blur deconv'], [imf, imb, imd, imbd])):
    plt.plot(
        np.arange(-129,363,8),
        np.histogram(im, np.arange(-129,371,8))[0]*10**i,
        label=l+' (%d,%d)'%(im.min(), im.max()))
plt.xlim(-130,360);
plt.yscale('log');
plt.legend(loc='upper right');


Obviously negative values should be discarded, and they will be discarded by the algorithm.
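For reference, this clipping is a one-liner in numpy (toy values below, not the library's internals):

```python
import numpy as np

imbd_toy = np.array([[-12.5, 3.0], [140.2, -0.7]])  # toy deconvolved values
clipped = np.clip(imbd_toy, 0, None)                # discard negative values
print(clipped.tolist())  # → [[0.0, 3.0], [140.2, 0.0]]
```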

### Localisation using the deconvolved image

Still without removing overlapping particles, we now use the deconvolution kernel.

centers = finder(imf, removeOverlap=False, deconvKernel=kernel)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min())
display_cuts(imf, centers)


(5, 5)
smallest particle detected has a radius of 3.79771 px


All particles are found, and they actually do not overlap. However we get a spurious detection, probably some noise that got amplified by the deconvolution. It is easily removed by a threshold on intensity.

print centers


[[ 16.31334384  44.07373885  21.50444345   3.79770755  -0.37019082]
[ 26.79860906  22.0435863   21.89833819   3.89324706 -15.95536745]
[ 26.92095001  21.83422163  32.72685299   3.85903411 -17.3271687 ]
[ 26.11610631  31.30829898  36.11323148   4.33867807 -21.06731591]
[ 31.77923381  26.78655827  44.05564076   4.03334069 -20.1763175 ]]


centers = centers[centers[:,-1]<-1]
display_cuts(imf, centers)


We can also display the results with respect to the blurred and deconvolved version of the image, which we can find by introspection.

display_cuts(finder.octaves[1].layersG[0], centers)


## From scales to sizes

This is pretty much like in 2D.

from colloids.particles import get_bonds
s = track.radius2sigma(centers[:,-2], dim=3)  # revert radii to Gaussian scales
bonds, dists = get_bonds(positions=centers[:,:-2], radii=centers[:,-2], maxdist=3.0)
brights1 = track.solve_intensities(s, bonds, dists, centers[:,-1])
radii1 = track.global_rescale_intensity(s, bonds, dists, brights1)
centers2 = np.copy(centers)
centers2[:,-2] = radii1  # replace the first-guess radii by the corrected ones
display_cuts(imf, centers2)


## Sunday, February 22, 2015

### Multiscale localisation in 2D: Howto

My multiscale tracking algorithm has been out for 3 years, published and on arXiv. However, its availability at Sourceforge has been mostly theoretical, since I never wrote a proper tutorial on how to use the code.

Today's post is an attempt in this direction. I will explain how to localise and size particles on a 2D image using the Python implementation of the algorithm. This is also one of my first attempts to use IPython notebooks and to export them to this blog. You can find the original notebook in the Sourceforge repository (/trunk/python/colloids/notebooks/).

Prerequisites are the Python code from the repository, in your PYTHONPATH. You will probably need a few other packages (python-bs4, libboost-all-dev, python-numexpr) and a working C++ compiler (GCC, or MinGW on Windows). Of course, Python 2.?, numpy and IPython are needed. If you don't know what IPython is and you are interested in my code, then you should learn about it.

First, import the necessary libraries to load and display images.

import numpy as np
from matplotlib import pyplot as plt
from colloids import track
%matplotlib inline


We also need to define a function to quickly plot circles of different sizes on a figure

def draw_circles(xs, ys, rs, **kwargs):
    for x, y, r in zip(xs, ys, rs):
        plt.gca().add_patch(plt.Circle((x, y), radius=r, **kwargs))


Load a picture containing bright particles on a dark background and immediately display this image in a figure.

im = plt.imread('droplets.jpg')
plt.imshow(im, 'hot');
plt.show()


Create a finder object of the same shape as the image. This actually creates many sub-finder objects, one for each "Octave". Octave 1 has the same shape as the image. Octave 2 is twice smaller. Octave 3 is four times smaller than Octave 1, etc. Optionally, you can create Octave 0, which is twice bigger than Octave 1, but it takes a lot of memory and tracking results are bad, so here we set Octave0=False. The maximal scale at which particles are looked for is set by nbOctaves.

finder = track.MultiscaleBlobFinder(im.shape, Octave0=False, nbOctaves=4)
finder?


Feeding the image to the tracker with default tracking parameters already does a great job. The output has four columns: $(x,y,r,i)$, where $r$ is a first approximation of the radius of the particle (see below) and $i$ is a measure of the brightness of the particle.

The smallest particle that can be detected with the default parameter $k=1.6$ in 2D is about 3 pixels in radius. In 3D it would be 4 pixels in radius. To increase this smallest scale you can increase the parameter $k$. Decreasing $k$ below 1.2 is probably not a good idea since you will detect noise as particles. Redo your acquisition with higher resolution instead.

centers = finder(im, k=1.6)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min())
draw_circles(centers[:,0], centers[:,1], centers[:,2], facecolor='none', edgecolor='g')
plt.imshow(im, 'hot');


(305, 4)
smallest particle detected has a radius of 0.00130704 px


However some small particles are missing, for example near the large particle at the center. Maybe it is because by default the MultiscaleFinder removes overlapping particles? Let us try to disable this removal.

centers = finder(im, removeOverlap=False)
draw_circles(centers[:,0], centers[:,1], centers[:,2], facecolor='none', edgecolor='g')
plt.imshow(im, 'hot');


Well, we gained only spurious detections.

We can also try to disable a filter that rejects local minima that are elongated in space.

centers = finder(im, maxedge=-1)
draw_circles(centers[:,0], centers[:,1], centers[:,2], facecolor='none', edgecolor='g')
plt.imshow(im, 'hot');


For sanity, it is probably better to filter out the particles with a radius smaller than 1 pixel.

centers = centers[centers[:,-2]>1]


## Introspection

The intermediate steps of the algorithm are accessible. For example one can access the octaves (lines in the figure below) and their successively blurred versions of the original image.

#get maximum intensity to set color bar
m = max([oc.layersG.max() for oc in finder.octaves[1:]])
for o, oc in enumerate(finder.octaves[1:]):
    for l, lay in enumerate(oc.layersG):
        a = plt.subplot(len(finder.octaves)-1, len(oc.layersG), len(oc.layersG)*o + l + 1)
        #hide ticks for clarity
        a.axes.get_xaxis().set_visible(False)
        a.axes.get_yaxis().set_visible(False)
        plt.imshow(lay, 'hot', vmin=0, vmax=m)



One can also access the differences of Gaussians. Here relevant values are negative, so we invert the colours and saturate at zero.

#get maximum intensity to set color bar
m = min([oc.layers.min() for oc in finder.octaves[1:]])
for o, oc in enumerate(finder.octaves[1:]):
    for l, lay in enumerate(-oc.layers):
        a = plt.subplot(len(finder.octaves)-1, len(oc.layers), len(oc.layers)*o + l + 1)
        #hide ticks for clarity
        a.axes.get_xaxis().set_visible(False)
        a.axes.get_yaxis().set_visible(False)
        plt.imshow(lay, 'hot', vmin=0, vmax=-m)



Each OctaveFinder looks for local minima (very negative values) in the difference-of-Gaussians space. A local minimum is understood relative to both space $(x,y)$ and scale $s$. The job of the MultiscaleFinder is to coordinate the action of the Octaves to return a coherent result.
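To make "local minimum in space and scale" concrete, here is a toy sketch of mine (not the library's code) on a tiny (scale, $x$) difference-of-Gaussians stack:

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Toy 3-layer difference-of-Gaussians stack, axes (scale, x).
# The deepest (most negative) response sits at layer 1, position 2.
dog = np.array([
    [0., -1., -2., -1., 0.],
    [0., -2., -5., -2., 0.],
    [0., -1., -3., -1., 0.],
])
# keep a point only if it is the minimum of its 3x3 (scale, space) neighbourhood
is_min = (dog == minimum_filter(dog, size=3)) & (dog < 0)
print(np.argwhere(is_min).tolist())  # → [[1, 2]]
```

The scale index of the minimum (here layer 1) is what gets converted into a particle size below.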

## From scales to sizes

The multiscale algorithm finds local minima in space and scale $s$. How can we convert scales into actual sizes (particle radius $r$)? If the particles are far apart, their respective spots do not overlap, so we can use the formula

$R = s \sqrt{\frac{2d\ln \alpha}{1-\alpha^{-2}}}$

where $d$ is the dimensionality of the image (here 2) and $\alpha=2^{1/n}$ with $n$ the number of subdivisions in an octave (parameter nbLayers when creating the MultiscaleFinder).
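Numerically, assuming for instance $n=3$ subdivisions per octave (the actual default of nbLayers may differ), the conversion factor in 2D is:

```python
import numpy as np

d, n = 2, 3                          # dimensionality, subdivisions per octave (assumed)
alpha = 2.0**(1.0/n)
factor = np.sqrt(2*d*np.log(alpha) / (1 - alpha**-2))
# a blob found at scale s then has radius R = factor * s
print(round(factor, 3))  # → 1.58
```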

MultiscaleFinder actually does this conversion when returning its result.

But what if particles are close together? Then their respective spots overlap and the scale is only indirectly related to the size. We need to take into account the distance between particles and their relative brightnesses. The first step is to revert to scales.

s = track.radius2sigma(centers[:,-2], dim=2)


Then we have to find the pairs of particles whose spots probably overlap. The submodule colloids.particles has a function to do this. On a large system, this is the most costly function.

from colloids.particles import get_bonds
bonds, dists = get_bonds(positions=centers[:,:-2], radii=centers[:,-2], maxdist=3.0)


Now we have all the ingredients to run the reconstruction.

Note: You need boost libraries installed on your system to proceed.

sudo apt-get install libboost-all-dev

First, we obtain the brightness of each particle from the value of the difference of Gaussians at the position and scale of every particle, i.e. the last column of centers.

brights1 = track.solve_intensities(s, bonds, dists, centers[:,-1])


Second, we use these brightnesses to obtain the new radii.

radii1 = track.global_rescale_intensity(s, bonds, dists, brights1)
draw_circles(centers[:,0], centers[:,1], radii1, facecolor='none', edgecolor='g')
plt.imshow(im, 'hot');


histR0, bins = np.histogram(centers[:,-2], bins=np.arange(30))
histR1, bins = np.histogram(radii1, bins=np.arange(30))
plt.step(bins[:-1], histR0, label='supposed dilute');
plt.step(bins[:-1], histR1, label='corrected for overlap');
plt.ylabel('count');
plt.legend(loc='upper right');


It is always possible to iterate first and second steps above.

brights2 = track.solve_intensities(s, bonds, dists, centers[:,-1], R0=radii1)
radii2 = track.global_rescale_intensity(s, bonds, dists, brights2)
draw_circles(centers[:,0], centers[:,1], radii2, facecolor='none', edgecolor='g')
plt.imshow(im, 'hot');


histR2, bins = np.histogram(radii2, bins=np.arange(30))
plt.step(bins[:-1], histR0, label='supposed dilute');
plt.step(bins[:-1], histR1, label='corrected for overlap');
plt.step(bins[:-1], histR2, label='second iteration');
plt.ylabel('count');
plt.legend(loc='upper right');


But most of the time it does not help much.

## Thursday, October 9, 2014

### EnDirectDuLabo = RealScientists.fr

Two weeks ago I ran the Twitter account @EnDirectDuLabo. For those who know RealScientists, it is the French version. For those who don't: it is researchers taking turns at the helm of a Twitter account, telling what the everyday life of a real scientist is like.

Somehow this is the same ideal I intended for this blog. However, the Twitter format makes it easier. In 140 characters you can't explain your research in enough depth to bore your reader. You have to be clear with your ideas, one at a time. Punchlines and soundbites.

If you read French, here is the storify of my week:

## Thursday, August 21, 2014

### Fracture in yogurt: publication process and press releases

On the research blog Meta Rabbit, the publication process is described as a race where the distance to run is decided randomly partway through the race. Here is a typical example:

January 31st
Concluding more than a year of research, we submitted a paper to Physical Review Letters. Hopefully, the editor finds it interesting enough to send it to 2 referees (other anonymous researchers knowledgeable about our area of research).
April 1st
The referees answer. One is positive about our paper, but the second one has some doubts and questions. We spend the next 3 weeks doing complementary experiments to address those concerns.
April 23rd
We send our answer. It is processed by the editor and sent back to the referees
June 2nd
The referees answer again. We satisfied #2, but #1 is no longer satisfied. Indeed, we did not answer him properly in the first round; we were overconfident because he had displayed a positive opinion.
June 7th
We send back a detailed response (11 pages when the paper is 4 pages). We must not overlook any detail now.
June 16th
We receive a mail from the editor: our paper is accepted!

In the next few weeks, the editorial team formats the paper, we review and correct the proofs, and the paper is finally published on the 8th of July. But this is not the end: we have to advertise our paper. First, we contact the physics institute of CNRS (French National Centre for Scientific Research). Will they make a press release? A press release is important for advertising scientific results to the general public. Academic paper writing is the art of making something you are excited about boring and rigorous. A press release is the reverse process. We describe our research with as little jargon as possible to the scientific journalist at CNRS, he uses his writing skills to make it more appealing, we fix some factual mistakes (sexy does not mean wrong) ... and here we are, we have a press release (in French).

And then, we were lucky. One of us tweets his joy at the acceptance, and Philip Ball, a science journalist and columnist at Nature Materials, spots it.

The tweet is a bit irreverent, but well, we are dealing with yoghurt, we have to get over it. And Philip Ball contacts us because he wants to write about our paper! So, again, back and forth for factual corrections and at the end he writes a beautiful column about our paper. Go read it, it's yummy.

## Thursday, May 15, 2014

### Strange patterns

For more than a year, I have been trying to understand this:

Or more dynamically, this

This is no leopard or fish-skin pattern, nor salad-dressing-like demixing. It appears when I make a yogurt in a very thin container, actually in a gap a tenth of a millimetre thick, between two glass plates. Important detail: the glass is made extremely slippery; nothing appears with ordinary sticky glass.

The pattern is random but has a well-defined size. I can change the thickness of the cell (from 50 microns to one millimetre) and the pattern scales up accordingly.

By the way, there is nothing living in my yogurt, no bacteria. The gelation is induced by slow and homogeneous acidification by chemicals. You can see the process of gel formation at high magnification in the video below. At first, things move around: this is Brownian motion. Then the gel forms and everything stops. Then the pattern appears.

It took me a couple of months to understand what these patterns are. I'll let you guess until next time.

## Sunday, April 27, 2014

### Testing rupture models

How do materials break? This question is the starting point of my present post-doc in France. People have good ideas about how metals and crystals break, because their structures have been known for a century or two. But when you turn your attention to messier materials, like composites, concrete, paper, gels, etc., the answers are far less convincing.

Popular (because simple) models of fracture are called fibre bundle models. Imagine a cotton yarn, or a rope made of many interlocked fibres. If you pull on it hard enough and for long enough, a fibre will break. The pulling strength is then shared by all the other fibres, until another one breaks. The more broken fibres, the fewer remaining fibres, and the more often a new fibre breaks. Sounds like a scene from an action movie, doesn't it?

A first simple model considers that the fibres are independent of each other and that each has a slightly different breaking threshold. The only way fibres interact is by sharing the load evenly. This model is good at describing how paper breaks. It predicts that above a given load threshold, the material will always break. The harder you pull, the faster it breaks; conversely, if you pull just a minute amount above the threshold, it will take a very long time to break. For most of the process, you may not even notice that you are damaging your rope. That's why in the movies it's always the last guy crossing the rope bridge who falls.
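This equal-load-sharing picture is easy to simulate. Here is a minimal sketch, a toy implementation of mine with uniformly distributed thresholds (the bundle size and loads are illustrative choices, not values from the literature):

```python
import numpy as np

# Equal-load-sharing fibre bundle model: the total load is shared
# evenly among the intact fibres; fibres whose threshold is exceeded
# break, which raises the per-fibre load, possibly causing a cascade.
rng = np.random.RandomState(42)
N = 10000
thresholds = rng.uniform(0.0, 1.0, N)   # each fibre's breaking threshold

def breaks_completely(total_load):
    """Return True if the bundle eventually breaks under this constant load."""
    intact = np.ones(N, dtype=bool)
    while True:
        per_fibre = total_load / intact.sum()       # load shared evenly
        failing = intact & (thresholds < per_fibre)
        if not failing.any():
            return False                            # stable: the bundle holds
        intact &= ~failing
        if not intact.any():
            return True                             # cascade broke every fibre

# With uniform thresholds the critical load is N/4: below it the cascade
# stops, above it failure runs away to complete rupture.
print(breaks_completely(0.20 * N))   # → False
print(breaks_completely(0.30 * N))   # → True
```

The sharp threshold in this toy model mirrors the prediction quoted above: under the critical load the bundle stabilises with some fibres broken, above it the material always fails.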

A more refined version of the model says that when a fibre breaks, the load it bore is split among its neighbouring fibres. It makes sense when you consider an actual rope, made of smaller twines, made of even smaller threads, etc., down to the individual fibres. What this new model predicts is the absence of a load threshold. If you pull on your material long enough, it will break, whatever your strength. Of course, the weaker you are, the longer you will have to wait. Another interesting prediction is the appearance of fractures, regions where all the fibres are broken. Can this explain the fractures and cracks observed in many materials? That is the question with which I started my own research.

I selected a system that indeed looks like a network of fibres:

This is a gel obtained by slow acidification of a solution of milk protein (sodium caseinate), or in plain English: a yoghurt. The picture above was made by electron microscopy, and the white fibres you see are one hundred times thinner than a human hair. The black voids are occupied by water that flows throughout the network.

Unstirred yoghurt behaves very much like a hard solid, except that it is much weaker. You don't need a powerful machine to fracture it; a spoon is enough.

And once it is broken, the cracks won't heal. It is very different from a liquid or from pastes, which you can stir to erase the memory of the system. If you stir a yoghurt, you just destroy it further.

Now, I don't trust myself to apply a constant force with a spoon, especially if the experiment is set to run for a week. So I make my yoghurt between the two concentric cylinders of a machine called a rheometer, which is able to rotate the inner cylinder with a constant force until the next power cut. Step by step: I pour the protein solution into the rheometer at rest, the gel forms during 17 hours as the solution acidifies, and only then do I apply the force.

As you can see in the picture above, after some time fractures appear in the yoghurt and grow vertically. By the way, the picture was taken by a standard webcam; this is not rocket science. The picture shows only the bottom half of the yoghurt: fractures also grow from the top. When bottom and top fractures meet, the gel breaks completely.

I repeated this experiment many times, varying the applied load and thus the time needed to break the yoghurt completely. At high load, I am limited by the recording frequency of the rheometer, so my minimal breaking time is slightly less than a second. At low load, patience is the limiting factor, as well as evaporation and mould growth. Still, I have an experiment that lasted 11 days. That is a factor of 10 million between the longest and shortest experiments, enough to prove that there is no meaningful load threshold. That is a first win for the model.

### Edit March 29th, 2014

A video of the rupture. Nothing seems to happen, and then ...