Thursday, November 24, 2016

Polymers in procession

Almost three years ago I became involved in an interdisciplinary project between physicists and chemists. The chemists were specialists in organic chemistry, that is to say, they make complicated molecules, mostly based on carbon. The physicists were specialists in the mechanics of soft materials, that is to say, in how matter in between fluid and solid deforms, breaks or flows.

Making these two groups of people work together was very difficult. Interdisciplinary science is hard. Scientists spend years or decades understanding a narrow field of knowledge well enough to make it progress. So when you put together scientists from different fields, they do not have the same vocabulary, the same methods, the same questions or the same expectations. When we physicists said, matter-of-factly, "We impose a constant shear stress", the chemists pictured us as
Physicist as seen by chemists
but we only meant "we apply a sideways constant force on the sample". And of course when the chemists told us "This counter-ion is more chaotropic" we thought they were doing something like this:
Chemist as seen by physicists

To understand what the chemists meant I had to dig up my chemistry classes from about 15 years ago. Fortunately I quite liked chemistry as an undergraduate. I even defined myself as a chemical physicist, a much-needed missing transmission belt. So I was able to play the role of translator between the two groups. What the chemists meant was that the ion strongly disturbs the water around itself.

In the end, we were able to work together. We made mechanical experiments on samples that were about a millimetre thick and we understood the results at the level of atoms. Conversely, we used this chemical understanding to design the mechanical properties of our samples. Let me explain what we did, in terms that both my physicist and my chemist colleagues are able to understand.

It all started with a new synthesis method used by the chemists. Polymers are long molecules where the same unit is repeated many times, a bit like a caterpillar. To make their polymers the chemists started from a head and added about 70 body segments one by one. Then they added the chemical groups they wanted to each unit, as if they were adding legs to every body segment.


Here they managed to have a head with two negative charges and each body segment with one positive charge. In water the polymer schematically looks like this:

A polymer where most of the counterions are far away.
The minus charges floating around are counter-ions. They are here to ensure that matter is neutral on average. Some counter-ions float far from the polymer, others stay very close to the body segment they neutralise. We say that they are "condensed" on the polymer. The more condensed counter-ions there are, the less charged the polymer looks from far away.

If you put this short linear polymer in pure water, it forms a very soft gel. Interestingly, you can inject it with a syringe: the gel is solid at rest, flows through the needle, and is solid again on the other side. Quite nice if you want to use a gel as a scaffold for cell growth in vivo. Normally, a gel is a soft solid that breaks irreversibly, so you need a surgical procedure to put it in the body. With our type of gel a needle is enough.

Unfortunately for applications, we found that our gel was very easily disturbed. If instead of pure water we used salty water, the gel collapsed. If we started from a neutral head instead of the head bearing two minuses, no gel could form: our mechanical measurements found no difference between the polymer solution and water alone. So we concluded that the gel was formed by charge interactions: minus head sticking to plus body. If there is no minus on the head, no gel. If salt ions, themselves charged, get in the way of the electrostatic attraction, no gel.

Also, something was quite strange: the gel was too soft. Like a thousand times too soft for such short polymers. A polymer gel is a 3D network whose edges are polymer chains and whose nodes are where chains meet, also called cross-links. The more meeting points you have, the stiffer the gel. In other words, if the chains between cross-links are short, with few body segments, the gel is stiff. We measured the elasticity of the gel, and it was so soft that we predicted something like 60 000 body segments between meeting points! That is enough to make 880 of our short polymers!
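
For the curious, here is a back-of-the-envelope version of that estimate, a rough sketch based on the textbook rubber-elasticity formula (not necessarily the exact model used in the paper): the shear modulus of a gel is roughly \(G \approx \nu k_B T\), where \(\nu\) is the number density of chains between cross-links. At fixed monomer concentration \(c\) we have \(\nu \approx c/N\), with \(N\) the number of body segments between cross-links, so \(N \approx c\, k_B T / G\): a gel a thousand times softer than expected points to chains a thousand times longer between meeting points.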

Pine processionary caterpillars (source)
Like the caterpillars in the picture above, our short polymers walk in single file.


So, we physicists were like "wow! that's strange" while the chemists did not care much. Actually, the chemists were busy playing with their synthesis method to change the legs of the caterpillar.

Two ways to change the legs of the caterpillars: the shape of the leg corresponds to the nature of the cation, either imidazolium (aromatic ring) or pyrrolidinium (all single bonds). The colour corresponds to the counter-ion: F-, Cl-, Br- or I-.
For some compositions, the polymer was basically insoluble. No way to make a gel with it. To understand that, we have to remember that polymers are usually not very happy in water. What allows them to dissolve is their charges. If a polymer carries a lot of identical charges (here pluses), these charges repel each other, the polymer stretches and accepts a lot of contact with water. If a polymer carries few charges, it just collapses on itself to minimise the contact with water. You may remember from above that the more counter-ions are condensed, the fewer charges the polymer carries, and the less soluble it is.

A polymer with complete counterion condensation. Probably insoluble.


And indeed, we observed insolubility for the three compositions where the interaction between the repeated cation and its counterion was the strongest.

At the opposite extreme, when the repeated cation and its counterion interacted weakly, we observed very strong gels, indicating shorter processions. Actually, for two compositions we observed gels exactly 880 times stronger than the original one, meaning that the processions were just one polymer chain long.

So far so good, but all of this was learned by gently probing the softness of the gels, at deformations so low that they were not flowing but behaving as solids. To understand what is going on at larger strains, we have to have a look at the internal structure of a procession.

The procession at different scales.
As we said before, the monomers hate being in contact with water, so their preferred shape for the procession is a sphere. However, the plus charges prefer to be as far away from each other as possible, so their preferred shape for the procession is a rod. It happens that the water-hating monomers win on small scales and the estranged charges win on large scales. So there is a scale D where the two influences balance. If the procession is just long enough to coil into a sphere of diameter D, then the charges do not complain too much. But if the procession is longer, the charges refuse to make a larger sphere, and instead the procession grows into a cylinder of diameter D.

This cylinder does not grow in a straight line indefinitely. On large enough scales, counterions screen the charges from each other and the procession winds its way around.

Two charges in the mood for a fight, screened from each other by counterions.
So when we pushed harder on the gels to make them flow, we found two threshold deformations. The first threshold corresponds to the point where the large-scale winding path of the processions becomes fully extended. On small scales the procession is still collapsed to avoid contact with water. To stretch it further, more monomers have to come into contact with water; this costs much more energy and is visible in the mechanical measurements. The second threshold corresponds to the breaking of the head-to-body bonds, and that is when the gel flows.

Each circle is a blob of monomers collapsed to avoid contact with water.


For all compositions, the first threshold deformation is very small, telling us that the processions are almost linear at rest. It implies that the amount of charge condensation is directly related to the softness of the gel. So we are able to estimate the charge condensation, which varies from 10 free counterions per polymer (lots of charges) to one free counterion every 9 polymers in the procession (very few charges).

By contrast, the second threshold varies widely between compositions, from 10% to 800%. Some gels flow immediately, others need to be stretched to eight times their initial size before flowing. This indicates that the head-to-body bonds are incredibly strong. Usually, in water, ionic bonds are about 100 times weaker than chemical (covalent) bonds. For our least charged polymer we measure head-to-body bonds that are within 20% of the strength of a carbon-carbon bond!

Our explanation is that with few charges the procession collapses around the head-to-body bond to avoid water. So the environment just around the bond is not water, it is hydrophobic polymer. Such an environment is like an oil, where charges are few but interact very strongly. Indeed, in oils other people have measured ionic bonds that strong. Actually this strategy is used by life itself: proteins can have just one charge in the middle of a large hydrophobic patch. When two such proteins with opposite charges meet, the two hydrophobic patches stick together, expelling water from the direct environment of the charges, and the ionic bond becomes very strong. This forms a lock-and-key mechanism that helps, for example, our immune system recognise and block pathogens.


Time to wrap up, thanks for reading down to here. In the end, we did very standard mechanical measurements at the millimetre scale to extract information at the chemical level. Our model tells us what to change to tune the mechanical properties of our gel by a factor of a thousand. Thanks to a referee, we also did a back-of-the-envelope calculation to see what these gels would give in a physiological environment, and we have a good candidate to inject into a living body. If anybody is interested on the biology side...

Reference:

Srour H, et al. Ion pairing controls rheological properties of “processionary” polyelectrolyte hydrogels. Soft Matter, 2016. arXiv:1611.07721.

Tuesday, February 9, 2016

Layered cake and floating crystals

Mille crepe. By Laitr Keiows - Own work, CC BY-SA 3.0,


The soil we stand on is like a mille crepe, a layered cake made by the slow deposition of solid matter on an ocean bottom, each era adding a layer of a different nature. The process that makes particles even slightly denser than water settle down is called sedimentation.

A particle is pulled down by gravity and slowed down by the viscosity of the solvent. It also gets kicked randomly by the atoms around it. For a large and heavy particle like a cannonball, this random motion is negligible and the particle sediments to the bottom. For a small and almost buoyant particle like a protein, this random motion dominates and the particle diffuses in every direction. In between, we have the so-called sedimentation-diffusion equilibrium. Particles settle down, but also diffuse up, and we observe that the concentration of particles changes with height. At the bottom we count more particles than at the top. This is what we call a density profile.

Equilibrium density profiles are a great tool for physicists. By measuring them, you can learn how your particles behave as a system. For example, if you observe a density that decreases exponentially with altitude, you know that the suspension behaves like an "ideal gas", which means that the particles almost do not interact. That is more or less the density profile of the gases in the atmosphere.
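
For the record, this exponential profile is the colloidal version of the barometric law, with the buoyant mass of a particle playing the role of the molecular mass (a textbook result, given here only as a reminder): \( n(z) = n(0)\, e^{-z/\ell_g} \) with \( \ell_g = k_B T / (\Delta\rho\, V g) \), where \(n(z)\) is the number of particles per unit volume at altitude \(z\), \(\Delta\rho\) the density difference between particle and solvent, \(V\) the volume of a particle, and \(\ell_g\) the gravitational length over which the concentration drops by a factor \(e\).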

If you observe a sudden jump in a density profile, it means that you have an interface between two phases. For example between a gas of particles and a liquid of particles.

A colloidal gas-liquid interface. Picture by Paddy Royall.
If your particles are all the same size, you can even observe two consecutive jumps: from gas to liquid, and then from liquid to crystal, where the particles are neatly aligned. Particles of different sizes would jumble the alignment. In general, it is quite difficult to make particles of different sizes crystallize.



There are several ways to get to this triple coexistence situation. One possibility is that the gas and the liquid separate first, and then the crystal forms from the liquid. A second possibility is that crystals condense from the liquid and settle down in sufficient quantities; only then does the liquid evaporate to form a gas layer on top. A third possibility is that crystals form at the same time as gas bubbles, racing to the bottom and the top respectively. Only when the gas and crystal layers sit on top of each other does some of the crystal melt to form a liquid layer in between.

My contribution was to add some more complexity to the first scenario. What if I add a few large particles (green) to the suspension of small particles (red)?

At first, nothing changes: on top, a gas with almost no particles, and at the bottom, a mixture of many small and a few large particles. If there were only small particles, crystals would form at the bottom. But the large particles get in the way and no crystallisation occurs there.

Meanwhile, the large particles settle faster than the small ones, so at the top of the liquid we soon have a layer devoid of large particles. Only small particles? Easy to make crystals, then (big red blobs in the video below). Crystals are large and compact, and fall even faster than the large particles. They outpace them and dive into the dense mixture of large and small particles. Splash!



And here we have something unexpected: the crystals float! I mean, yes, ice floats on water, we are accustomed to that. But water is an exception. Solid metal sinks into molten metal.


Actually, we demonstrated that the mixture of small and large particles can get so dense without crystallizing that crystals made only of small particles can float in it.

The crystals are reasonably happy in there, not melting but not growing either. Since crystals keep dropping from the top, they end up filling pretty much the whole pool (where the large particles are) and even piling up above the level of the large particles.

Now the crystals that sit above the level of the pool have no large particles to prevent their growth, so they grow and make a dense "ice pack" on top of the pool.

Final state of the limit between floating crystals (below) and the ice pack (above). This is the same place as the video above.


In the end, you get a pretty layered cake: gas on top, then a layer of liquid, then the ice pack, then the crystals made of small particles floating in the pool of large and small particles.

Details of the full layered sediment. Top: gas-liquid interface. Middle: ice pack. Bottom: crystals made of small particles in a small+large amorphous matrix.


Leocmach, M., Royall, C. P., & Tanaka, H. (2010). Novel zone formation due to interplay between sedimentation and phase ordering. EPL (Europhysics Letters), 89(3), 38006. doi:10.1209/0295-5075/89/38006
http://arxiv.org/abs/1402.0315

Thursday, December 3, 2015

Virtual lab notebook in IPython

I use IPython a lot, either as a command prompt or in a Jupyter notebook. It is great for analysing data on the go. During such an analysis you may generate files: some intermediate data, or figures.

Sometimes, weeks, months or years later, you would like to remember exactly what you did to generate such a file. This matters for reproducibility, to check that you followed the right method, or simply to reuse a handy bit of code.

When you have copied the code into a script file, easy. When you have organised your notebook properly and never deleted the interesting cell, piece of cake. When it was yesterday and you can press the up key N times to find the right lines in the history, painful but doable. But there is no guarantee, no rigorous method here. One day you will think that writing the script is useless, one day you will delete the wrong cell in your notebook, and months later you will have to come back to this analysis and the history will be gone.

It is really akin to the lab notebook, the one in which you note the date, the temperature, the sample name, the procedure you followed, the results of the measurements and all the qualitative observations. Some of it does not seem to matter much at the moment, but if you forget to write it down, you will never be able to reproduce your experiment.

How to make IPython generate this virtual notebook for you?

You want a file per IPython session with all the commands you type. You can achieve this manually by typing at the beginning of each session

%logstart mylogfile append
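
As an aside, %logstart accepts a few options that are worth knowing about; for example -o also records the outputs and -t adds a timestamp to each logged line (see %logstart? for the full list):

%logstart -o -t mylogfile append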


But of course, you will forget.

We have to make this automatic. This is made possible by the profile configuration of IPython. If you have never created a profile for IPython, type in a terminal

ipython profile create

This will generate a default profile in HOME/.ipython/profile_default

Edit the file HOME/.ipython/profile_default/ipython_config.py and look for the line
#c.InteractiveShell.logappend = ''

Instead, write the following lines
import os
from time import strftime

# store the log files in ~/.ipython
ldir = os.path.join(os.path.expanduser("~"), '.ipython')
# one file per session, named after the date and time the session starts
filename = os.path.join(ldir, strftime('%Y-%m-%d_%H-%M') + ".py")
notnew = os.path.exists(filename)
with open(filename, 'a') as file_handle:
    if notnew:
        # the file already exists: just add a separator
        file_handle.write("# =================================\n")
    else:
        # new file: write a small header
        file_handle.write("#!/usr/bin/env python\n"
                          "# %s.py\n"
                          "# IPython automatic logging file\n" %
                          strftime('%Y-%m-%d'))
    # time stamp the start of the session
    file_handle.write("# %s\n# =================================\n" %
                      strftime('%H:%M'))

# Start logging to the given file in append mode. Use `logfile` to specify a log
# file to **overwrite** logs to.
c.InteractiveShell.logappend = filename

And here you are: each time you open an IPython session, either console or notebook, a file is created in HOME/.ipython. The name of this file contains the date and time of the opening of the session (format YYYY-MM-DD_hh-mm). Everything you type during the session is written automatically to this file.

Tuesday, September 8, 2015

LaTeX to MS Word

The good news first: our paper on wrinkling yoghurt was accepted. I won't tell you where since it is under embargo. If you want to know what it means to have a paper accepted, and what sort of work is involved (disclaimer: comparable to the amount of work needed for the research itself), have a look at this:



But last Friday night, 11pm, we received a mail from the editorial office saying something like
Our typesetting department must have Word document files of your paper [...].
Please provide these files as Word documents as quickly as possible and within the next 8 hours to keep your paper on schedule for publication.
Well, beyond the short notice and the looming weekend, there was a big problem: our paper was not a Word file, and the conversion is anything but easy. Let me explain.

Academic paper formatting


But there is one last step not described here: formatting. Of course, you can open any text editor (notepad, your webmail) and type word after word to write the text of your article. But then, you would miss:
  1. figures
  2. equations 
  3. cross-references
Figures are the graphs, the pictures and the drawings. You can insert them if you switch to any modern word processor, LibreOffice or Microsoft Word for example. Getting them beautiful and right is a work in itself.

Equations are a nightmare with Word. Just writing y ≈ αx⁷ sends you digging through 3-4 different menus. But you can do it. Markup languages, like the HTML of this page, make it easier once you know the syntax. LaTeX is a markup language made for equations. The above equation is just written $y \approx \alpha x^7$. LaTeX syntax for equations has become a de facto standard, so other languages, like Markdown, let you write the math parts in LaTeX. Plugins in LibreOffice also do that.

Cross-referencing means that in your text you can write "see Figure 3", then swap figures 3 and 4, and have "see Figure 4" written automatically in your text. In practice you do not write the figure number, but insert a reference that the program converts into the figure number. You can do that in Word through one menu and a rather unhelpful dialogue box. In LaTeX it is just "Figure \ref{fig:velocitygraph}". Actually, in the final document, you even get a hyperlink to the figure. Same with chapters, sections and equations.

What is great with LaTeX is that your bibliography can be generated the same way. You just insert "was discovered recently \cite{Leocmach2015}" and the paper Leocmach2015 gets inserted into your bibliography, formatted properly and consistently. In the final document you would get "was discovered recently [17]" with a hyperlink to the 17th item in your bibliography. Of course, there are ways to do that with plugins in Word or LibreOffice.

LaTeX is also nice because you can specify what you mean and let the program format it for you. For example, when I want to write "10 µm" what I mean is ten micrometres, not "one zero space greek letter mu m", so in LaTeX I write "\SI{10}{\micro\metre}" and it will generate a "10", followed by an unbreakable space (you don't want the number and the unit on different lines or pages), followed by a micro sign µ (different from the μ of some fonts) and an "m".

By the way, LaTeX is open source and free, no need for a licence. The "final document" is a PDF that anybody can read. Actually, until Friday night the Editors and Referees of our paper had only seen and judged the PDF. Nobody cared about formatting (even if it helps to show a clean-looking paper rather than a messy Word file).

So LaTeX is made for academic paper writing, and heavily used in Math, Physics and other communities. It would be unthinkable for a journal specialised in Physics to refuse a LaTeX-formatted paper. However, in Biology, Word is the norm. It must be a pain for the typesetting departments, who have to translate the Word format into something more usable. Broad-audience journals often accept both, but not the journal we submitted to.

Latex to Word conversion

I spent most of my Saturday thinking about a reliable and reusable way to convert my paper. This won't be the last time I am asked to provide a Word file. I received advice on Twitter, tried various solutions, all unsatisfactory, and in the end I settled on this method:
  1. Dumb down the LaTeX layout. I was using a two-column layout with figures within the text; I switched to a single-column layout with one figure per page at the end of the document.
  2. Add \usepackage{times} to your LaTeX preamble in order to use Word default font Times New Roman.
  3. Let LaTeX make the PDF. All the commands, custom packages, etc. are taken into account. Cross references and bibliography are also right.
  4. Convert PDF into Word. @fxcouder did it for me using Adobe. There are probably open source ways of doing it. Simple equations were preserved, but as soon as fractions were involved the format was messy.
  5. Clean the Word file. No way around it: you need a real Microsoft Word with a licence. Work in 97/2000 compatibility mode. You need to show formatting marks, then track down and delete the section breaks to obtain a single flow of text. Also fix the line breaks and hyphenation so that each paragraph is in one piece. All messy equations must be cleaned up, leaving only the numbering for the numbered ones.
  6. Retype the equations. I did this by hand in Word since I had few equations to retype.
  7. Copy the whole text and paste it into the template provided by the editor.
Rather than retyping the equations, another way would be to process the original tex document into an ODT (LibreOffice) document using pandoc.
pandoc -s article.tex -o article.odt
Pandoc messes up cross-references, citations and anything custom in your LaTeX code, so do not use it to convert the main text. However, it gets the equations right. From LibreOffice you can then export the equations and import them back into the Word document.
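
For what it is worth, pandoc can also produce a Word document directly (the same caveats about cross-references and citations presumably apply, but it may save the ODT round trip):

pandoc -s article.tex -o article.docx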


So, why worry? It takes you only half a day for a 15-page paper, instead of the 8 hours requested by the editorial office.

Wednesday, July 15, 2015

Food science

At the beginning of June, I had the pleasure to attend my first conference about food science: The 7th International Symposium on Food Rheology and Structure.

I love food, my family loves food. With my sisters and brother we talk about food almost every time we meet. My aunts and uncles do the same. We love making food for each other and for our guests. My wife has the same kind of sensibility.

I love food, I love to cook. I love to improvise a new dish. I have dozens of spices in my kitchen, and I try to master their use, alone or in combination. I became versed in some recipes, like lasagne bolognese, but I take on new challenges as often as possible. I often knead pasta or udon, and I made chocolate éclairs when they were impossible to find in Tokyo.

I love food, I love to understand it. My scientist brain can't be turned off when I cook. Mayonnaise is an emulsion, flour a granular material. When I make white sauce, the size of the eddies generated by my spoon is a visual indication of the Reynolds number and thus the viscosity of the sauce undergoing a sol-gel transition. Boiling water is a thermostat at 100°C.

I love food, I study it. My first independent project as an undergrad was about mayonnaise. Then I learned that soft matter science was a thing and went on to a PhD studying rather inedible soft materials. Freshly arrived in France as a postdoc, my new boss asked me if I wanted to study "waxy crude oil" or yoghurt. Of course I chose the latter. I studied it as a physicist for 3 years, and now I was finally able to present my results to food scientists.

I love food, but I am not a food scientist. I am not trying to formulate a new yoghurt. I do not make the link between the mouth-feel adjectives rated by a panel of trained consumers and mechanical measurements. I am more interested in the physics that is reachable through the study of food systems.

I love food, and it was a pleasure to meet the food science community. I discovered very interesting systems, I heard interesting questions being raised, and I received nice feedback about my contribution (see below). I even met a reader of this blog, hello Dilek!



I love food, and I will meet this community again. I have been invited to the Journées Scientifiques sur le thème Matière Molle pour la Science des Aliments, a conference to unite the French food science community. It will be held on October 28-29 in Montpellier.




Tuesday, May 12, 2015

CNRS position

After months of effort, project writing and presentation rehearsals, I was selected to be hired as a CNRS researcher!



This permanent position starts in October at the Institut Lumière et Matière (Light and Matter Institute) at Lyon University. I am so relieved to obtain a stable research position! Postdoc life is not bad, but it lacks stability.



I am looking forward to joining the new lab, but in the meantime I have stuff to do, especially now that I can focus back on science.
  • get the paper about the wrinkling yoghurt accepted somewhere
  • finish a paper that has been pending since I left Tokyo two and a half years ago
  • get rheological measurements done for the collaboration with next door chemists (can't talk about it, patent application pending)
  • finalise minute understanding of yoghurt fracture and synaeresis by new confocal experiments
Because after that I will have to start my new project...

Tuesday, March 3, 2015

Multiscale localisation in 3D and confocal images deconvolution: Howto



Following last week's impulse, here is a tutorial about the subtleties of multiscale particle localisation from confocal 3D images.


For reference, this has been published as

A novel particle tracking method with individual particle size measurement and its application to ordering in glassy hard sphere colloids.
Leocmach, M., & Tanaka, H.
Soft Matter (2013), 9, 1447–1457.
doi:10.1039/c2sm27107a
ArXiV
The corresponding code is available at Sourceforge, including the IPython notebook corresponding to the present tutorial.



Particle localisation in a confocal image


First, import necessary library to load and display images

import numpy as np
from scipy.ndimage import gaussian_filter
from matplotlib import pyplot as plt
from colloids import track
%matplotlib inline


We also need to define functions to quickly display tracking results.

def draw_circles(xs, ys, rs, **kwargs):
    for x,y,r in zip(xs,ys,rs):
        circle = plt.Circle((x,y), radius=r, **kwargs)
        plt.gca().add_patch(circle)
        
def display_cuts(imf, centers, X=30, Y=25, Z=30):
    """Draw three orthogonal cuts with corresponding centers"""
    plt.subplot(1,3,1);
    draw_circles(centers[:,0], centers[:,1], centers[:,-2], facecolor='none', edgecolor='g')
    plt.imshow(imf[Z], 'hot',vmin=0,vmax=255);
    plt.subplot(1,3,2);
    draw_circles(centers[:,0], centers[:,2], centers[:,-2], facecolor='none', edgecolor='g')
    plt.imshow(imf[:,Y], 'hot',vmin=0,vmax=255);
    plt.subplot(1,3,3);
    draw_circles(centers[:,1], centers[:,2], centers[:,-2], facecolor='none', edgecolor='g')
    plt.imshow(imf[:,:,X], 'hot',vmin=0,vmax=255);


Raw confocal image


Load the test data, which is only a detail of a full 3D stack. Display different cuts.

Note that raw confocal images are heavily distorted in the \(z\) direction. We actually have 4 particles in the image: two small ones on top of each other, one on the side, and a slightly larger one below and to the side.

imf = np.fromfile('../../../multiscale/test_input/Z_elong.raw', dtype=np.uint8).reshape((55,50,50))
plt.subplot(1,3,1).imshow(imf[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imf[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imf[:,:,30], 'hot',vmin=0,vmax=255);




Create a finder object of the same shape as the image. It works seamlessly with 3D images.

finder = track.MultiscaleBlobFinder(imf.shape, Octave0=False, nbOctaves=4)


Feed the image into the tracker with default tracking parameters. The output has five columns: \((x,y,z,r,i)\), where \(r\) is a first approximation of the radius of the particle (see below) and \(i\) is a measure of the brightness of the particle.

centers = finder(imf)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min()) 
display_cuts(imf, centers)

(2, 5)
smallest particle detected has a radius of 4.40672 px



This is a pretty bad job: only two particles are detected correctly. The top and side small particles are not detected.

Let us try without removing overlapping particles.

centers = finder(imf, removeOverlap=False)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min()) 
display_cuts(imf, centers)

(4, 5)
smallest particle detected has a radius of 4.40672 px





Now we have all the particles.

We have two effects at play here that fool the overlap removal:
  • The radii guessed by the algorithm are too large.
  • The particles aligned along \(z\) are localised too close together.

Both effects are due to the elongation in \(z\).

Simple deconvolution of a confocal image


The image \(y\) acquired by a microscope can be expressed as

\(y = x \star h + \epsilon,\)

where \(\star\) is the convolution operator, \(x\) is the perfect image, \(h\) is the point spread function (PSF) of the microscope and \(\epsilon\) is the noise, independent of both \(x\) and \(h\). The process of estimating \(x\) from \(y\) and some theoretical or measured expression of \(h\) is called deconvolution. Deconvolution in the presence of noise is a difficult problem. Fortunately, here we do not need to reconstruct the original image, but only our first Gaussian-blurred version of it. Indeed, after a reasonable amount of blur in three dimensions, the noise can be neglected and we thus obtain:

\(y_0 \approx G_0 \star h,\)

or in Fourier space

\(\mathcal{F}[y_0] = \mathcal{F}[G_0] \times \mathcal{F}[h].\)

Once \(\mathcal{F}[h]\) is known the deconvolution reduces to a simple division in Fourier space.
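To make this concrete, here is a minimal numpy sketch of deconvolution as a division in Fourier space. This is a toy illustration only, not the actual implementation of colloids.track.deconvolve; the inv_kernel_ft argument is assumed to be \(1/\mathcal{F}[h]\) sampled on the same grid as the image.

import numpy as np

def naive_deconvolve(y0, inv_kernel_ft):
    """Toy deconvolution: multiply the Fourier transform of the blurred
    image y0 by 1/F[h], then go back to real space."""
    ft = np.fft.fftn(y0)              # F[y0] = F[G0] x F[h]
    ft *= inv_kernel_ft               # recover F[G0]
    return np.real(np.fft.ifftn(ft))  # back to real space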

Measuring the kernel


\(\mathcal{F}[h]\) should be measured in an isotropic system and in the same conditions (magnification, number of pixels, indices of refraction, etc.) as the experiment of interest.

Since a 3D image large enough to be isotropic is too big to share in an example, I just use the image of interest. Don't do this at home.

imiso = np.fromfile('../../../multiscale/test_input/Z_elong.raw', dtype=np.uint8).reshape((55,50,50))
kernel = track.get_deconv_kernel(imiso)
plt.plot(kernel);




The kernel variable here is actually \(1/\mathcal{F}[h]\)

Kernel test


What not to do: deconvolve the unblurred image.

imd = track.deconvolve(imf, kernel)
plt.subplot(1,3,1).imshow(imd[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imd[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imd[:,:,30], 'hot',vmin=0,vmax=255);




Noisy, isn't it? Yet we have saturated the color scale between 0 and 255; otherwise the artifacts would be even more apparent.

What the program will actually do: deconvolve the blurred image.

imb = gaussian_filter(imf.astype(float), 1.6)
plt.subplot(1,3,1).imshow(imb[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imb[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imb[:,:,30], 'hot',vmin=0,vmax=255);




imbd = track.deconvolve(imb, kernel)
plt.subplot(1,3,1).imshow(imbd[30], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,2).imshow(imbd[:,25], 'hot',vmin=0,vmax=255);
plt.subplot(1,3,3).imshow(imbd[:,:,30], 'hot',vmin=0,vmax=255);




The way the noise is amplified by deconvolution is obvious when we compare the histograms of the 4 versions of the image.

for i, (l, im) in enumerate(zip(['raw', 'blurred', 'deconv', 'blur deconv'], [imf, imb, imd, imbd])):
    plt.plot(
             np.arange(-129,363,8), 
             np.histogram(im, np.arange(-129,371,8))[0]*10**i, 
             label=l+' (%d,%d)'%(im.min(), im.max()));
plt.xlim(-130,360);
plt.yscale('log');
plt.legend(loc='upper right');




Obviously negative values should be discarded, and will be discarded by the algorithm.

Localisation using deconvolved image


Without removing the overlapping particles, we now use the deconvolution kernel.

reload(track)
centers = finder(imf, removeOverlap=False, deconvKernel=kernel)
print centers.shape
print "smallest particle detected has a radius of %g px"%(centers[:,-2].min()) 
display_cuts(imf, centers)

(5, 5)
smallest particle detected has a radius of 3.79771 px




All particles are found, and they are actually not overlapping. However, we get a spurious detection, probably some noise that got amplified by the deconvolution. It is easily removed by a threshold in intensity.

print centers

[[ 16.31334384  44.07373885  21.50444345   3.79770755  -0.37019082]
 [ 26.79860906  22.0435863   21.89833819   3.89324706 -15.95536745]
 [ 26.92095001  21.83422163  32.72685299   3.85903411 -17.3271687 ]
 [ 26.11610631  31.30829898  36.11323148   4.33867807 -21.06731591]
 [ 31.77923381  26.78655827  44.05564076   4.03334069 -20.1763175 ]]


centers = centers[centers[:,-1]<-1]
display_cuts(imf, centers)




We can also display the results with respect to the blurred and deconvolved version of the image that we can find by introspection.

display_cuts(finder.octaves[1].layersG[0], centers)




From scales to sizes


This is pretty much like in 2D.

from colloids.particles import get_bonds
s = track.radius2sigma(centers[:,-2], dim=3)
bonds, dists = get_bonds(positions=centers[:,:-2], radii=centers[:,-2], maxdist=3.0)
brights1 = track.solve_intensities(s, bonds, dists, centers[:,-1])
radii1 = track.global_rescale_intensity(s, bonds, dists, brights1)
centers2 = np.copy(centers)
centers2[:,-2] = radii1
display_cuts(imf, centers2)