Segmentation Methods for FIB-SEM Images of Li-Ion Battery Cathodes

Better segmentation means more accurate results

The performance, reliability, and safety of lithium-ion batteries play an important role in many industrial and consumer applications. Detailed images of battery cathodes at the nanoscale can be obtained with focused ion beam scanning electron microscopy (FIB-SEM). In these images, active material and binder are easily differentiated, and even the composition of the grains can be investigated.

By their nature, FIB-SEM devices come with a disadvantage: if the scanned sample is not infiltrated before imaging, the slice-wise acquired images show not only the current slice but also areas located behind pores. Infiltrating samples is often avoided because it is time-consuming and may even alter the structure of the material.

For numerical simulations and geometrical analysis, accurate 3D images of the sample are required. The segmentation must therefore correctly differentiate the phases and precisely assign each area of a slice to the current foreground or to the background.

A further problem in the segmentation of FIB-SEM images arises from so-called curtaining artifacts, which are caused by cutting through materials of different hardness with the ion beam.

Classical segmentation methods (e.g., global thresholding or watershed-based methods) struggle to segment these kinds of images properly. Machine learning-based segmentation methods offer a more robust way of solving these problems, and here we compare three of them.
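
To illustrate why a simple global threshold fails on such data, the following minimal sketch applies Otsu's method to the gray values using scikit-image. The file name is a placeholder, not part of the original workflow:

```python
# Minimal sketch: a single global gray-value threshold on a FIB-SEM stack.
# The file name is hypothetical.
import numpy as np
from skimage import io, filters

volume = io.imread("cathode_fibsem_stack.tif")  # shape (Z, Y, X)

t = filters.threshold_otsu(volume)  # one global gray-value cutoff
foreground = volume > t

# In non-infiltrated samples, material visible *behind* pores has gray values
# similar to the current slice, so a pure intensity threshold assigns this
# shine-through to the foreground and cannot separate binder from grains.
```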

What do these results mean for GeoDict users?

Modern segmentation methods bring many advantages to the analysis of FIB-SEM images. In this comparison, we only employed image processing methods that can be carried out by non-experts. The main drawback of these methods is the need to manually label the training data; this step takes time and may introduce human error into the results.

Authors and application engineers

Andreas Grießer, M.Sc.

Senior Business Manager
for Image Processing and Image Analysis

Dr. Ilona Glatt

Senior Business Manager
for Batteries and Fuel Cells

Dr. Christian Wagner

Team Leader IT &
Senior Visualization Specialist

Robin White, PhD

Senior Technical Product Manager

Carl Zeiss Microscopy, LLC,
Business Sector Materials Science,
Dublin, CA, United States

Case 1: Segmentation using Boosted Tree

The first method we apply is Boosted Tree-based segmentation.

This method is very similar to the original trainable Weka segmentation [1]. For each pixel that has been labeled, a set of features is computed and fed into a classifier, in this case a Boosted Tree [2]. Once the classifier is trained, it can be applied to all voxels of the scan to obtain the complete segmentation.

The selection of the features can be quite critical for obtaining a good segmentation, and finding a suitable feature set may take some time.
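
The following is a minimal sketch of such a feature-based pixel classification workflow, in the spirit of [1], using scikit-image and XGBoost [2]. The chosen features, file names, and label encoding are illustrative assumptions, not GeoDict's internal implementation:

```python
# Sketch: compute per-pixel feature images, train a boosted tree on labeled
# pixels, then classify every pixel. File names and features are hypothetical.
import numpy as np
from skimage import filters, io
import xgboost as xgb

slice_img = io.imread("slice_000.tif").astype(np.float32)

# Per-pixel features: raw intensity plus smoothed and edge responses.
features = np.stack([
    slice_img,
    filters.gaussian(slice_img, sigma=1),
    filters.gaussian(slice_img, sigma=4),
    filters.sobel(slice_img),
], axis=-1).reshape(-1, 4)

labels = io.imread("labels_000.tif").reshape(-1)  # 0 = unlabeled, 1..3 = phases
mask = labels > 0                                 # train only on labeled pixels

clf = xgb.XGBClassifier(n_estimators=200, max_depth=6)
clf.fit(features[mask], labels[mask] - 1)         # XGBoost expects 0-based classes

# Apply to every pixel of the slice (and, analogously, to all slices of the scan).
segmented = clf.predict(features).reshape(slice_img.shape) + 1
```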

Training the Boosted Tree is fast, but the whole process is dominated by the time required to compute the feature images. Compared to the deep learning methods below, it needs less manually labeled data, and no GPU is required.

In this case, the method performs well in separating the grain foreground from the pore space and the binder. The segmentation of the binder, however, is not yet fully satisfactory: in some areas, grains in the background are labeled as binder, and parts of the binder are labeled as pore.

Case 2: Segmentation using 2D U-Net

The second method we tested is a deep learning-based segmentation using a 2D U-Net [3]. 

U-Nets have been widely applied to image segmentation tasks. The most critical part of segmenting with a deep neural network is the creation of the training data. We chose to train the network with sparse label data, which allows the user to freely select where to place labels in the 3D image without having to label full slices.
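
The sketch below shows one common way to implement such sparse-label training, assuming PyTorch: voxels marked as unlabeled are simply excluded from the cross-entropy loss. The network name unet2d and the label convention (0 = unlabeled) are assumptions for illustration, not GeoDict code:

```python
# Sketch of sparse-label training: class index 0 marks "no label" and is
# ignored by the loss, so only user-labeled pixels drive the gradients.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(ignore_index=0)

def train_step(unet2d, optimizer, image_batch, sparse_labels):
    # image_batch: (N, 1, H, W); sparse_labels: (N, H, W), 0 where unlabeled.
    # Logit channel 0 is a dummy so phase labels 1..num_phases map directly.
    optimizer.zero_grad()
    logits = unet2d(image_batch)        # (N, num_phases + 1, H, W)
    loss = criterion(logits, sparse_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```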

Training the 2D U-Net takes significantly more time than training the Boosted Tree classifier and requires a GPU.

The 2D U-Net results for the grains are again very good, in some places even better than the Boosted Tree results, especially where heavy curtaining artifacts are present. For the binder phase, the results also look better than those of the Boosted Tree, but some artifacts remain: in places, discontinuities occur between the individually labeled Z-slices.

Case 3: Segmentation using 3D U-Net

The third method we tested is a deep learning-based segmentation using a 3D U-Net [4].

The 3D U-Net extends the 2D architecture to volumetric data, so that predictions can use the context of neighboring slices. As in the 2D case, we trained the network with sparse label data, placed freely in the 3D image without requiring fully labeled slices.

We used mostly the same training data as in the 2D U-Net case and only added labels in consecutive slices in some areas (~5% more labels) to profit from the 3D context. Under these conditions, the training times are similar to the second case.
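
The sketch below illustrates the essential 2D-to-3D change following [4], again assuming PyTorch: convolutions, normalization, and pooling become volumetric, which is what lets the network look across Z-slices. The block shown is illustrative, not GeoDict's implementation:

```python
# Sketch: the double-convolution block of a U-Net, in its 3D variant [4].
# The overall encoder-decoder wiring with skip connections stays the same.
import torch.nn as nn

def double_conv3d(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

# Inputs are now (N, C, D, H, W) volumes instead of (N, C, H, W) slices,
# which is why discontinuities between individually labeled Z-slices vanish.
```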

The results for the grains are again very similar, and no advantage can be observed. In the binder phase, however, the 3D U-Net performs better: thanks to the 3D context awareness, there are no discontinuities between slices.

Video tutorials on YouTube

Importing and processing CT scans (Part 1)

Importing and processing CT scans (Part 2)

Importing and processing CT scans (Part 3)

References

[1] I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, H. S. Seung: "Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification", Bioinformatics, Vol. 33, Issue 15: 2424–2426, 2017. https://doi.org/10.1093/bioinformatics/btx180

[2] T. Chen, C. Guestrin: "XGBoost: A Scalable Tree Boosting System", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), Association for Computing Machinery, New York, NY, USA: 785–794, 2016. https://doi.org/10.1145/2939672.2939785

[3] O. Ronneberger, P. Fischer, T. Brox: "U-Net: Convolutional Networks for Biomedical Image Segmentation", Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9351: 234–241, 2015. https://doi.org/10.1007/978-3-319-24574-4_28

[4] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, O. Ronneberger: "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation", Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9901: 424–432, 2016. https://doi.org/10.1007/978-3-319-46723-8_49

Acknowledgement

We thank our partners at ZEISS for providing the FIB-SEM scan of the cathode.