16.10 - 18.00 | Session 6: Image processing and analysis
Chairs: Lachlan Whitehead and Kathryn Hall
16.10 - 17.00 | Technique in Focus - Machine Learning
- Artificial Intelligence in Bioimage Analysis
  Professor Erik Meijering, UNSW Sydney
- Automated segmentation of glomeruli in mouse kidneys using machine learning
  Dr Somesh Mehra, CSL Innovation, Melbourne
- Super-resolved view of PfCERLI1, a rhoptry associated protein essential for Plasmodium falciparum merozoite invasion of erythrocytes
  Dr Sonja Frolich, University of Adelaide
(1 x 20min + 2 x 10min + 10min Q&A)
17.00 - 18.00 | Plenary
Recordings
Q&A Session
Technique in Focus Q&A
Plenary Session Q&A
Chat Transcript
00:20:38 Renee Whan: Hi everyone, don’t forget to post your questions here, people.
00:28:34 John Lock: Hi Erik, great talk! Does your HCRM deep learning method get better with larger data sets? i.e. does it continue to improve such that it may outperform other models with sufficient data?
00:29:40 dasandrew: Great talk thanks Erik! Would it be possible to apply your deep learning framework to analyse single particle tracking of transcription factor behaviour in the nucleus? Have you already done this and what would be the best way forward for someone wanting to try it out?
00:29:49 Erik Meijering: Thanks John! Yes, probably. We definitely have to do more experiments. To be continued. 🙂
00:29:54 Greg Bass: @Erik: How do the neural reconstruction algorithms compare to Tiago’s SNT toolkit, in either methodology or performance?
00:31:49 ANDLEEB HANIF: Thanks Erik for such a comprehensive talk.
00:32:41 Erik Meijering: Thanks @dasandrew! We have not yet tried the framework specifically for tracking transcription factors, simply because we didn’t have such images. Would be interesting to try. The framework is pretty generic so it should work. 🙂
00:34:28 Erik Meijering: Hi Greg Bass, we have not yet had a chance to look into SNT, so I can’t comment at this point. But we’ll get there eventually. 🙂
00:37:46 Nela Durisic: Hi Erik, I used the Python code published in your Cell Reports paper with Marloes Arts for single particle tracking. It was so sensitive to the training parameters for the deep-learning part that I basically needed to know the diffusion parameters in order to recover them. Would you have a more robust code to recommend?
00:37:52 Juan: Hi Erik, just found the NAS paper and *blank* GitHub repo. 😂 I’m excited about this method and am wondering when we can try it on our data.
00:38:23 Genevieve: @Somesh How long does prediction take for a single image?
00:40:14 Anna Trigos: Great work and talk Somesh!!!
00:41:43 Kathryn Hall: From Erik: Hi Nela, good question! I’m afraid I will need to pass it on to Marloes, who implemented that specific method. I can get you in touch with her if you like.
00:41:57 Erik Meijering: Hi Juan, thanks for your interest. The paper actually got accepted last week, so we haven’t had a chance yet to update the GitHub repository. Will be done soon!
00:42:47 Nina Tubau: Somesh, how generalisable is the method? Can it be used for any other pattern as long as it’s in WSIs?
00:42:50 Nela Durisic: Thanks, Marloes and my team had a few meetings already and that was the conclusion. She is a PhD student and busy with other things. Just thought there might be something better now.
00:45:31 Erik Meijering: Nela, since Marloes graduated and moved on, we have not developed the method further. Have you spoken with my former postdoc Ihor Smal, who was also involved in that research? He might have another look at it with you.
00:46:35 Nela Durisic: ok, thanks Erik. Will contact Ihor
00:52:25 Thanushi Peiris: Somesh, did you also use a DL model for the localisation step? also what was your test/train split
00:53:32 Thanushi Peiris: also how did you identify the Bowman’s capsule?
00:54:36 Thanushi Peiris: sweet thanks!
00:54:54 Cindy Evelyn: Hi Sonja, thanks for the interesting talk. Have you considered checking whether the CERLI knockdown still allows pore formation to occur? Pore formation can be indicated through a calcium flux study on live imaging of invasion.
00:57:11 Andrew Das: Hi Erik, where can we access the Bayesian and DL packages for applying to SPT data?
00:58:51 Neftali Flores Rodriguez: Great talk Sonja
01:16:58 Renee Whan: Hi everyone, don’t forget to provide some questions for Anna
01:24:19 Aseem Kashyap: How were the ground truths generated for training UNets for plant cell segmentation? Manual labelling? Have you tried transfer learning, i.e. models trained on one type of cell making predictions on completely different cell types with minimal re-training data?
01:24:38 Renee Whan: Fantastic talk Anna. For some of the less experienced among us, could you tell us how you validate your algorithms, and, relatedly, what degree of accuracy you would say is the bare minimum?
01:27:33 Thanushi Peiris: Hi Anna, I’ve used the multicut workflow in Ilastik before and am interested by this automated attractive/repulsive edge classification you mention based on the nuclei. In Ilastik you have to manually specify these – do you have an accompanying package that we can use to automate that using our own “rules” for good edges?
01:32:08 Anna Kreshuk: Folks, I have no idea where this music is coming from, I swear it wasn’t on when I was recording
01:33:19 Anna Kreshuk: I hear it in the background of my recording now, but looks like it’s just me, good 🙂
01:34:58 John Lock: Can the superpixel analysis utilise multiple image channels, i.e. finding regions with similar combinations of image intensities across channels? If so, is there a limit to how many, i.e. RGB, or an arbitrary number?
01:35:15 John Lock: Wonderful talk by the way, thanks so much!
01:38:01 Genevieve: Great talk, thanks Anna!
01:38:32 Nina Tubau: Really interesting talk Anna, thanks!
01:38:40 Ian Harper: No question, but just would like to acknowledge the FANTASTIC contribution of this cutting edge OPEN software…
01:38:43 Pamela Young: Amazing talk, session, and day! Thanks all!
01:38:49 Yingying Su: Thank you Anna!! Great talk!
01:41:14 Greg Bass: How would the algorithm handle multi-nucleated cells, like skeletal muscle? Would it over-segment despite no clear cell boundary between those nuclei?
01:43:14 Linda Dansereau: Great talk!
01:44:33 Kathryn Hall: @Ian Harper – well said! Indeed!
01:47:13 Kathryn Hall: Thank you Anna! Thank you Erik, Somesh and Sonja!
01:47:15 John Lock: Thanks Lachlan, Anna and all, what a great session!
01:47:27 Greg Bass: Thanks everyone!
Image Segmentation with Machine Learning
Dr Anna Kreshuk, EMBL Heidelberg
Super-resolved view of PfCERLI1, a rhoptry associated protein essential for Plasmodium falciparum merozoite invasion of erythrocytes
Dr Sonja Frolich, Postdoctoral Researcher, The University of Adelaide
Dr Frolich completed her PhD training in Molecular Parasitology in 2014 at the iThree Institute (i3), characterising the mechanisms of Eimeria maxima oocyst wall formation with a view to interrupting coccidiosis, a parasitic disease of poultry. In 2014, she was appointed as a junior academic in the Climate Change Cluster (C3, UTS) to work on an ARC-funded Linkage project grant with GE Health, establishing a protein expression platform in the green alga Chlamydomonas. During this period, she co-supervised undergraduate students and lectured to undergraduate and Masters students in Parasitology, Microbiology, Microscopy and Flow Cytometry. She is the recipient of several awards and travel grants, including the Dean’s Academic Excellence Award (UTS) and the Australian Technion Society Award. In March 2015, Dr Frolich joined the Genome Integrity Unit at the Children’s Medical Research Institute to develop automated microscopy-assisted high-content assays and optimise immunolabelling methods, applying state-of-the-art super-resolution imaging technologies (3D SIM, STED, STORM and Airyscan) to structural studies of telomeres, for which she was awarded a Research Excellence Award. In April 2016, Sonja joined the Research Centre for Infectious Diseases to work on an NHMRC-funded project focusing on the biosynthesis of structural proteins in the medically important parasite Plasmodium falciparum.
Automated segmentation of glomeruli in mouse kidneys using machine learning
Dr Greg Bass, Senior Scientist, CSL Innovation
Co-authors
Mr Somesh Mehra, CSL Innovation
Dr Sandro Prato, CSL Innovation
Ms Yun Dai, CSL Innovation
Ms Amanda Turner, CSL Innovation
Dr Helen Cao, CSL Innovation
Ms Ana Maluenda, CSL Innovation
Dr Monther Alhamdoosh, CSL Innovation
Dr Greg Bass, CSL Innovation
Histopathological assessment of kidney tissue is commonly used in preclinical drug development, allowing scientists to monitor disease progression or assess the therapeutic potential of drug candidates in animal models. Kidney tissues contain hundreds of capillary tufts called glomeruli, which play an important role in the filtration of blood. Current analyses of glomeruli are generally low-throughput and labour-intensive, requiring manual segmentation of these small heterogeneous regions of interest from large whole-slide images (WSI). To accelerate the analysis of WSIs, we developed a machine learning-based pipeline in Python to automatically recognize and quantify glomeruli in H-PAS-stained mouse kidney tissue. We used an ensemble of convolutional neural networks to locate glomeruli in a WSI, combined with a U-Net to segment glomerular boundaries. We then applied various image analysis techniques to extract individual features of the glomeruli, enabling clinically relevant quantitative measures of kidney disease progression. We found that our algorithm is robust to the range of staining and structural variations that are expected across different batches or disease models. Finally, we utilized the open-source Streamlit framework to develop a user-friendly web application for life scientists to use. Our end-to-end solution enables scalable characterisation of glomeruli in WSIs, and consequently improves both the efficiency and statistical power of kidney tissue analyses widely used in the biopharmaceutical industry.
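The feature-extraction step of a pipeline like this can be sketched in plain Python. This is a minimal illustration only, not the CSL pipeline itself: `glomerulus_features` and the toy mask are hypothetical, and a real implementation would operate on labelled masks produced by the segmentation network and scale all measurements by microns per pixel.

```python
import math

def glomerulus_features(mask):
    """Compute simple morphometric features from a binary glomerulus mask.

    `mask` is a 2D list of 0/1 values, as might come from thresholding
    a segmentation network's output. Pixel size is assumed to be 1 unit.
    """
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    area = len(coords)
    if area == 0:
        return {"area": 0, "equivalent_diameter": 0.0, "bbox": None}
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    bbox = (min(rows), min(cols), max(rows), max(cols))
    # Diameter of a circle with the same area as the mask.
    eq_diam = 2.0 * math.sqrt(area / math.pi)
    return {"area": area, "equivalent_diameter": eq_diam, "bbox": bbox}

# A toy 5x5 mask with a 3x3 "glomerulus".
toy = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
feats = glomerulus_features(toy)
print(feats["area"])  # 9
```

In practice such features would be computed per detected glomerulus across a whole-slide image and aggregated into the kind of quantitative disease-progression measures the abstract describes.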
Artificial Intelligence in Bioimage Analysis
Professor Erik Meijering, UNSW Sydney
Advanced light microscopy imaging technologies are having an enormous impact on biomedical research, as they allow visualizing the structure and function of cells and even molecules with high sensitivity and specificity. The large data volumes generated in such studies require fully automated computational methods for accurate and reproducible quantitative analysis and interpretation of these data. To this end we develop advanced computer vision methods. Increasingly these methods are based on deep learning using artificial neural networks. This talk will highlight methods we have been developing specifically for cell and particle tracking and motion analysis.
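As a rough illustration of the linking problem at the heart of particle tracking, the sketch below greedily associates detections between consecutive frames by nearest-neighbour distance. This is a generic baseline, not the methods described in the talk; deep-learning trackers typically replace this hand-crafted distance cost with learned association scores. `link_detections` and its parameters are hypothetical.

```python
import math

def link_detections(frames, max_dist=5.0):
    """Greedy nearest-neighbour linking of per-frame detections into tracks.

    `frames` is a list of frames; each frame is a list of (x, y)
    detections, as might be produced by a particle-detection network.
    """
    tracks = [[p] for p in frames[0]]
    for frame in frames[1:]:
        unclaimed = list(frame)
        for track in tracks:
            if not unclaimed:
                continue
            last = track[-1]
            best = min(unclaimed, key=lambda p: math.dist(p, last))
            # Only extend the track if the closest detection is plausible.
            if math.dist(best, last) <= max_dist:
                track.append(best)
                unclaimed.remove(best)
        # Detections not linked to any existing track start new tracks.
        tracks.extend([[p] for p in unclaimed])
    return tracks

# Two particles drifting over three frames.
frames = [[(0.0, 0.0), (10.0, 10.0)],
          [(1.0, 0.5), (10.5, 9.5)],
          [(2.0, 1.0), (11.0, 9.0)]]
tracks = link_detections(frames)
print(len(tracks))  # 2
```

Greedy linking breaks down under dense crowding or particle appearance/disappearance, which is precisely where learned association models and probabilistic (e.g. Bayesian) trackers offer their advantage.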