Session 6

Image processing and analysis
Recordings:
- Q&A Session
- Technique in Focus Q&A
- Plenary Session Q&A

Chat Transcript

00:20:38 Renee Whan: Hi everyone, don’t forget to post your questions here, people.
00:28:34 John Lock: Hi Erik, great talk! Does your HCRM deep learning method get better with larger data sets? i.e. does it continue to improve such that it may outperform other models with sufficient data?
00:29:40 dasandrew: Great talk thanks Erik! Would it be possible to apply your deep learning framework to analyse single particle tracking of transcription factor behaviour in the nucleus? Have you already done this and what would be the best way forward for someone wanting to try it out?
00:29:49 Erik Meijering: Thanks John! Yes, probably. We definitely have to do more experiments. To be continued. 🙂
00:29:54 Greg Bass: @Erik: How do the neural reconstruction algorithms compare to Tiago’s SNT toolkit, in either methodology or performance?
00:31:49 ANDLEEB HANIF: Thanks Erik for such a comprehensive talk.
00:32:41 Erik Meijering: Thanks @dasandrew! We have not yet tried the framework specifically for tracking transcription factors, simply because we didn’t have such images. Would be interesting to try. The framework is pretty generic so it should work. 🙂
00:34:28 Erik Meijering: Hi Greg Bass, we have not yet had a chance to look into SNT, so I can’t comment at this point. But we’ll get there eventually. 🙂
00:37:46 Nela Durisic: Hi Erik, I used the Python code published in your Cell Reports paper with Marloes Arts for single particle tracking. It was so sensitive to the training parameters of the deep learning part that I basically needed to know the diffusion parameters in order to recover them. Would you have more robust code to recommend?
00:37:52 Juan: Hi Erik, just found the NAS paper and *blank* GitHub repo. 😂 I’m excited about this method and am wondering when we can try it on our data.
00:38:23 Genevieve: @Somesh How long does prediction take for a single image?
00:40:14 Anna Trigos: Great work and talk Somesh!!!
00:41:43 Kathryn Hall: From Erik: Hi Nela, good question! I’m afraid I will need to pass it on to Marloes, who implemented that specific method. I can get you in touch with her if you like.
00:41:57 Erik Meijering: Hi Juan, thanks for your interest. The paper actually got accepted last week, so we haven’t had a chance yet to update the GitHub repository. Will be done soon!
00:42:47 Nina Tubau: Somesh, how generalisable is the method? Can it be used for any other pattern as long as it’s in WSIs?
00:42:50 Nela Durisic: Thanks, Marloes and my team have had a few meetings already and that was the conclusion. She is a PhD student and busy with other things. Just thought there might be something better now.
00:45:31 Erik Meijering: Nela, since Marloes graduated and moved on, we have not developed the method further. Have you spoken with my former postdoc Ihor Smal, who was also involved in that research? He might have another look at it with you.
00:46:35 Nela Durisic: ok, thanks Erik. Will contact Ihor
00:52:25 Thanushi Peiris: Somesh, did you also use a DL model for the localisation step? Also, what was your test/train split?
00:53:32 Thanushi Peiris: also how did you identify the Bowman’s capsule?
00:54:36 Thanushi Peiris: sweet thanks!
00:54:54 Cindy Evelyn: Hi Sonja, thanks for the interesting talk. Have you considered checking whether the CERLI knockdown still allows pore formation to occur? Pore formation can be indicated through a calcium flux study on live imaging of invasion.
00:57:11 Andrew Das: Hi Erik, where can we access the Bayesian and DL packages for applying to SPT data?
00:58:51 Neftali Flores Rodriguez: Great talk Sonja
01:16:58 Renee Whan: Hi everyone, don’t forget to provide some questions for Anna.
01:24:19 Aseem Kashyap: How were the ground truths generated for training the UNets for plant cell segmentation? Manual labelling? Have you tried transfer learning, where models trained on one cell type are made to predict on completely different cell types with minimal re-training data?
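[Editor's illustration of the transfer-learning idea raised in this question. This is a minimal, hypothetical numpy sketch (all names invented, not from any speaker's code): the pretrained "encoder" weights are frozen and only a small linear head is refit on minimal data from the new cell type.]

```python
import numpy as np

def finetune_head(encoder_w, x_new, y_new, ridge=1e-3):
    """Transfer-learning sketch: keep the pretrained 'encoder' weights
    frozen and refit only the final linear head on a small amount of
    labelled data from the new cell type (ridge least squares)."""
    feats = np.tanh(x_new @ encoder_w)            # frozen feature extractor
    a = feats.T @ feats + ridge * np.eye(feats.shape[1])
    head_w = np.linalg.solve(a, feats.T @ y_new)  # only the head is re-fit
    return head_w

# hypothetical usage: encoder pretrained on cell type A,
# a handful of labelled samples from cell type B
rng = np.random.default_rng(0)
encoder_w = rng.normal(size=(6, 4))   # pretrained weights (frozen)
x_b = rng.normal(size=(10, 6))        # minimal re-training inputs
y_b = rng.normal(size=(10, 1))        # minimal re-training labels
head_w = finetune_head(encoder_w, x_b, y_b)
```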
01:24:38 Renee Whan: Fantastic talk Anna. For some of the less experienced amongst us, could you tell us how you undertake the validation of your algorithms, and, relatedly, what degree of accuracy you would say is the bare minimum?
01:27:33 Thanushi Peiris: Hi Anna, I’ve used the multicut workflow in ilastik before and am intrigued by this automated attractive/repulsive edge classification you mention based on the nuclei. In ilastik you have to specify these manually – do you have an accompanying package that we can use to automate that using our own “rules” for good edges?
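[Editor's illustration of the "rules for edges" idea in this question. A minimal, hypothetical numpy sketch, not ilastik code: region-adjacency edges whose two superpixels overlap different nuclei get a repulsive weight (the boundary should survive), all other edges get an attractive one.]

```python
import numpy as np

def edge_signs(superpixels, nuclei, attract=1.0, repel=-1.0):
    """Rule-based signed weights for region-adjacency edges:
    repulsive if the two adjacent superpixels contain distinct
    nuclei (likely different cells), attractive otherwise."""
    # record which nucleus labels each superpixel overlaps (0 = background)
    contains = {}
    for sp, nuc in zip(superpixels.ravel(), nuclei.ravel()):
        if nuc != 0:
            contains.setdefault(sp, set()).add(nuc)
    # adjacency from horizontal and vertical pixel neighbours
    edges = set()
    for a, b in zip(superpixels[:, :-1].ravel(), superpixels[:, 1:].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    for a, b in zip(superpixels[:-1, :].ravel(), superpixels[1:, :].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    weights = {}
    for a, b in sorted(edges):
        na, nb = contains.get(a, set()), contains.get(b, set())
        # distinct nuclei on both sides -> keep this boundary (repulsive)
        weights[(a, b)] = repel if (na and nb and na != nb) else attract
    return weights
```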
01:32:08 Anna Kreshuk: Folks, I have no idea where this music is coming from, I swear it wasn’t on when I was recording
01:33:19 Anna Kreshuk: I hear it in the background of my recording now, but looks like it’s just me, good 🙂
01:34:58 John Lock: Can the superpixel analysis utilise multiple image channels, i.e. finding regions with similar combinations of image intensities across channels? If so, is there a limit to how many, i.e. RGB only, or an arbitrary number?
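[Editor's illustration of the multi-channel point in this question. Superpixel methods that cluster pixels on joint (position, intensity) features extend naturally to any number of channels; here is a toy, hypothetical numpy sketch (not the presented software) using k-means over (y, x, c1..cN) features.]

```python
import numpy as np

def multichannel_superpixels(img, k=4, n_iter=10, spatial_weight=1.0, seed=0):
    """Toy SLIC-style superpixels via k-means over (y, x, channels...)
    features; the channel axis can hold any number of channels, not
    just RGB. img has shape (H, W, C)."""
    h, w, c = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # stack spatial coordinates and all C channels into one feature vector
    feats = np.concatenate(
        [spatial_weight * yy[..., None],
         spatial_weight * xx[..., None],
         img.astype(float)], axis=-1).reshape(-1, c + 2)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels.reshape(h, w)
```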
01:35:15 John Lock: Wonderful talk by the way, thanks so much!
01:38:01 Genevieve: Great talk, thanks Anna!
01:38:32 Nina Tubau: Really interesting talk Anna, thanks!
01:38:40 Ian Harper: No question, but just would like to acknowledge the FANTASTIC contribution of this cutting-edge OPEN software…
01:38:43 Pamela Young: Amazing talk, session, and day! Thanks all!
01:38:49 Yingying Su: Thank you Anna!! Great talk!
01:41:14 Greg Bass: How would the algorithm handle multi-nucleated cells, like skeletal muscle? Would it over-segment despite no clear cell boundary between those nuclei?
01:43:14 Linda Dansereau: Great talk!
01:44:33 Kathryn Hall: @Ian Harper – well said! Indeed!
01:47:13 Kathryn Hall: Thank you Anna! Thank you Erik, Somesh and Sonja!
01:47:15 John Lock: Thanks Lachlan, Anna and all, what a great session!
01:47:27 Greg Bass: Thanks everyone!