Harnessing the power of large foundation AI models in bioimage analysis

Key information

Application close date: 05 October 2023, 11:59 BST
Hours per week: 36 (full time)
Posted: 25 August 2023

This sandwich placement will be based in the Electron Microscopy STP, supervised by Lucy Collinson.

Project background and description

A wide variety of imaging methods are used to help researchers study biological processes in health and disease. Modern techniques can image cells and tissues in 3D at high resolution, producing huge amounts of volumetric data. Furthermore, complementary types of information can be obtained from different imaging modalities, such as volume electron microscopy (named as one of the seven technologies to watch in 2023 by Nature), fluorescence microscopy and X-ray microscopy, allowing researchers to piece together the structure and function of the building blocks of life across a range of scales, down to the nanometre level.

Extracting information from these images has historically been a very laborious process, often requiring a large amount of manual interaction, but recent artificial intelligence (AI) techniques such as deep learning with convolutional neural networks have produced remarkable results in many imaging domains, e.g. finding the nuclear envelope in volume electron microscopy data [1]. However, such systems can still require large amounts of “ground truth” data for training, which are frequently produced manually by experts. This training bottleneck has been a major impediment to the broader uptake of AI methods in bioimage analysis. More recently, large-scale “foundation models” such as OpenAI’s ChatGPT and Meta AI’s Segment Anything Model [2] have revolutionised mainstream AI, dealing with text and images from sources like social media and photographs. Bringing these methods into the bioimaging domain could significantly reduce some of the current limitations on image analysis, allowing a wider range of data to be thoroughly analysed at unprecedented scales.
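
As a purely illustrative sketch (not part of the project specification), the Python snippet below shows one way a pre-trained Segment Anything Model could be applied to a single 2D electron microscopy slice without any task-specific training data. It assumes the open-source segment-anything package and a separately downloaded model checkpoint; the file paths shown are placeholders.

    # Illustrative sketch: running Meta AI's Segment Anything Model (SAM) on one
    # 2D electron microscopy slice. Assumes the `segment-anything` package and a
    # downloaded checkpoint; file names below are placeholders, not project data.
    import numpy as np
    import tifffile
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    # Load a grayscale EM slice and convert it to the HxWx3 uint8 format SAM expects.
    em_slice = tifffile.imread("example_em_slice.tif")  # placeholder path
    em_slice = (255 * em_slice.astype(np.float32) / em_slice.max()).astype(np.uint8)
    em_rgb = np.stack([em_slice] * 3, axis=-1)

    # Build the model from a checkpoint and generate masks for every structure it
    # can find, with no manually drawn ground truth.
    sam = sam_model_registry["vit_b"](checkpoint="path/to/sam_vit_b_checkpoint.pth")
    mask_generator = SamAutomaticMaskGenerator(sam)
    masks = mask_generator.generate(em_rgb)

    # Each mask is a dict containing a binary segmentation and quality estimates.
    print(f"Found {len(masks)} candidate segments")
    print(masks[0]["segmentation"].shape, masks[0]["predicted_iou"])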

The project will investigate the application of modern large-scale foundation AI methods to the analysis of images acquired in the electron microscopy science technology platform (EM STP), which may include images from a variety of electron, light or x-ray microscopy modalities. This work will contribute to faster analysis of images for a wide range of research at the Crick, including cancer research, infection, immunity, ageing and neurodegeneration.  

Candidate background

The post holder should embody and demonstrate the Crick ethos and ways of working: bold, open and collegial. The candidate must be registered at a UK Higher Education Institution, studying in the UK, must have completed a minimum of two years’ undergraduate study in a relevant discipline, and must be on track to receive a final degree grade of 2:1 or 1st. In addition, they should be able to demonstrate the following experience and key competencies:

  • This project will suit students from a computer science, physical sciences, mathematics or similar background, with a working knowledge of programming in a common language (e.g. Python, Java, C/C++). Interest in, or experience with, AI, machine learning or deep learning would be beneficial.
  • Good knowledge in relevant scientific area(s)
  • Good written and spoken communication skills
  • Ability to work independently and to interact effectively within a group

References

1. Spiers, H., Songhurst, H., Nightingale, L., de Folter, J., Hutchings, R., Peddie, C.J., . . . Jones, M.L. (2021). Deep learning for automatic segmentation of the nuclear envelope in electron microscopy data, trained with volunteer segmentations. Traffic 22: 240-253. PubMed abstract

2. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., . . . Girshick, R. (2023). Segment Anything. Preprint available at arXiv. https://doi.org/10.48550/arXiv.2304.02643