The Center for Intelligent Imaging offers researchers with unresolved biomedical questions numerous paths to discovery. Artificial Intelligence (AI) models can be applied to deploy new imaging modalities and new methods of acquisition and reconstruction. Fast, multi-scale quantitative imaging data can be generated and integrated with other multi-sensor metadata. Real-time image and trajectory analyses are another possibility, enabling adaptive image acquisition as a scan proceeds. Integrated data can then be used to obtain the best solution set for the specific research question. Ultimately, the center is a biomedical, data-driven ecosystem that puts open science and open software development at the forefront and offers efficient methods for data acquisition, analysis, and visualization to researchers across disciplines and departments.
Featured Research Projects
Machine and Deep Learning Applied to MR Images to Characterize Degenerative Joint Disease
Valentina Pedoia, PhD, and Sharmila Majumdar, PhD, an Assistant Professor and a Professor, respectively, in the Department of Radiology and Biomedical Imaging, are developing advanced computer vision and deep learning algorithms to improve the use of non-invasive imaging as a diagnostic and prognostic tool for degenerative joint disease. Their lab has developed deep learning convolutional neural networks for musculoskeletal tissue segmentation, abnormality detection, and severity staging covering a diverse range of imaging modalities and diseases, including bone fractures, soft tissue degeneration, and sports injuries. This work has been supported by NIH R00 and R61 grants, as well as the Department's ongoing partnership with GE Healthcare.
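The lab's published descriptions do not include implementation details, but the core idea of convolutional segmentation can be illustrated with a minimal sketch. The block below, assuming PyTorch and a hypothetical single-channel MR slice, shows a small encoder-decoder (U-Net-style) network of the kind commonly used for musculoskeletal tissue segmentation; layer sizes and the number of tissue classes are illustrative only, not the lab's actual architecture.

```python
# Minimal sketch (not the lab's actual model): a small U-Net-style CNN
# for per-pixel tissue segmentation of single-channel MR image slices.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=5):            # class count is illustrative
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: one 256x256 MR slice -> per-pixel tissue class probabilities.
logits = TinyUNet()(torch.randn(1, 1, 256, 256))
probs = logits.softmax(dim=1)                    # shape: (1, n_classes, 256, 256)
```

Stacking the per-slice (or fully 3D) label maps is what makes the fully automatic 3D modeling shown below possible.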
Fully automatic segmentation and 3D modeling from MR images of different anatomies.
Multi-modal/multi-source graph of a knee OA population. (A) Extracted network based on biomechanics and compositional MRI variables, showing the presence of three distinct sub-networks marked with dashed circles. (B) The same network colored by Kellgren and Lawrence (KL) grade (OA severity). The combined network of biomechanics and compositional MRI showed differences in osteoarthritis severity between subnetwork 1 (prevalence of blue nodes, low KL grading) and subnetwork 2 (prevalence of red nodes, high KL grading). (C) The subjects in the progression cohort are located in both subnetworks; however, in subnetwork 1 the progressors are located in a specific region marked with a dashed circle.
Developing an AI System for Brain MRI Diagnoses
Andreas Rauschecker, MD, PhD, a clinical fellow in neuroradiology, is developing an AI system for probabilistic brain MRI diagnoses. The system computationally models a neuroradiologist's process of image interpretation: a convolutional neural network detects imaging abnormalities; image processing produces quantitative descriptions of those abnormalities in terms of signal, location, and volumetric features; and Bayesian inference probabilistically integrates the derived features with clinical features, ultimately arriving at a probability-ranked differential diagnosis. This set of methods has been applied to 50 unique neurological diseases, with promising results. This work was supported by an RSNA Resident Grant and by NIH T-32 training grants at the UCSF and University of Pennsylvania Radiology Departments.
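The project's exact probabilistic formulation is not described here, but the final Bayesian step can be illustrated with a minimal sketch. Assuming, purely hypothetically, that each candidate disease has a prior prevalence and per-disease likelihoods for a few discretized imaging features, the code below applies Bayes' rule under a naive independence assumption to produce a probability-ranked differential diagnosis; every disease name and number is a placeholder.

```python
# Minimal sketch (not the published system): naive Bayesian integration of
# discretized imaging features into a ranked differential diagnosis.
# All diseases, features, and probabilities below are hypothetical.
priors = {"MS": 0.40, "CNS lymphoma": 0.10, "Low-grade glioma": 0.50}

# P(feature present | disease) for a few derived imaging features.
likelihoods = {
    "MS":               {"periventricular": 0.80, "enhancing": 0.30, "multifocal": 0.85},
    "CNS lymphoma":     {"periventricular": 0.60, "enhancing": 0.90, "multifocal": 0.40},
    "Low-grade glioma": {"periventricular": 0.20, "enhancing": 0.15, "multifocal": 0.10},
}

def differential(observed):
    """Rank diseases by posterior probability given observed binary features."""
    posteriors = {}
    for disease, prior in priors.items():
        p = prior
        for feature, present in observed.items():
            p_feat = likelihoods[disease][feature]
            p *= p_feat if present else (1.0 - p_feat)   # naive independence assumption
        posteriors[disease] = p
    total = sum(posteriors.values())
    return sorted(((d, p / total) for d, p in posteriors.items()),
                  key=lambda x: x[1], reverse=True)

# Example: lesions are periventricular and multifocal but non-enhancing.
for disease, prob in differential({"periventricular": True,
                                   "enhancing": False,
                                   "multifocal": True}):
    print(f"{disease}: {prob:.2f}")
```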
Applying AI to the Imaging of Chronic Lower Back Pain
Sharmila Majumdar, PhD, is leading a new technology development grant from the NIH, as part of the HEAL Initiative and BACPAC consortium, aimed at improving treatments for chronic pain and curbing the rates of opioid use. Dr. Majumdar and a highly experienced multidisciplinary research team – including UCSF's Valentina Pedoia, PhD, Cynthia Chin, MD, Duygu Tosun, PhD, Irina Strigo, PhD, and Harvard's Mary Bouxsein, PhD – bring together extensive expertise in MR bioengineering, advanced MRI data analysis, radiology, neuroscience, neurosurgery, orthopedic surgery, and multi-dimensional analytics to respond to the critical need for clarity in the etiologies of chronic back pain and other disorders of the spine. The field has long lacked reliable methods for determining the appropriate course of patient care and for objectively evaluating the effectiveness of various interventions, and the project leverages key technical advancements to address this problem. The team is developing machine learning-based, faster MR acquisition methods as well as machine learning for image segmentation and extraction of objective disease-related features from images. Furthermore, the team will develop, validate, and deploy end-to-end deep learning-based technologies for accelerated image reconstruction, tissue segmentation, and detection of spinal degeneration to facilitate automated, robust assessment of structure-function relationships between spine characteristics, neurocognitive pain response, and patient-reported outcomes.
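The grant's technical aims are described above only at a high level, so the sketch below simply illustrates the idea behind accelerated acquisition: sampling only a fraction of k-space shortens scan time, and the missing data must then be recovered by a reconstruction step. A trivial zero-filled inverse FFT on a synthetic phantom stands in here for the learned, deep learning-based reconstruction the team is developing.

```python
# Minimal sketch of why accelerated acquisition needs a reconstruction model:
# retrospectively undersample k-space and compare a naive zero-filled
# reconstruction with the fully sampled image. The phantom is synthetic, and
# a learned reconstruction would replace the zero-filling step.
import numpy as np

image = np.zeros((128, 128))
image[32:96, 48:80] = 1.0                       # simple synthetic "anatomy"

kspace = np.fft.fftshift(np.fft.fft2(image))    # fully sampled k-space

mask = np.zeros_like(kspace, dtype=bool)
mask[:, ::4] = True                             # keep every 4th phase-encode line
mask[:, 60:68] = True                           # plus a fully sampled center region
undersampled = np.where(mask, kspace, 0)

zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

acceleration = mask.size / mask.sum()
error = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"acceleration ~{acceleration:.1f}x, zero-filled relative error {error:.2f}")
```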

Bottom: Voxel-by-voxel analysis: mean T1ρ relaxation maps and p-values for significant differences between the minimal/moderate (0-40) and severe (>40) disability groups.
Improving Detection of Prostate Cancer through Deep Learning
Kirti Magudia, MD, PhD, a clinical fellow in abdominal imaging, is developing and validating machine learning tools to predict clinically significant prostate cancer from prostate MRI. This work will primarily utilize T2-weighted imaging (T2WI), which is more robust than DWI and DCE, with higher resolution, a better signal-to-noise ratio, and more uniform acquisition across institutions. The models developed with this grant have the potential to reduce the rate of unnecessary prostate biopsies. This work has been funded by the RSNA R&E Foundation and the Society of Abdominal Radiology.
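As a rough illustration of the modeling task (the funded model's architecture and training data are not described here), the sketch below defines a small convolutional classifier that maps a T2-weighted image patch to a probability of clinically significant cancer; the network, input size, and label are placeholders.

```python
# Minimal sketch (not the funded model): a small CNN mapping a T2-weighted
# image patch to a probability of clinically significant prostate cancer.
# Input size, depth, and training details are placeholders.
import torch
import torch.nn as nn

class T2wiClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> 32 features
        )
        self.head = nn.Linear(32, 1)        # single logit: significant cancer vs. not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = T2wiClassifier()
patch = torch.randn(1, 1, 128, 128)          # one hypothetical T2WI patch
prob = torch.sigmoid(model(patch))           # predicted probability
loss = nn.BCEWithLogitsLoss()(model(patch), torch.tensor([[1.0]]))  # training loss
```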
Automatically Derived Volumetrics for Liver Transplants
Beck Olson, a Data Scientist in the Department of Radiology and Biomedical Imaging, is working with Jane Wang, MD, an expert in abdominal imaging, to develop neural networks that automatically derive volumetrics of the liver. UCSF performs approximately 200 liver transplants annually. Preparation for these surgeries has historically involved time-consuming, semi-automatic segmentation of the donor liver to determine whether there is sufficient organ volume to support positive outcomes for both donor and recipient. With full automation of this segmentation process on the horizon, Olson is now working with NVIDIA on a platform for delivering the results for clinical review.
Figure 2: Example Results. The green outline is the ground-truth segmentation of the right lobe and the green fill is the result from the trained segmentation model. The blue outline and fill are the corresponding regions for the left lobe.
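The volumetric step that follows segmentation is straightforward and can be sketched directly: each labeled voxel contributes its physical volume, computed from the scan's voxel spacing. The label map and spacing below are synthetic placeholders, not UCSF data.

```python
# Minimal sketch of deriving liver volumetrics from a segmentation mask:
# count labeled voxels and multiply by the physical voxel volume.
# The label map, lobe labels, and voxel spacing below are synthetic placeholders.
import numpy as np

voxel_spacing_mm = (0.8, 0.8, 3.0)                      # (x, y, z) spacing from the image header
voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0    # mm^3 -> mL

# Hypothetical label map: 0 = background, 1 = right lobe, 2 = left lobe.
labels = np.zeros((256, 256, 80), dtype=np.uint8)
labels[60:180, 60:200, 20:60] = 1
labels[60:120, 40:60, 20:50] = 2

right_lobe_ml = (labels == 1).sum() * voxel_volume_ml
left_lobe_ml = (labels == 2).sum() * voxel_volume_ml
print(f"Right lobe: {right_lobe_ml:.0f} mL, left lobe: {left_lobe_ml:.0f} mL, "
      f"total: {right_lobe_ml + left_lobe_ml:.0f} mL")
```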
Deep Learning Tools for Early Detection of Alzheimer's Disease
Jae Ho Sohn, MD, a Radiology clinical fellow, has combined deep learning techniques with brain imaging to discover changes in brain metabolism predictive of Alzheimer's disease (AD). Previous research had demonstrated a link between patterns of glucose uptake in the brain and the disease process, but biomarker discovery was lacking. Collaborating with Benjamin Franc, MD, Dr. Sohn trained a deep learning algorithm using more than 2,100 18F-fluorodeoxyglucose (FDG) PET scans from 1,002 patients – a dataset originating from the Alzheimer's Disease Neuroimaging Initiative. On an independent test set of 40 scans from 40 patients the model had never seen, the algorithm was able to predict every case that advanced to AD. Dr. Sohn's next steps could include a larger multi-institutional prospective study using the algorithm, as well as training the tool further to spot patterns associated with the accumulation of beta-amyloid and tau proteins, known AD biomarkers.

Saliency map of the deep learning model Inception V3 for the classification of Alzheimer disease. (a) A representative saliency map with anatomic overlay in a 77-year-old man. (b) Average saliency map over 10 percent of the Alzheimer's Disease Neuroimaging Initiative set. (c) Average saliency map over the independent test set. The closer a pixel's color is to the "High" end of the color bar, the more influence it has on the prediction of Alzheimer disease.
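A gradient-based saliency map of the kind shown in the figure can be sketched in a few lines: back-propagate the predicted class score to the input image and take the magnitude of the gradient at each pixel. In the sketch below, a tiny untrained CNN and random input stand in for the trained Inception V3 model and the FDG PET data.

```python
# Minimal sketch of a gradient-based saliency map: the magnitude of the
# gradient of the predicted class score with respect to each input pixel.
# A tiny untrained CNN stands in for the trained Inception V3 model, and the
# input is random data rather than an FDG PET image.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 3),             # 3 illustrative diagnostic classes
)
model.eval()

scan = torch.randn(1, 1, 64, 64, requires_grad=True)   # placeholder PET slice
scores = model(scan)
scores[0, scores.argmax()].backward()           # gradient of the top class score
saliency = scan.grad.abs().squeeze()            # per-pixel influence, shape (64, 64)
print(saliency.shape, float(saliency.max()))
```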
Integrating Genomics and Imaging to Predict Pain Progression
An interdisciplinary team led by Valentina Pedoia, PhD, and including Sharmila Majumdar, PhD, Benjamin Glicksberg, PhD, and Atul Butte, MD, PhD, is developing a multi-modal framework that integrates genomics, imaging, and clinical data into novel graph-based deep learning predictive models. These models extract latent multi-domain signatures of unique clinical progression trajectories and prognosticate a subject's future incidence of joint pain. While integrating genomics with imaging and clinical information will shed light on the contribution of heritability to joint pain and contribute to the discovery of novel structural and functional joint imaging pain biomarkers, inconsistencies between joint abnormalities and patient experience may still be observed. To explain these inconsistencies, the team is additionally incorporating neuroimaging metrics of pain perception. A better understanding of these mechanisms could not only explain the large intra- and inter-subject variability of pain perception, but may also be crucial to identifying different subtypes of progression that require subtype-specific treatment regimens. This project has the potential to discover new functional and objective biomarkers for pain perception and future pain incidence; to identify the "brain-joint crosstalk" mediated by genetic factors; to assess common and differentiating imaging and clinical signatures of previously identified progression phenotypes; and to assess to what extent patient-reported pain and pain progression subgroups can be identified by intrinsic functional brain connectivity features.
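The graph-based modeling itself is not described in implementation detail, but its first step, building a population graph in which similar subjects are connected (as in the knee OA network shown earlier), can be sketched simply. The feature matrix below is random placeholder data standing in for combined genomic, imaging, and clinical features, and the neighborhood size is arbitrary.

```python
# Minimal sketch of the population-graph idea behind graph-based modeling:
# connect each subject to its most similar subjects in a combined feature space.
# The features here are random placeholders for genomic/imaging/clinical data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features, k = 100, 20, 5
features = rng.normal(size=(n_subjects, n_features))    # placeholder multi-modal features

# Pairwise Euclidean distances between subjects.
dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)                          # no self-edges

# Adjacency matrix of a k-nearest-neighbor graph (symmetrized).
adjacency = np.zeros((n_subjects, n_subjects), dtype=bool)
nearest = np.argsort(dists, axis=1)[:, :k]
for i, neighbors in enumerate(nearest):
    adjacency[i, neighbors] = True
adjacency |= adjacency.T

print(f"{adjacency.sum() // 2} edges among {n_subjects} subjects")
# A graph neural network would then propagate information along these edges
# to predict each subject's future pain trajectory.
```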
Other Research
Active Projects
- Discovery of AAA progression predictors - Dimitris Mitsouras, PhD
- MRI-Derived Risk Maps for Prediction of Prostate Cancer Progression - Susan Noworolski, PhD
- Automated Classification of Images for Longitudinal Analysis - Jason Crane, PhD
- Prostate Cancer Bone Metastases - Matthew Bucknor, MD
- Automated Prescription for MR imaging - Eugene Ozhinsky, PhD
- Precision Psychiatry for Adolescent Depression - Olga Tymofiyeva, PhD
- Transfer Learning Basis Networks for Radiology - John Mongan, MD, PhD
- Histologically Validated Neuroimaging Atlas of the Locus Coeruleus - Lea Grinberg, MD, PhD
- Deep Learning Based Diagnosis of Kidney Tumors - Peder Larson, PhD & Jane Wang, MD
- Machine Learning Improvements for Simultaneous PET/MR Systems - Peder Larson, PhD & Thomas Hope, MD
- Proteomic and Imaging Predictors of Cognitive Impairment in HIV - Jared Narvid, MD & Lynn Pulliam, PhD