Automated Analysis of Multi-Modal Medical Data using Deep Belief Networks


Jacinto Nascimento, Gustavo Carneiro


This project has led to several publications. Recently, magnetic resonance and ultrasound imaging have found utility as adjuncts to mammography in the detection and management of breast cancer. This project develops novel machine learning techniques that optimally integrate information from each of these data sources so as to improve the efficiency and accuracy of breast cancer diagnosis.


We propose a novel training approach inspired by how radiologists are trained. In particular, we explore meta-training, which builds a classifier over a series of tasks. Tasks are selected using teacher-student curriculum learning, where each task is a simple classification problem with a small training set.
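The teacher-student curriculum idea can be sketched as follows. This is a minimal illustration under assumed details (toy 2D tasks, a logistic-regression "student", and a learning-progress heuristic for the "teacher"), not the project's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(separation):
    """Hypothetical toy task: a small binary classification set whose
    difficulty is controlled by the class separation (an assumption
    for illustration, not the project's real task definition)."""
    n = 20  # small training set, as in the curriculum description
    X = np.vstack([rng.normal(-separation, 1.0, (n, 2)),
                   rng.normal(+separation, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

class Student:
    """Logistic-regression 'student' updated by plain gradient steps."""
    def __init__(self):
        self.w = np.zeros(2)
        self.b = 0.0
    def _p(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))
    def fit_step(self, X, y, lr=0.5):
        p = self._p(X)
        self.w -= lr * X.T @ (p - y) / len(y)
        self.b -= lr * np.mean(p - y)
    def score(self, X, y):
        return float(np.mean((self._p(X) > 0.5) == y))

# Teacher-student curriculum: the teacher samples the next task in
# proportion to the student's recent learning progress on it.
tasks = [make_task(s) for s in (2.0, 1.0, 0.5)]  # easy -> hard
progress = np.ones(len(tasks))                   # optimistic init
last_score = np.zeros(len(tasks))
student = Student()
for step in range(200):
    t = rng.choice(len(tasks), p=progress / progress.sum())
    X, y = tasks[t]
    student.fit_step(X, y)
    s = student.score(X, y)
    progress[t] = 0.9 * progress[t] + 0.1 * abs(s - last_score[t]) + 1e-3
    last_score[t] = s

print([round(student.score(*task), 2) for task in tasks])
```

The progress-weighted sampling steers training toward tasks where the student is still improving, mirroring how a curriculum moves from simple to harder cases.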

What is this project about?

This project aims to develop an improved breast cancer computer-aided diagnosis (CAD) system that incorporates text, mammography (MG), ultrasound (US) and magnetic resonance imaging (MRI), using deep belief networks (DBN). This system optimally integrates information from any combination of these data sources, so as to improve the efficiency, sensitivity and specificity of breast cancer diagnosis.
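One common way to fuse modalities with a DBN is to train one restricted Boltzmann machine (RBM) per modality and a joint RBM over the concatenated hidden codes. The sketch below illustrates that idea with a minimal Bernoulli RBM trained by one-step contrastive divergence (CD-1); the binary "features" for the two modalities are synthetic placeholders, and none of this is the project's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1),
    the standard greedy building block of a deep belief network."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0, 0.1, (n_vis, n_hid))
        self.bv = np.zeros(n_vis)
        self.bh = np.zeros(n_hid)
    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.bh)
    def cd1_step(self, v0, lr=0.1):
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.bv)   # reconstruction
        h1 = self.hidden_probs(v1)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.bv += lr * np.mean(v0 - v1, axis=0)
        self.bh += lr * np.mean(h0 - h1, axis=0)

# Hypothetical binary features for two modalities (e.g. MG and US);
# real inputs would be image data and clinical-record encodings.
mg = (rng.random((64, 16)) < 0.5).astype(float)
us = (rng.random((64, 12)) < 0.5).astype(float)

# One RBM per modality, then a joint RBM over the concatenated codes.
rbm_mg, rbm_us = RBM(16, 8), RBM(12, 8)
for _ in range(50):
    rbm_mg.cd1_step(mg)
    rbm_us.cd1_step(us)
joint_in = np.hstack([rbm_mg.hidden_probs(mg), rbm_us.hidden_probs(us)])
rbm_joint = RBM(16, 10)
for _ in range(50):
    rbm_joint.cd1_step(joint_in)

shared = rbm_joint.hidden_probs(joint_in)  # fused representation
print(shared.shape)
```

Because each modality has its own first-layer RBM, a missing modality can in principle be handled by inferring its hidden code from the joint layer, which is what makes this architecture attractive for "any combination" of inputs.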


Using multi-modal data and deep belief network techniques, this CAD system aims to:

  • Automatically detect and segment suspicious regions from different breast imaging data, e.g., MG, US and MRI;
  • Estimate BI-RADS scores (from 0 to 6) from the segmentation and the patient’s clinical records;
  • Retrieve similar cases from our database, given the above estimation results;
  • Automatically extract relevant features from any combination of input data types.
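The retrieval step above can be sketched as a nearest-neighbour query over stored cases, filtered by the estimated BI-RADS score. The database schema and similarity measure below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical case database: each past case stores a fused feature
# vector and its recorded BI-RADS score (schema assumed for this sketch).
db_feats = rng.random((100, 10))
db_birads = rng.integers(0, 7, 100)  # BI-RADS scores 0..6

def retrieve(query_feat, query_birads, k=5):
    """Return indices of the k most similar cases by cosine similarity,
    restricted to cases whose BI-RADS score is within 1 of the estimate."""
    idx = np.flatnonzero(np.abs(db_birads - query_birads) <= 1)
    q = query_feat / np.linalg.norm(query_feat)
    F = db_feats[idx] / np.linalg.norm(db_feats[idx], axis=1, keepdims=True)
    sims = F @ q
    return idx[np.argsort(-sims)[:k]]

hits = retrieve(rng.random(10), query_birads=4)
print(len(hits))
```

Conditioning retrieval on the BI-RADS estimate keeps the returned cases clinically comparable, so the radiologist sees precedents at a similar assessment level rather than merely visually similar images.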