Abstract
Associating specific gene activity with functional locations in the brain deepens our understanding of a gene's role. To perform such an association for the more than 20,000 genes in the mammalian genome, reliable automated methods are required that characterize the distribution of gene expression relative to a standard anatomical model. In this paper, we propose a new automatic method that segments gene expression images into distinct anatomical regions in which expression can be quantified and compared across images. Our contribution is a novel hybrid atlas that combines a statistical shape model based on a subdivision mesh, texture differentiation at region boundaries, and features of anatomical landmarks to delineate the boundaries of anatomical regions in gene expression images. This atlas, which provides a common coordinate system for internal brain data, was trained on 36 images manually annotated by neuroanatomists and tested on 64 images. Our framework achieved a mean overlap ratio of up to 91 ± 7% on this challenging dataset. This tool for large-scale annotation will help scientists interpret gene expression patterns more efficiently.
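The abstract does not define the overlap ratio it reports. Purely as an illustration, the short sketch below assumes a Dice-style ratio between an automatically segmented region and the corresponding manually annotated region; the function name overlap_ratio and the use of NumPy are our own choices, not anything specified by the paper.

import numpy as np

def overlap_ratio(auto_mask, manual_mask):
    # Dice-style overlap between an automatic and a manual binary region mask
    # (assumed form of the metric; the paper's exact definition may differ).
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    intersection = np.logical_and(auto, manual).sum()
    total = auto.sum() + manual.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: two 100x100 masks that mostly agree.
a = np.zeros((100, 100), dtype=bool); a[20:80, 20:80] = True
b = np.zeros((100, 100), dtype=bool); b[25:85, 20:80] = True
print(round(overlap_ratio(a, b), 2))

A value near 1.0 indicates close agreement with the manual annotation, which is how an average figure such as 91 ± 7% over many regions and images can be read.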
Revised: August 1, 2007 | Published: May 1, 2007
Citation
Bello M., T. Ju, J.P. Carson, J. Warren, W. Chiu, and I. Kakadiaris. 2007. "Learning-based Segmentation Framework for Tissue Images Containing Gene Expression Data." IEEE Transactions on Medical Imaging 26, no. 5: 728-744. PNNL-SA-52381. doi:10.1109/TMI.2007.895462