September 29, 2020
Feature

Machine Learning Scientists Teach Computers to Read X-Ray Images

Researchers partner with international charity to improve orthopedic surgery outcomes

Artistic Hand X-Ray

PNNL researchers used machine learning to develop a tool that identifies orthopedic implants in X-ray images. These implants are used to treat fractures in patients in developing countries without the need for real-time X-ray machines in the operating room.

Donald Jorgensen | PNNL

A busy street full of motorized vehicles in Cameroon
The increasing use of motorized vehicles, such as these in Cameroon, is the dominant cause of injury in low- and middle-income countries. Photo courtesy of Edouard TAMBA on Unsplash

If a person in the developing world severely fractures a limb, they face an impossible choice. An improperly healed fracture could mean a lifetime of pain, but lengthy healing time in traction or a bulky cast results in immediate financial hardship.

Pacific Northwest National Laboratory (PNNL) machine learning scientists leaped into action when they learned they could help a local charity whose treatments allow patients in the developing world to walk within one week of surgery—even when fractures are severe.

For more than 20 years, the Richland, Washington-based charity SIGN Fracture Care has pioneered orthopedic care, including training and innovatively designed implants that speed healing without real-time operating room X-ray machines. During those 20 years, they’ve built a database of 500,000 procedure images and outcomes that serves as a learning hub for doctors around the world. Now, PNNL’s machine learning scientists have developed computer vision tools to identify surgical implants in the images, making it easier to sort through the database and improve surgical outcomes.

Uniting worldwide medical data

Examples of some of the over 500,000 images in the SIGN database.
Examples of some of the over 500,000 images in the SIGN database. The images vary in quality and include a mixture of X-rays, surgery photographs, and other photos. While the database contains a wealth of information, the 20+ years of images do not consistently provide identifying information, such as the number of screws in an image, if a plate was used in the surgery, or if the image is from before or after an operation.

The partnership between PNNL and SIGN was born when data scientist Chitra Sivaraman struck up a conversation with a SIGN employee during a volunteer event. In their day jobs, Sivaraman and her team members have used machine learning to automatically identify clouds and assess the quality of sensor data, so she immediately understood how machine learning techniques could make quick work of finding trends in the half million images in SIGN's database.

Sivaraman recruited a multidisciplinary team and applied for funding through Quickstarter, a PNNL program where staff vote to award internal funding to worthy projects that stretch beyond some of PNNL’s core capabilities.

"It was funded so fast, I wished I'd asked for more!" Sivaraman said. "I think my colleagues were excited by the opportunity for PNNL's machine learning scientists to use their image classification knowledge to solve a real-world problem for a great cause."

Computational chemist Jenna Pope joined the team, followed by Edgar Ramirez, a Washington State University intern with aspirations to attend medical school. Together, they harnessed deep learning techniques to address the database’s biggest challenge: a huge variety of image types and quality.

Supervising the computer’s learning

Most of the time, when scientists develop deep learning techniques, they work from a curated set of images with consistent size and orientation. SIGN’s database, by contrast, contained many useful images that conformed to no standard.

First, the team had to teach the computer to distinguish between photos of people and images of X-rays. This was tricky: in addition to multiple pictures of patients, busy doctors upload photos of X-rays in no standard orientation, and some of those photos include distractions, such as the clinic in the background.

Lacking good initial examples, the team had to teach the computer to focus on the implants and avoid errors like mistaking the fingers holding the X-ray image for one of the implant's screws.
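The article does not describe PNNL’s model in detail, but this first step—sorting clinic photos from X-ray images—is a standard image classification task. The sketch below shows one way such a classifier could be set up, assuming a transfer-learning approach in PyTorch/torchvision; the framework, folder layout, and training settings are illustrative, not PNNL’s published code.

    # A minimal sketch, not PNNL's published code: fine-tune a pretrained
    # CNN to separate clinic photos from X-ray images. The PyTorch/torchvision
    # framework and folder layout are illustrative assumptions.
    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    # Resize and normalize so uploads of mixed sizes share one input format.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder layout: train/photo/*.jpg and train/xray/*.jpg
    dataset = datasets.ImageFolder("train", transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    # Start from ImageNet weights and retrain only the final layer,
    # a common tactic when labeled examples are scarce.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: photo, X-ray

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()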

Two X-ray images, one with an implant identified using machine learning.
The image on the right shows how the trained implant detection model correctly identified the nail and screws in an unmarked X-ray image.

Once the team had enough usable images, SIGN helped them annotate the implants. The team trained the computer model to detect different implants by drawing bounding boxes around the implant components in 300 images.

It was painstaking work, but it paid off. Because the model learned what to look for in those 300 images, it could reliably identify the nails, screws, and plates in implants across a larger selection of database images.
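For readers curious what training on bounding boxes looks like in practice, the sketch below fine-tunes an off-the-shelf detector on box annotations like those SIGN helped create. The Faster R-CNN architecture, class indices, and training settings are assumptions for illustration; the article does not name the model PNNL used.

    # A minimal sketch, not PNNL's published code: fine-tune an
    # off-the-shelf detector on a small set of bounding-box annotations.
    # The Faster R-CNN architecture is an assumption for illustration.
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    NUM_CLASSES = 4  # background + nail + screw + plate

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the stock classification head for one sized to the implant classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad],
        lr=0.005, momentum=0.9, weight_decay=0.0005,
    )

    # One illustrative annotated example; real training would iterate
    # over all ~300 hand-annotated X-rays.
    images = [torch.rand(3, 512, 512)]
    targets = [{
        "boxes": torch.tensor([[100.0, 150.0, 140.0, 400.0]]),  # [x1, y1, x2, y2]
        "labels": torch.tensor([1]),  # hypothetical class index: 1 = nail
    }]

    # In training mode the model returns its detection losses directly.
    model.train()
    loss_dict = model(images, targets)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()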

More applications for computer vision

Next, Sivaraman and her team would like to train their tool to recognize the image’s quality and automatically prompt a doctor to upload a usable image. Currently, SIGN’s founder, Dr. Zirkle, manually approves hundreds of images a day. Automating database image approval would free up time for him to focus on teaching or other tasks.
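The article does not say how such a quality gate would work. Even a simple heuristic illustrates the idea, though: the sketch below flags blurry uploads using the variance of the Laplacian, a standard sharpness measure; the threshold is a hypothetical tuning value, not part of PNNL’s tool.

    # A simple heuristic, not PNNL's method: flag blurry uploads using
    # the variance of the Laplacian, a standard sharpness measure.
    import cv2

    BLUR_THRESHOLD = 100.0  # hypothetical cutoff; would be tuned on real uploads

    def is_acceptably_sharp(path: str) -> bool:
        """Return False if the image is likely too blurry to approve."""
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if image is None:
            return False  # unreadable file: prompt for a re-upload
        sharpness = cv2.Laplacian(image, cv2.CV_64F).var()
        return sharpness >= BLUR_THRESHOLD

    if not is_acceptably_sharp("upload.jpg"):
        print("Image appears blurry; please retake and upload again.")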

Pre-operation and post-operation X-rays
Doctors upload both pre-operation and post-operation X-rays to SIGN’s database, which means not all images display implants. PNNL’s current research runs post-operation X-rays through the hardware detection model to determine the hardware properties. Future work could study pre-operation X-rays to sort for bone type and fracture location.

The goal of orthopedic surgeons throughout the world is to enable fracture healing, and many variables factor into evaluating not only whether a fracture has healed, but whether it will heal. Eventually, PNNL’s machine learning scientists could expand the tool to measure other, non-X-ray healing indicators, or refine it to sort pre-operation X-rays by bone type and fracture location, helping doctors to more quickly identify the procedures that lead to better outcomes.

The work with SIGN’s database draws on PNNL’s expertise in creating machine learning algorithms that can accurately classify large data collections using very few examples. That expertise includes computer vision techniques that look for cancer in diagnostic images or detect toxic pathogens in soil, with many other potential applications for national security. This project demonstrates PNNL’s technical capability to classify X-ray images in support of research in national security, materials science, and biomedical sciences.

The results of the collaboration between PNNL and SIGN are available online in the open-access Journal of Medical Artificial Intelligence, a journal solely focused on ways to use machine learning in the medical field. To read about SIGN’s success stories, please go to https://signfracturecare.org/success-stories/.

###

About PNNL

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL's News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.