Show simple item record

dc.contributor.advisor: Dhou, Salam
dc.contributor.author: Alshrbaji, Mohammad Nabeel Mohammad
dc.date.accessioned: 2023-02-28T09:25:27Z
dc.date.available: 2023-02-28T09:25:27Z
dc.date.issued: 2022-12
dc.identifier.other: 35.232-2022.46
dc.identifier.uri: http://hdl.handle.net/11073/25164
dc.description: A Master of Science thesis in Biomedical Engineering by Mohammad Nabeel Mohammad Alshrbaji entitled, “Fluoroscopic 3D Images Generation Using A GAN Method”, submitted in December 2022. Thesis advisor is Dr. Salam Dhou. Soft copy is available (Thesis, Completion Certificate, Approval Signatures, and AUS Archives Consent Form).
dc.description.abstract: Respiratory motion is a major source of error in radiation therapy for thoracic and lung cancer. Image-Guided Radiation Therapy (IGRT) is the use of frequent imaging during radiation therapy to improve the precision and accuracy of treatment delivery. Cone Beam Computed Tomography (CBCT) is an imaging modality used in radiotherapy delivery rooms to account for respiratory motion uncertainties. A single CBCT scan produces a set of 2D projection images acquired over a couple of minutes while the patient breathes normally. The standard procedure is to sort these projections into breathing phases and reconstruct a 3D image from each phase, yielding a 4D-CBCT image. These 4D-CBCT images are then used to estimate fluoroscopic images immediately before treatment delivery. The objective of this work is to generate 3D fluoroscopic images of the lungs from two orthogonal projections using a Generative Adversarial Network (GAN). A GAN is a powerful deep learning technique in which two independent models, the generator (G) and the discriminator (D), are trained in competition with each other. This method has the potential to generate a 3D volume representing the patient anatomy at the time of treatment delivery from any two orthogonal projections. Using multiple pairs of projections from multiple pairs of angles, the model can generate 3D fluoroscopic images that capture the respiratory motion while the patient is in the treatment position. The generated 3D images are of high quality and represent the same breathing phase as the input orthogonal projections. Quantitatively, the method achieved a Peak Signal-to-Noise Ratio (PSNR) of 26.01 and a Structural Similarity Index (SSIM) of 0.572.
Although the proposed method is not intended to replace existing motion model-based methods, it aims to enable niche applications in radiation therapy, such as tumor tracking and dose verification.
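The abstract reports image quality as PSNR and SSIM. As an illustration only (not code from the thesis), the sketch below computes PSNR and a simplified global-statistics SSIM in NumPy; the function names, the `[0, 1]` intensity range, and the global (non-windowed) SSIM variant are assumptions for this example.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

def ssim_global(ref, test, data_range=1.0):
    """Simplified SSIM from global image statistics. The SSIM usually
    reported (and likely used in the thesis) averages this quantity over
    local windows instead of using one global estimate."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov_xy = np.mean((ref - mu_x) * (test - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Example: compare a synthetic image against a noisy copy of itself.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
print(f"PSNR: {psnr(img, noisy):.2f} dB, SSIM: {ssim_global(img, noisy):.3f}")
```

Higher PSNR and an SSIM closer to 1.0 indicate a generated image closer to the reference, which is how the reported values of 26.01 and 0.572 should be read.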
dc.description.sponsorship: College of Engineering
dc.description.sponsorship: Multidisciplinary Programs
dc.language.iso: en_US
dc.relation.ispartofseries: Master of Science in Biomedical Engineering (MSBME)
dc.subject: Radiotherapy
dc.subject: CBCT
dc.subject: Cone Beam Computed Tomography (CBCT)
dc.subject: GAN
dc.subject: Generative Adversarial Network (GAN)
dc.subject: 3D Fluoroscopy
dc.subject: Respiratory Motion
dc.title: Fluoroscopic 3D Images Generation Using A GAN Method
dc.type: Thesis


