AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer’s disease

dc.authorid: 0000-0001-7822-1549
dc.contributor.author: Akan, Taymaz
dc.contributor.author: Alp, Sait
dc.contributor.author: Ledbetter, Christina Raye
dc.contributor.author: Bhuiyan, Mohammad Alfrad Nobel
dc.date.accessioned: 2025-09-18T09:25:32Z
dc.date.available: 2025-09-18T09:25:32Z
dc.date.issued: 2025
dc.department: Faculties, Faculty of Engineering, Department of Computer Engineering
dc.description.abstract: Early and accurate Alzheimer’s disease (AD) diagnosis is critical for effective intervention, but it remains challenging due to the slow and complex progression of neurodegeneration. Recent studies in brain imaging analysis have highlighted the crucial role of deep learning techniques in computer-assisted interventions for diagnosing brain diseases. In this study, we propose AlzFormer, a novel deep learning framework based on a space–time attention mechanism, for multiclass classification of AD, mild cognitive impairment (MCI), and cognitively normal (CN) individuals using structural MRI scans. Unlike conventional deep learning models, we use spatiotemporal self-attention to model inter-slice continuity by treating T1-weighted MRI volumes as sequential inputs, where slices correspond to video frames. Our model was fine-tuned and evaluated using 1.5 T MRI scans from the ADNI dataset. To ensure anatomical consistency, all MRI volumes were pre-processed with skull stripping and spatial normalization to MNI space. AlzFormer achieved an overall accuracy of 94% on the test set, with balanced class-wise F1-scores (AD: 0.94, MCI: 0.99, CN: 0.98) and a macro-average AUC of 0.98. We also used attention-map analysis to identify clinically significant patterns, particularly emphasizing subcortical structures and medial temporal regions implicated in AD. These findings demonstrate the potential of transformer-based architectures for robust and interpretable classification of brain disorders using structural MRI.
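The core idea described in the abstract (the slices of a T1-weighted volume stacked like video frames and processed with divided space-time self-attention) can be illustrated with a short PyTorch sketch. This is not the authors' implementation: the class names, input resolution, slice count, embedding size, and depth below are illustrative assumptions, the sketch uses single-channel slices and trains from scratch rather than fine-tuning a pretrained video backbone, and the three output classes correspond to CN, MCI, and AD as in the paper.

# Illustrative sketch only (not the authors' code): an MRI volume's axial
# slices are stacked like video frames and processed with divided
# space-time self-attention in the spirit of TimeSformer.
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    """Temporal attention across slices, then spatial attention across
    patches within each slice, then an MLP, each with a residual."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.norm_t = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_f = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (batch, slices, patches, dim)
        b, t, p, d = x.shape
        # Temporal attention: each patch position attends across slices.
        xt = self.norm_t(x.permute(0, 2, 1, 3).reshape(b * p, t, d))
        x = x + self.attn_t(xt, xt, xt)[0].reshape(b, p, t, d).permute(0, 2, 1, 3)
        # Spatial attention: patches within each slice attend to one another.
        xs = self.norm_s(x.reshape(b * t, p, d))
        x = x + self.attn_s(xs, xs, xs)[0].reshape(b, t, p, d)
        return x + self.mlp(self.norm_f(x))


class SliceVideoClassifier(nn.Module):
    """Patch-embed each slice, add spatial and slice-position embeddings,
    apply divided space-time blocks, and classify into CN / MCI / AD."""

    def __init__(self, image_size=128, patch_size=16, n_slices=32,
                 dim=256, depth=4, n_classes=3):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_space = nn.Parameter(torch.zeros(1, 1, n_patches, dim))
        self.pos_time = nn.Parameter(torch.zeros(1, n_slices, 1, dim))
        self.blocks = nn.ModuleList([DividedSpaceTimeBlock(dim) for _ in range(depth)])
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, n_classes))

    def forward(self, volume):  # volume: (batch, slices, 1, height, width)
        b, t, c, h, w = volume.shape
        x = self.patch_embed(volume.reshape(b * t, c, h, w))  # (b*t, dim, h/ps, w/ps)
        x = x.flatten(2).transpose(1, 2)                      # (b*t, patches, dim)
        x = x.reshape(b, t, x.shape[1], x.shape[2])           # (b, slices, patches, dim)
        x = x + self.pos_space + self.pos_time
        for block in self.blocks:
            x = block(x)
        return self.head(x.mean(dim=(1, 2)))                  # (b, n_classes) logits


if __name__ == "__main__":
    model = SliceVideoClassifier()
    volume = torch.randn(2, 32, 1, 128, 128)  # two volumes, 32 slices each
    print(model(volume).shape)                # torch.Size([2, 3])

Splitting attention into separate temporal and spatial passes avoids the quadratic cost of joint attention over all slice-patch tokens, which is the property that makes TimeSformer-style models practical for volumetric inputs.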
dc.identifier.citation: Akan, T., Alp, S., Ledbetter, C. R., & Bhuiyan, M. A. N. (2025). AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer’s disease. Neuroscience, 585, 133–143.
dc.identifier.doi: 10.1016/j.neuroscience.2025.08.062
dc.identifier.endpage: 143
dc.identifier.pmid: 40912354
dc.identifier.startpage: 133
dc.identifier.uri: https://hdl.handle.net/20.500.12941/319
dc.identifier.volume: 585
dc.indekslendigikaynak: PubMed
dc.institutionauthor: Akan, Sara
dc.institutionauthorid: 0000-0001-7822-1549
dc.language.iso: en
dc.publisher: Elsevier
dc.relation.ispartof: Neuroscience
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Alzheimer’s disease; TimeSformer; Spatiotemporal attention; Deep learning
dc.title: AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer’s disease
dc.title.alternative: for the Alzheimer’s Disease Neuroimaging Initiative
dc.type: Article

Files

Original bundle
Name: Sara Akan.pdf
Size: 13.63 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.17 KB
Description: Item-specific license agreed upon to submission