A collection of my academic research, technical papers, and final thesis documentation.

Explainable machine learning helps clinicians understand risk predictions from clinical AI systems and supports better-informed medical decisions. SHAP produces interpretable feature importance scores, but how stable these explanations remain under data resampling has received little study. This study investigates feature attribution stability in maternal health risk prediction using a clinical dataset of 8,817 patient records. Three classification algorithms were evaluated with 5-fold stratified cross-validation, and a numerical Attribution Reliability Index (ARI) was developed to quantify explanation stability.
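The paper does not reproduce the ARI formula here, but the idea of scoring explanation stability across folds can be sketched as follows. This is an illustrative assumption, not the study's actual definition: it treats the ARI as the mean pairwise Spearman rank correlation between per-fold feature-importance vectors (the function names and the toy data are hypothetical).

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation computed as Pearson correlation on ranks
    (assumes no ties, which holds for real-valued importance scores)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / (np.linalg.norm(ra) * np.linalg.norm(rb)))

def attribution_reliability_index(fold_importances):
    """Mean pairwise Spearman correlation across k folds, in [-1, 1]:
    values near 1 mean feature rankings barely change between folds."""
    k = len(fold_importances)
    pairs = [spearman(fold_importances[i], fold_importances[j])
             for i in range(k) for j in range(i + 1, k)]
    return sum(pairs) / len(pairs)

# Toy example: 5 folds, 6 features, with small per-fold perturbations,
# so the feature ranking is essentially stable and the index is near 1.
rng = np.random.default_rng(0)
base = np.array([0.30, 0.25, 0.20, 0.12, 0.08, 0.05])
folds = [base + rng.normal(0, 0.005, size=6) for _ in range(5)]
print(round(attribution_reliability_index(folds), 3))
```

In practice `fold_importances` would hold mean absolute SHAP values per feature from each cross-validation fold; unstable rankings drive the index toward 0.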

Scalp diseases such as psoriasis, alopecia areata, and seborrheic dermatitis are hard to diagnose because they present similarly. This research applies deep ensemble learning to detect acute scalp disease, paired with an explainable AI (XAI) system that categorizes scalp conditions, assesses health risks, and generates natural-language medical reports. The best-performing model, the H-04 Ensemble (an EfficientNet-B4 + ViT-B/16 fusion), attained an accuracy of 96.2%. The framework combines SHAP and Grad-CAM with a large language model (FLAN-T5) to produce descriptive clinical summaries.
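The exact fusion scheme used in the H-04 Ensemble is not detailed here; a minimal sketch, assuming a generic late-fusion design, is shown below. It combines the class probabilities of a CNN branch (EfficientNet-B4) and a ViT branch (ViT-B/16) with a fusion weight `alpha` (the function names and toy logits are hypothetical, and the real model may fuse features rather than probabilities).

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over class logits."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_predictions(cnn_logits, vit_logits, alpha=0.5):
    """Late fusion: weighted average of per-branch class probabilities.
    alpha weights the CNN branch; (1 - alpha) weights the ViT branch."""
    p = alpha * softmax(cnn_logits) + (1 - alpha) * softmax(vit_logits)
    return p.argmax(axis=-1), p

# Toy example: 3 scalp-condition classes, batch of 2 images.
cnn = np.array([[2.0, 0.1, -1.0], [0.2, 1.5, 0.3]])
vit = np.array([[1.8, 0.3, -0.5], [0.1, 1.9, 0.2]])
labels, probs = fuse_predictions(cnn, vit, alpha=0.6)
print(labels)  # predicted class index per image
```

The fused probability vector is also what SHAP and Grad-CAM would explain downstream, before FLAN-T5 turns the predictions into a clinical summary.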