Explainable AI (XAI): Why Transparency in AI Matters for Malaysia


What Is Explainable AI (XAI)?

Explainable AI (XAI) refers to techniques and methods that make the decisions and outputs of artificial intelligence systems understandable to humans. Unlike black-box models that produce results without justification, XAI enables users — whether doctors, bankers, regulators, or teachers — to understand why an AI reached a particular conclusion. In contexts where AI affects lives and livelihoods, XAI is not optional; it is essential.

Malaysia’s accelerating AI adoption across healthcare, finance, education, and public administration makes XAI one of the most strategically important technology concepts for Malaysian organisations and policymakers to understand in 2025 and beyond.

Why Do Black-Box AI Models Pose Risks for Malaysia?

Black-box AI systems produce accurate predictions without explaining their reasoning — and this opacity creates serious risks in high-stakes domains. When a Malaysian bank uses a machine learning model to reject a loan application, the applicant has no basis to challenge the decision. When a hospital’s AI triage system deprioritises a patient, clinicians cannot verify the logic. Opaque AI undermines accountability, erodes public trust, and may violate Malaysia’s Personal Data Protection Act (PDPA) provisions on fair and transparent processing.

Three sectors in Malaysia face the highest black-box AI risk:

  • Healthcare: AI diagnostic tools used in public hospitals need clinical interpretability so doctors can validate AI suggestions rather than blindly follow them
  • Financial services: Bank Negara Malaysia’s Risk Management in Technology (RMiT) framework emphasises explainability in algorithmic credit decisions
  • Government and public services: AI used in welfare eligibility, law enforcement analytics, and recruitment must be auditable under public accountability principles

Where Did XAI Come From? A Brief History

The formal XAI research programme was launched by the US Defense Advanced Research Projects Agency (DARPA) in 2016, with a mandate to develop AI systems that could explain their reasoning to human operators. The concern was practical: US military personnel were deploying AI in life-or-death decision contexts and needed to understand and trust those systems.

Around the same period, foundational techniques emerged that are now widely deployed globally:

  • LIME (Local Interpretable Model-Agnostic Explanations): Explains individual predictions by fitting a simpler, interpretable model locally around a single data point
  • SHAP (SHapley Additive exPlanations): Uses game theory concepts to assign contribution scores to each input feature for a given prediction
  • Grad-CAM: Highlights regions of an image that activated a neural network’s classification — widely used in medical imaging
  • Counterfactual explanations: “If your income were RM 500 higher, the loan would have been approved” — actionable, human-readable outputs
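The counterfactual idea above can be sketched in a few lines of plain Python. This is a toy illustration, not a production recourse algorithm: the `credit_score` function, threshold, and RM step size are all hypothetical, and the search simply increases income until the decision flips.

```python
# Hypothetical linear credit scorer -- for illustration only.
def credit_score(income_rm, debt_rm):
    return 0.002 * income_rm - 0.001 * debt_rm

THRESHOLD = 6.0  # hypothetical approval cut-off

def income_counterfactual(income_rm, debt_rm, step=100, max_steps=100):
    """Find the smallest income increase (in RM steps) that flips
    a rejected application to approved, or None if not reachable."""
    for extra in range(0, step * max_steps + 1, step):
        if credit_score(income_rm + extra, debt_rm) >= THRESHOLD:
            return extra
    return None

extra = income_counterfactual(income_rm=2500, debt_rm=1000)
print(f"If your income were RM {extra} higher, the loan would be approved.")
```

Real counterfactual methods search over many features at once and optimise for the smallest plausible change, but the human-readable output has exactly this shape.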

How Does XAI Apply to Dr. Hariz’s Research at UPSI?

Dr. Muhamad Hariz Muhamad Adnan, Senior Lecturer and Acting Deputy Dean at UPSI’s Faculty of Computing and Meta-Technology, has dedicated a significant portion of his research programme to Explainable AI. His work examines XAI applications in education — specifically how AI systems that predict student performance or recommend learning interventions can be made interpretable enough for teachers and school administrators to act on responsibly.

His research addresses a broader gap in the Malaysian literature: most local AI studies focus on prediction accuracy rather than interpretability. Dr. Hariz’s supervision areas at UPSI include:

  • XAI frameworks for educational data mining
  • Teacher trust and XAI dashboard design
  • Post-hoc explainability for student dropout prediction models
  • Ethical AI governance for Malaysian EdTech platforms

Postgraduate students interested in these areas can find more information at drhariz.com.

What Is the Difference Between Interpretable AI and Explainable AI?

These terms are often used interchangeably, but a useful technical distinction exists. Interpretable AI refers to models that are inherently transparent by design — decision trees, linear regression, and rule-based systems are interpretable because their logic is directly readable. Explainable AI refers to the broader category, which includes both inherently interpretable models and post-hoc techniques that generate explanations for complex black-box models after the fact.

| Feature | Interpretable AI | Post-hoc XAI |
| --- | --- | --- |
| Model type | Decision tree, linear model, rule set | Neural network, gradient boosting, ensemble |
| Explanation timing | Built into the model | Generated after prediction |
| Accuracy trade-off | Often lower accuracy | No accuracy loss |
| Common tools | scikit-learn, Weka | SHAP, LIME, Captum, InterpretML |
| Best for | Regulated industries, audits | Complex models needing user trust |
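To make the distinction concrete: in an inherently interpretable model, the explanation *is* the model. The rule set below is a minimal sketch with entirely hypothetical thresholds — its logic can be audited line by line, with no separate explanation tool required.

```python
# A minimal, inherently interpretable loan rule set.
# All thresholds are hypothetical, for illustration only.
def approve_loan(monthly_income_rm, years_employed, missed_payments):
    """Return (decision, reason) -- the reason is the rule that fired."""
    if missed_payments > 2:
        return False, "More than 2 missed payments in the last year"
    if monthly_income_rm < 3000:
        return False, "Monthly income below RM 3,000"
    if years_employed < 1:
        return False, "Less than 1 year of continuous employment"
    return True, "All criteria met"

decision, reason = approve_loan(3500, 2, 1)
print(decision, reason)  # True All criteria met
```

A neural network trained on the same task might be more accurate, but it cannot return a `reason` this way — which is exactly where post-hoc techniques such as SHAP and LIME come in.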

What Are Malaysia’s AI Standards Saying About Explainability?

Malaysia’s MY-AI Standards framework, developed under MOSTI and aligned with ISO/IEC 42001 (AI Management Systems), identifies explainability as a core principle of responsible AI. Malaysian organisations deploying AI are expected to be able to provide explanations for AI-driven decisions upon request — particularly in consumer-facing financial products, healthcare diagnostics, and public sector services.

The MY-AI Standards align with the EU AI Act’s transparency requirements, reflecting Malaysia’s intent to maintain regulatory interoperability with global trading partners. Organisations that invest in XAI now are not just managing ethical risk — they are building future compliance capital.

Practical XAI Examples Malaysian Organisations Can Use Today

XAI is not purely academic. Here are concrete applications Malaysian organisations are implementing or should be exploring:

  • Credit scoring (banks and fintech): SHAP values surfaced to loan officers explain which applicant features drove a decision, enabling fair challenge procedures
  • HR recruitment: Explainable CV screening tools show recruiters which skills and qualifications drove candidate rankings, reducing unintentional bias
  • Medical imaging: Grad-CAM overlays on chest X-ray AI show radiologists exactly which tissue regions triggered a pneumonia or tumour classification
  • Agricultural yield prediction: Feature importance plots explain which soil parameters, weather data, or historical yields are driving AI forecasts — directly relevant to Malaysia’s precision agriculture initiatives
  • Student performance prediction: LIME-based explanations tell teachers which student behaviours (attendance, assignment submission, online activity) are flagging dropout risk
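The SHAP values mentioned in the credit-scoring example rest on a simple game-theoretic idea: a feature's contribution is its average marginal effect on the model output, taken over every order in which features could be "revealed". For a tiny model this can be computed exactly with the standard library — the sketch below is a from-scratch illustration of that idea, not the `shap` library, and the model and feature values are hypothetical.

```python
from itertools import permutations

# Toy scorer with an interaction term -- hypothetical, for illustration.
def model(income, tenure, late_payments):
    return 0.5 * income + 0.3 * tenure - 0.8 * late_payments + 0.1 * income * tenure

baseline = {"income": 0.0, "tenure": 0.0, "late_payments": 0.0}
instance = {"income": 2.0, "tenure": 1.0, "late_payments": 3.0}
features = list(instance)

def value(coalition):
    """Model output when only features in `coalition` take their real values."""
    x = {f: (instance[f] if f in coalition else baseline[f]) for f in features}
    return model(**x)

def shapley(feature):
    """Exact Shapley value: average marginal contribution over all orderings."""
    perms = list(permutations(features))
    total = 0.0
    for order in perms:
        before = set(order[: order.index(feature)])
        total += value(before | {feature}) - value(before)
    return total / len(perms)

phi = {f: shapley(f) for f in features}
print(phi)
# Efficiency property: attributions sum to model(instance) - model(baseline).
print(sum(phi.values()), value(set(features)) - value(set()))
```

Real SHAP implementations approximate this sum efficiently for models with many features, but the additivity guarantee — contributions summing exactly to the prediction's deviation from the baseline — is what makes the scores defensible in a challenge procedure.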

What Are the Limitations of Current XAI Techniques?

XAI is a powerful and rapidly evolving field, but Malaysian practitioners should be aware of its current limitations. Post-hoc explanations generated by LIME and SHAP are approximations — they describe model behaviour locally or globally but are not perfect representations of the model’s internal logic. Two different XAI methods applied to the same model can sometimes produce inconsistent explanations, which creates interpretive challenges for non-expert users.

Additionally, there is an inherent tension in XAI between the depth of explanation and the user’s capacity to act on it. A technically accurate SHAP explanation showing 15 contributing features is not useful to a loan officer who needs a one-sentence rationale. Effective XAI implementation in Malaysian organisations requires both good technical explainability tools and good user interface design that translates raw explainability outputs into actionable, role-appropriate communication. This human-centred dimension of XAI is a core focus of Dr. Muhamad Hariz Muhamad Adnan’s research at UPSI, where he examines how teachers and educational administrators interact with AI explanations in practice.
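One simple design pattern for that translation step is to keep only the strongest drivers and phrase them in role-appropriate language. The sketch below assumes a dictionary of SHAP-style attributions (the feature names and values are hypothetical) and reduces it to the one-sentence rationale a loan officer actually needs.

```python
# Hypothetical SHAP-style attributions for one loan decision.
attributions = {
    "missed_payments": -0.42,
    "monthly_income": +0.18,
    "loan_tenure": -0.07,
    "age": +0.03,
    "postcode": -0.02,
}

def one_line_rationale(attributions, k=2):
    """Keep only the k strongest drivers, phrased for a non-technical reader."""
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    parts = [f"{name} ({'raised' if v > 0 else 'lowered'} the score)" for name, v in top]
    return "Main factors: " + " and ".join(parts) + "."

print(one_line_rationale(attributions))
# Main factors: missed_payments (lowered the score) and monthly_income (raised the score).
```

Choosing `k`, the wording, and which features are even permitted to appear in the rationale are interface-design decisions, not modelling decisions — which is the human-centred point made above.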

How Can Malaysian Professionals Learn XAI?

Accessible XAI learning pathways in Malaysia include structured corporate training programmes, university postgraduate modules, and self-directed online courses. Dr. Muhamad Hariz Muhamad Adnan, an HRD Corp Certified AI Trainer based in Malaysia, delivers XAI workshops tailored to corporate and public sector audiences — covering SHAP, LIME, and ethical AI governance in a practical, non-mathematical format accessible to decision-makers and data teams alike.

HRD Corp registered employers can claim training costs under the SBL-Khas scheme. For details on available programmes, visit drhariz.com or browse related articles on Dr. Hariz’s blog.

Frequently Asked Questions

Is explainable AI required by Malaysian law?

Currently, no single Malaysian law mandates XAI across all sectors. However, Bank Negara Malaysia’s RMiT framework, the PDPA’s fairness provisions, and the emerging MY-AI Standards all create implicit explainability obligations for organisations in financial services, healthcare, and public administration. Regulatory expectations are tightening, making proactive XAI adoption strategically important.

Does using XAI techniques reduce an AI model’s accuracy?

Post-hoc XAI techniques like SHAP and LIME do not modify the underlying model and therefore do not reduce its accuracy. They generate explanations separately, after predictions are made. Switching from a complex model to an inherently interpretable one may involve a modest accuracy trade-off, but for many real-world applications this trade-off is justified by the trust and compliance benefits.

What is the difference between XAI and responsible AI?

Responsible AI is the broader umbrella concept covering fairness, accountability, transparency, privacy, and safety in AI systems. XAI specifically addresses the transparency and accountability dimensions by making AI decisions understandable. Explainability is one component of responsible AI — necessary but not sufficient on its own to ensure a system is ethical or fair.

Can small Malaysian businesses benefit from XAI?

Yes. Even SMEs using off-the-shelf AI tools — credit risk platforms, marketing analytics, or customer churn prediction — benefit from understanding which factors are driving AI outputs. This understanding improves decision quality and helps business owners catch errors before they cause customer harm or regulatory issues. Simple tools like SHAP are freely available and require no specialist infrastructure.

Who is leading XAI research in Malaysia?

XAI research in Malaysia is growing across multiple universities. At UPSI, Dr. Muhamad Hariz Muhamad Adnan leads XAI research with a focus on educational applications and ethical AI governance. Other active groups include researchers at UTM, UKM, and UPM working on XAI for healthcare and financial applications. The field remains relatively uncrowded, offering strong publication opportunities.

Dr. Muhamad Hariz Muhamad Adnan is a Senior Lecturer and Acting Deputy Dean at Universiti Pendidikan Sultan Idris (UPSI), HRD Corp Certified AI Trainer, and digital transformation consultant. For AI training or postgraduate supervision enquiries, visit drhariz.com or read more on his blog.

Dr. Muhamad Hariz

He specialises in Artificial Intelligence (AI) Driven Digital Transformation in Education and Technopreneurship. He holds a Doctor of Philosophy (PhD) in Information Technology from Universiti Teknologi Petronas, a Master of Science (Computer Science) from Universiti Sains Malaysia, and a Bachelor of Computer Science from the same institution. He has supervised multiple postgraduate students and actively participates in research on AI applications in education and digital transformation. Email: mhariz@meta.upsi.edu.my
