{"id":7754,"date":"2026-04-13T09:00:00","date_gmt":"2026-04-13T01:00:00","guid":{"rendered":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/"},"modified":"2026-05-03T17:49:39","modified_gmt":"2026-05-03T09:49:39","slug":"explainable-ai-xai-malaysia","status":"publish","type":"post","link":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/","title":{"rendered":"Explainable AI (XAI): Why Transparency in AI Matters for Malaysia"},"content":{"rendered":"<h1>Explainable AI (XAI): Why Transparency in AI Matters for Malaysia<\/h1>\n<h2>What Is Explainable AI (XAI)?<\/h2>\n<p>Explainable AI (XAI) refers to techniques and methods that make the decisions and outputs of artificial intelligence systems understandable to humans. Unlike black-box models that produce results without justification, XAI enables users \u2014 whether doctors, bankers, regulators, or teachers \u2014 to understand why an AI reached a particular conclusion. In contexts where AI affects lives and livelihoods, XAI is not optional; it is essential.<\/p>\n<p>Malaysia&#8217;s accelerating AI adoption across healthcare, finance, education, and public administration makes XAI one of the most strategically important technology concepts for Malaysian organisations and policymakers to understand in 2025 and beyond.<\/p>\n<h2>Why Do Black-Box AI Models Pose Risks for Malaysia?<\/h2>\n<p>Black-box AI systems produce accurate predictions without explaining their reasoning \u2014 and this opacity creates serious risks in high-stakes domains. When a Malaysian bank uses a machine learning model to reject a loan application, the applicant has no basis to challenge the decision. When a hospital&#8217;s AI triage system deprioritises a patient, clinicians cannot verify the logic. 
Opaque AI undermines accountability, erodes public trust, and may violate Malaysia&#8217;s Personal Data Protection Act (PDPA) provisions on fair and transparent processing.<\/p>\n<p>Three sectors in Malaysia face the highest black-box AI risk:<\/p>\n<ul>\n<li><strong>Healthcare:<\/strong> AI diagnostic tools used in public hospitals need clinical interpretability so doctors can validate AI suggestions rather than blindly follow them<\/li>\n<li><strong>Financial services:<\/strong> Bank Negara Malaysia&#8217;s Risk Management in Technology (RMiT) framework emphasises explainability in algorithmic credit decisions<\/li>\n<li><strong>Government and public services:<\/strong> AI used in welfare eligibility, law enforcement analytics, and recruitment must be auditable under public accountability principles<\/li>\n<\/ul>\n<h2>Where Did XAI Come From? A Brief History<\/h2>\n<p>The formal XAI research programme was launched by the US Defense Advanced Research Projects Agency (DARPA) in 2016, with a mandate to develop AI systems that could explain their reasoning to human operators. 
The concern was practical: US military personnel were deploying AI in life-or-death decision contexts and needed to understand and trust those systems.<\/p>\n<p>Alongside and in the years following DARPA&#8217;s programme, academic research produced the foundational techniques that are now widely deployed globally:<\/p>\n<ul>\n<li><strong>LIME (Local Interpretable Model-Agnostic Explanations):<\/strong> Explains individual predictions by fitting a simpler, interpretable model locally around a single data point<\/li>\n<li><strong>SHAP (SHapley Additive exPlanations):<\/strong> Uses game theory concepts to assign contribution scores to each input feature for a given prediction<\/li>\n<li><strong>Grad-CAM:<\/strong> Highlights regions of an image that activated a neural network&#8217;s classification \u2014 widely used in medical imaging<\/li>\n<li><strong>Counterfactual explanations:<\/strong> &#8220;If your income were RM 500 higher, the loan would have been approved&#8221; \u2014 actionable, human-readable outputs<\/li>\n<\/ul>\n<h2>How Does XAI Apply to Dr. Hariz&#8217;s Research at UPSI?<\/h2>\n<p>Dr. Muhamad Hariz Muhamad Adnan, Senior Lecturer and Acting Deputy Dean at UPSI&#8217;s Faculty of Computing and Meta-Technology, has dedicated a significant portion of his research programme to Explainable AI. His work examines XAI applications in education \u2014 specifically how AI systems that predict student performance or recommend learning interventions can be made interpretable enough for teachers and school administrators to act on responsibly.<\/p>\n<p>His research addresses a broader gap in the Malaysian literature: most local AI studies focus on prediction accuracy rather than interpretability. Dr. 
Hariz&#8217;s supervision areas at UPSI include:<\/p>\n<ul>\n<li>XAI frameworks for educational data mining<\/li>\n<li>Teacher trust and XAI dashboard design<\/li>\n<li>Post-hoc explainability for student dropout prediction models<\/li>\n<li>Ethical AI governance for Malaysian EdTech platforms<\/li>\n<\/ul>\n<p>Postgraduate students interested in these areas can find more information at <a href=\"https:\/\/drhariz.com\">drhariz.com<\/a>.<\/p>\n<h2>What Is the Difference Between Interpretable AI and Explainable AI?<\/h2>\n<p>These terms are often used interchangeably, but a useful technical distinction exists. <strong>Interpretable AI<\/strong> refers to models that are inherently transparent by design \u2014 decision trees, linear regression, and rule-based systems are interpretable because their logic is directly readable. <strong>Explainable AI<\/strong> refers to the broader category, which includes both inherently interpretable models and post-hoc techniques that generate explanations for complex black-box models after the fact.<\/p>\n<table>\n<thead>\n<tr>\n<th>Feature<\/th>\n<th>Interpretable AI<\/th>\n<th>Post-hoc XAI<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Model type<\/td>\n<td>Decision tree, linear model, rule set<\/td>\n<td>Neural network, gradient boosting, ensemble<\/td>\n<\/tr>\n<tr>\n<td>Explanation timing<\/td>\n<td>Built into the model<\/td>\n<td>Generated after prediction<\/td>\n<\/tr>\n<tr>\n<td>Accuracy trade-off<\/td>\n<td>Often lower accuracy<\/td>\n<td>No accuracy loss<\/td>\n<\/tr>\n<tr>\n<td>Common tools<\/td>\n<td>scikit-learn, Weka<\/td>\n<td>SHAP, LIME, Captum, InterpretML<\/td>\n<\/tr>\n<tr>\n<td>Best for<\/td>\n<td>Regulated industries, audits<\/td>\n<td>Complex models needing user trust<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>What Are Malaysia&#8217;s AI Standards Saying About Explainability?<\/h2>\n<p>Malaysia&#8217;s <strong>MY-AI Standards<\/strong> framework, developed under MOSTI and aligned with ISO\/IEC 42001 (AI 
Management Systems), identifies explainability as a core principle of responsible AI. Malaysian organisations deploying AI are expected to be able to provide explanations for AI-driven decisions upon request \u2014 particularly in consumer-facing financial products, healthcare diagnostics, and public sector services.<\/p>\n<p>The MY-AI Standards align with the EU AI Act&#8217;s transparency requirements, reflecting Malaysia&#8217;s intent to maintain regulatory interoperability with global trading partners. Organisations that invest in XAI now are not just managing ethical risk \u2014 they are building future compliance capital.<\/p>\n<h2>Practical XAI Examples Malaysian Organisations Can Use Today<\/h2>\n<p>XAI is not purely academic. Here are concrete applications Malaysian organisations are implementing or should be exploring:<\/p>\n<ul>\n<li><strong>Credit scoring (banks and fintech):<\/strong> SHAP values surfaced to loan officers explain which applicant features drove a decision, enabling fair challenge procedures<\/li>\n<li><strong>HR recruitment:<\/strong> Explainable CV screening tools show recruiters which skills and qualifications drove candidate rankings, reducing unintentional bias<\/li>\n<li><strong>Medical imaging:<\/strong> Grad-CAM overlays on chest X-ray AI show radiologists exactly which tissue regions triggered a pneumonia or tumour classification<\/li>\n<li><strong>Agricultural yield prediction:<\/strong> Feature importance plots explain which soil parameters, weather data, or historical yields are driving AI forecasts \u2014 directly relevant to Malaysia&#8217;s precision agriculture initiatives<\/li>\n<li><strong>Student performance prediction:<\/strong> LIME-based explanations tell teachers which student behaviours (attendance, assignment submission, online activity) are flagging dropout risk<\/li>\n<\/ul>\n<h2>What Are the Limitations of Current XAI Techniques?<\/h2>\n<p>XAI is a powerful and rapidly evolving field, but Malaysian 
practitioners should be aware of its current limitations. Post-hoc explanations generated by LIME and SHAP are approximations \u2014 they describe model behaviour locally or globally but are not perfect representations of the model&#8217;s internal logic. Two different XAI methods applied to the same model can sometimes produce inconsistent explanations, which creates interpretive challenges for non-expert users.<\/p>\n<p>Additionally, there is an inherent tension in XAI between the depth of explanation and the user&#8217;s capacity to act on it. A technically accurate SHAP explanation showing 15 contributing features is not useful to a loan officer who needs a one-sentence rationale. Effective XAI implementation in Malaysian organisations requires both good technical explainability tools and good user interface design that translates raw explainability outputs into actionable, role-appropriate communication. This human-centred dimension of XAI is a core focus of Dr. Muhamad Hariz Muhamad Adnan&#8217;s research at UPSI, where he examines how teachers and educational administrators interact with AI explanations in practice.<\/p>\n<h2>How Can Malaysian Professionals Learn XAI?<\/h2>\n<p>Accessible XAI learning pathways in Malaysia include structured corporate training programmes, university postgraduate modules, and self-directed online courses. <strong>Dr. Muhamad Hariz Muhamad Adnan<\/strong>, an HRD Corp Certified AI Trainer based in Malaysia, delivers XAI workshops tailored to corporate and public sector audiences \u2014 covering SHAP, LIME, and ethical AI governance in a practical, non-mathematical format accessible to decision-makers and data teams alike.<\/p>\n<p>HRD Corp registered employers can claim training costs under the SBL-Khas scheme. For details on available programmes, visit <a href=\"https:\/\/drhariz.com\">drhariz.com<\/a> or browse related articles on <a href=\"https:\/\/drhariz.com\/blog\">Dr. 
Hariz&#8217;s blog<\/a>.<\/p>\n<h2>Frequently Asked Questions<\/h2>\n<h3>Is explainable AI required by Malaysian law?<\/h3>\n<p>Currently, no single Malaysian law mandates XAI across all sectors. However, Bank Negara Malaysia&#8217;s RMiT framework, the PDPA&#8217;s fairness provisions, and the emerging MY-AI Standards all create implicit explainability obligations for organisations in financial services, healthcare, and public administration. Regulatory expectations are tightening, making proactive XAI adoption strategically important.<\/p>\n<h3>Does using XAI techniques reduce an AI model&#8217;s accuracy?<\/h3>\n<p>Post-hoc XAI techniques like SHAP and LIME do not modify the underlying model and therefore do not reduce its accuracy. They generate explanations separately, after predictions are made. Switching from a complex model to an inherently interpretable one may involve a modest accuracy trade-off, but for many real-world applications this trade-off is justified by the trust and compliance benefits.<\/p>\n<h3>What is the difference between XAI and responsible AI?<\/h3>\n<p>Responsible AI is the broader umbrella concept covering fairness, accountability, transparency, privacy, and safety in AI systems. XAI specifically addresses the transparency and accountability dimensions by making AI decisions understandable. Explainability is one component of responsible AI \u2014 necessary but not sufficient on its own to ensure a system is ethical or fair.<\/p>\n<h3>Can small Malaysian businesses benefit from XAI?<\/h3>\n<p>Yes. Even SMEs using off-the-shelf AI tools \u2014 credit risk platforms, marketing analytics, or customer churn prediction \u2014 benefit from understanding which factors are driving AI outputs. This understanding improves decision quality and helps business owners catch errors before they cause customer harm or regulatory issues. 
Simple tools like SHAP are freely available and require no specialist infrastructure.<\/p>\n<h3>Who is leading XAI research in Malaysia?<\/h3>\n<p>XAI research in Malaysia is growing across multiple universities. At UPSI, Dr. Muhamad Hariz Muhamad Adnan leads XAI research with a focus on educational applications and ethical AI governance. Other active groups include researchers at UTM, UKM, and UPM working on XAI for healthcare and financial applications. The field remains relatively uncrowded, offering strong publication opportunities.<\/p>\n<p><em>Dr. Muhamad Hariz Muhamad Adnan is a Senior Lecturer and Acting Deputy Dean at Universiti Pendidikan Sultan Idris (UPSI), HRD Corp Certified AI Trainer, and digital transformation consultant. For AI training or postgraduate supervision enquiries, visit <a href=\"https:\/\/drhariz.com\">drhariz.com<\/a> or <a href=\"https:\/\/drhariz.com\/blog\">read more on his blog<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.<\/p>\n","protected":false},"author":1,"featured_media":7753,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":0,"footnotes":""},"categories":[53],"tags":[],"class_list":["post-7754","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.9 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Explainable AI (XAI): Why Transparency in AI Matters for Malaysia<\/title>\n<meta name=\"description\" content=\"
Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Explainable AI (XAI): Why Transparency in AI Matters for Malaysia\" \/>\n<meta property=\"og:description\" content=\"Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/\" \/>\n<meta property=\"og:site_name\" content=\"Dr. Muhamad Hariz Adnan\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-13T01:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-03T09:49:39+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"675\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Dr Muhamad Hariz\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Dr Muhamad Hariz\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/\",\"url\":\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/\",\"name\":\"Explainable AI (XAI): Why Transparency in AI Matters for Malaysia\",\"isPartOf\":{\"@id\":\"https:\/\/drhariz.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg\",\"datePublished\":\"2026-04-13T01:00:00+00:00\",\"dateModified\":\"2026-05-03T09:49:39+00:00\",\"author\":{\"@id\":\"https:\/\/drhariz.com\/blog\/#\/schema\/person\/681757f6490465d5c106cfee83e9eefc\"},\"description\":\"Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI. Explainable AI (XAI) is critical for building trust in AI systems. 
Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/#primaryimage\",\"url\":\"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg\",\"contentUrl\":\"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg\",\"width\":1200,\"height\":675,\"caption\":\"blog.drhariz.com Stylised image of a brain made from white circuit lines and nodes, set against a blue background with colourful pixel and circuit patterns, representing artificial intelligence and technology. Dr. Muhamad Hariz Adnan\"},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/drhariz.com\/blog\/#website\",\"url\":\"https:\/\/drhariz.com\/blog\/\",\"name\":\"Dr. Muhamad Hariz Adnan\",\"description\":\"Certified AI Trainer Malaysia &amp; Digital Transformation Consultant\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/drhariz.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/drhariz.com\/blog\/#\/schema\/person\/681757f6490465d5c106cfee83e9eefc\",\"name\":\"Dr Muhamad Hariz\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/drhariz.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/6366747cf0faf531a369105da0a985d37e7a4daaca25253e8b592f345eeeb42b?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/6366747cf0faf531a369105da0a985d37e7a4daaca25253e8b592f345eeeb42b?s=96&d=mm&r=g\",\"caption\":\"Dr Muhamad 
Hariz\"},\"sameAs\":[\"https:\/\/drhariz.com\/blog\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Explainable AI (XAI): Why Transparency in AI Matters for Malaysia","description":"Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI. Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/","og_locale":"en_US","og_type":"article","og_title":"Explainable AI (XAI): Why Transparency in AI Matters for Malaysia","og_description":"Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI. Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.","og_url":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/","og_site_name":"Dr. Muhamad Hariz Adnan","article_published_time":"2026-04-13T01:00:00+00:00","article_modified_time":"2026-05-03T09:49:39+00:00","og_image":[{"width":1200,"height":675,"url":"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg","type":"image\/jpeg"}],"author":"Dr Muhamad Hariz","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Dr Muhamad Hariz","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/","url":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/","name":"Explainable AI (XAI): Why Transparency in AI Matters for Malaysia","isPartOf":{"@id":"https:\/\/drhariz.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/#primaryimage"},"image":{"@id":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/#primaryimage"},"thumbnailUrl":"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg","datePublished":"2026-04-13T01:00:00+00:00","dateModified":"2026-05-03T09:49:39+00:00","author":{"@id":"https:\/\/drhariz.com\/blog\/#\/schema\/person\/681757f6490465d5c106cfee83e9eefc"},"description":"Explainable AI (XAI) is critical for building trust in AI systems. Learn why AI transparency matters for Malaysian organisations and how experts are advancing responsible AI.","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/drhariz.com\/blog\/explainable-ai-xai-malaysia\/#primaryimage","url":"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg","contentUrl":"https:\/\/drhariz.com\/blog\/wp-content\/uploads\/2026\/05\/img06.jpg","width":1200,"height":675,"caption":"blog.drhariz.com Stylised image of a brain made from white circuit lines and nodes, set against a blue background with colourful pixel and circuit patterns, representing artificial intelligence and technology. Dr. 
Muhamad Hariz Adnan"},{"@type":"WebSite","@id":"https:\/\/drhariz.com\/blog\/#website","url":"https:\/\/drhariz.com\/blog\/","name":"Dr. Muhamad Hariz Adnan","description":"Certified AI Trainer Malaysia &amp; Digital Transformation Consultant","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/drhariz.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/drhariz.com\/blog\/#\/schema\/person\/681757f6490465d5c106cfee83e9eefc","name":"Dr Muhamad Hariz","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/drhariz.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/6366747cf0faf531a369105da0a985d37e7a4daaca25253e8b592f345eeeb42b?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/6366747cf0faf531a369105da0a985d37e7a4daaca25253e8b592f345eeeb42b?s=96&d=mm&r=g","caption":"Dr Muhamad 
Hariz"},"sameAs":["https:\/\/drhariz.com\/blog"]}]}},"_links":{"self":[{"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/posts\/7754","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/comments?post=7754"}],"version-history":[{"count":1,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/posts\/7754\/revisions"}],"predecessor-version":[{"id":7773,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/posts\/7754\/revisions\/7773"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/media\/7753"}],"wp:attachment":[{"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/media?parent=7754"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/categories?post=7754"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/drhariz.com\/blog\/wp-json\/wp\/v2\/tags?post=7754"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}