THE PARADIGM OF COGNITIVE COLLABORATION
BIM digitalization in AECO is evolving from simple automation to Augmented Intelligence. XAIBIM's purpose transcends repetitive task automation, focusing on an environment of augmented cognitive collaboration in which AI acts as a synergistic partner for transparent and reliable professional decision-making.
Problem: Semantic Fragmentation and Lack of Interoperability
The AECO sector suffers from data fragmentation and passive interoperability. XAIBIM Studio solves this by unifying knowledge into federated Knowledge Graphs (KG), transforming BIM models into lifecycle knowledge assets for automated and active decision-making.
Problem: Scarcity of High-Fidelity Data for AI
The lack of reliable datasets prevents the development of robust AI in construction. XAIBIM Studio addresses this by generating synthetic data assets that, once curated, form the CESAR (CESA Repository), the foundation for intelligent Digital Twins.
Problem: AI Opacity and Ineffective Normative Verification
'Black box' AIs are an unacceptable risk in construction. XAIBIM Nexus mitigates this risk with neuro-symbolic reasoning that enriches low-LOD models with GNNs and executes reliable and explainable deductive conformity verification.
Fundamental Doctoral Research Questions
Our research is structured around three thematic axes, each designed to address key challenges identified in the state of the art of AI applied to the AECO sector.
Axis 1: Curation and Expert Knowledge Foundations (XAIBIM Studio and CESAR)
This axis investigates the scientific methodology for creating an auditable, high-fidelity knowledge base, which is the fundamental prerequisite for any reliable AI system in the construction sector. The focus is on the curation process within XAIBIM Studio and the validation of the resulting CESAs.
- RQ1: What quantitative metrics (semantic consistency, information coverage) allow for validating the quality and auditability of a CESA (Curated, Explainable, Structured Asset) generated through the Human-in-the-Loop workflow of XAIBIM Studio?
- RQ2: How does the cryptographic integrity of the CESAR repository impact the robustness and bias mitigation of the XAIBIM Nexus engine trained exclusively on these assets?
- RQ3: What is the effectiveness of a semantic enrichment methodology in XAIBIM Studio to uniquely link cost information (5D) to BIM entities, ensuring a traceable Level of Information (LOI) in each CESA?
Axis 2: Engine Architecture and Validation (XAIBIM Nexus)
This axis focuses on the design, implementation, and empirical validation of the XAIBIM Nexus hybrid AI architecture. The research seeks to demonstrate the superiority of a neuro-symbolic approach for complex reasoning tasks in the AECO domain.
- RQ4: What is the performance (precision, F1-score) of the XAIBIM Nexus progressive neuro-symbolic pipeline (NLP→CNN→GNN→SWRL) in detecting constructability errors, compared to monolithic 'black box' AI models?
- RQ5: Can the GNN component of XAIBIM Nexus infer functional requirements implicit in low-LOD BIM models with validatable precision, semantically enriching CESAs to allow for early normative analysis?
- RQ6: How does the hybrid architecture of XAIBIM Nexus, which integrates contextual inference from GNNs with deductive reasoning from SWRL, allow for 4D/5D linking that is both automated and logically verifiable?
Axis 3: Measuring Explainability and Professional Impact (XAI)
This axis addresses the ultimate goal of the project: explainability (XAI). Research focuses on defining, measuring, and evaluating the impact of XAIBIM Nexus's transparent inferences on the trust and efficiency of construction professionals.
- RQ7: Which composite metrics model, combining source data traceability in the CESAR and clear logical inference chain in the Nexus, is most effective for quantifying the explainability of an AI recommendation in the AECO context?
- RQ8: How does an interface presenting XAIBIM Nexus inferences by visualizing relevant CESA sub-graphs and activated SWRL rules improve user trust and efficiency, compared to interfaces only showing final results?
- RQ9: What is the quantitative impact of the complete XAIBIM ecosystem (curation in Studio and analysis in Nexus) on reducing 4D/5D coordination errors, measured against traditional BIM workflows in a controlled case study?
Validation Results and Metrics
XAIBIM System KPIs (Key Performance Indicators)


A. Knowledge Quality KPIs: Inter-annotator Consistency (Fleiss' Kappa - κ)
The reliability of the CESAR knowledge base is validated by measuring consistency among experts during the curation process in XAIBIM Studio. We use the Fleiss' Kappa (κ) coefficient to quantify the degree of agreement, ensuring the CESA repository is robust and standardized [27], [28].
κ = (P̄ - P̄ₑ) / (1 - P̄ₑ)
Applies to: CESAR
This KPI is fundamental to answering the research question on the quantitative validation of CESA quality. It measures the reproducibility and reliability of the expert curation process (Human-in-the-Loop).
[28] M. N. Johnson et al., "Closing the artificial intelligence skills gap in construction: competency insights from a systematic review," Res. Innov. Eng., 2025. DOI: 10.1016/j.rineng.2025.106406.
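The κ formula above can be made concrete in a few lines. The following minimal Python sketch (an illustrative implementation, not part of XAIBIM Studio; the function name is ours) takes a ratings matrix where `ratings[i][j]` counts the annotators who assigned category `j` to item `i`:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for ratings[i][j] = number of annotators who
    assigned category j to item i; every row must sum to the same
    annotator count n."""
    N = len(ratings)                 # number of items
    n = sum(ratings[0])              # annotators per item
    k = len(ratings[0])              # number of categories
    # mean per-item agreement P̄
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # chance agreement P̄e from overall category proportions
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)
```

For example, two items on which three annotators agree perfectly yield κ = 1.0, while systematic disagreement drives κ toward zero or below.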

B. Inference Engine Performance KPIs: Semantic Precision (F1-Score)
The performance of the XAIBIM Nexus in classifying BIM components is measured by the F1-Score. This metric evaluates the balance between precision and recall, quantifying the neuro-symbolic pipeline's capacity for semantic enrichment [29] and extraction of correct information [30].
F1 = (2 * Precision * Recall) / (Precision + Recall)
Applies to: XAIBIM Nexus
Directly measures the effectiveness of the Nexus hybrid architecture in correctly identifying and classifying BIM entities, a fundamental capability for subsequent conformity verification.
[30] X. Y. Zhang et al., "Automatic bridge inspection database construction through hybrid information extraction and large language models," Digit. Build. Eng., 2024. DOI: 10.1016/j.dibe.2024.100549.
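The F1 computation follows directly from raw confusion counts. A minimal sketch (illustrative only; the function name is ours, not an XAIBIM API):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall, computed from counts
    of true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 correctly classified components, 2 false positives,
# 2 missed components:
f1_score(8, 2, 2)  # → 0.8
```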

B. Inference Engine Performance KPIs: Verification Precision (Accuracy)
The effectiveness of the XAIBIM Nexus symbolic reasoning layer is evaluated with Conformity Verification Precision (CVP). This KPI quantifies the SWRL rule engine's capacity to correctly identify both compliance and non-compliance with regulations [31], [32].
CVP = (TP + TN) / (TP + TN + FP + FN)
Applies to: XAIBIM Nexus (Symbolic Reasoning Layer)
Validates the reliability of the Nexus deductive reasoning component, crucial for the promise of auditable AI and for mitigating regulatory risk.
[32] G. H. Green et al., "BIM ontology for information management (BIM-OIM)," J. Build. Eng., 2025. DOI: 10.1016/j.jobe.2025.112762.
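Because CVP counts both correct compliance and correct non-compliance verdicts, it reduces to plain accuracy over rule outcomes. A small sketch (hypothetical helper, assuming verdicts are booleans with True meaning compliant):

```python
def cvp(predicted: list, actual: list) -> float:
    """Conformity Verification Precision: fraction of rule-engine
    verdicts matching ground truth, i.e. (TP + TN) / (TP + TN + FP + FN)."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# 4 checked rules, 1 verdict wrong:
cvp([True, True, False, False], [True, False, False, False])  # → 0.75
```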

C. Explainability and Impact KPIs: Explainability Quantification (SHAP)
To ensure auditable AI, the explainability of each XAIBIM Nexus inference is quantified using Feature Contribution Values (SHAP). This metric is key to overcoming AI adoption barriers in risk management [33] and building trust [34].
ϕᵢ(f,x) = Σ_{S ⊆ N\{i}} [|S|! (|N|-|S|-1)! / |N|!] * [fₓ(S∪{i}) - fₓ(S)]
Applies to: XAIBIM Nexus (XAI Layer)
Directly answers the research question on defining metrics for explainability. It allows for objective measurement of model transparency.
[34] E. F. Brown et al., "Applications of Explainable Artificial Intelligence (XAI) and interpretable Artificial Intelligence (AI) in smart buildings...," J. Build. Eng., 2025. DOI: 10.1016/j.jobe.2025.112542.
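A brute-force Shapley computation makes the formula concrete. This sketch enumerates every coalition exactly, which is only feasible for a handful of features (production SHAP tooling uses approximations); "absent" features are replaced by baseline values, and all names here are illustrative:

```python
from itertools import combinations
from math import factorial

def shapley_value(f, x, baseline, i):
    """Exact Shapley value ϕᵢ of feature i for model f at point x.
    Features outside the coalition S are set to their baseline value."""
    n = len(x)
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for S in combinations(others, size):
            # combinatorial weight |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
            without_i = [x[j] if j in S else baseline[j] for j in range(n)]
            phi += weight * (f(with_i) - f(without_i))
    return phi
```

For a linear model `f(z) = 2*z[0] + 3*z[1]` with baseline `[0, 0]` and input `[1, 1]`, the values recover the coefficients: ϕ₀ = 2.0 and ϕ₁ = 3.0.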

C. Explainability and Impact KPIs: Workflow Efficiency (Speedup)
The tangible impact of the XAIBIM ecosystem is measured by Computational Speedup. This KPI quantifies the efficiency gain in workflows compared to traditional manual methods [35], [36].
Speedup = T_manual / T_XAIBIM
Applies to: XAIBIM Ecosystem (Studio + Nexus)
Validates the practical value and return on investment of the methodology, demonstrating its applicability and benefit for construction professionals.
[36] K. Torres et al., "Perceptions of the Influence of BIM Digital Models in Cost Overrun Management...," KSCE J. Civ. Eng., 2025. DOI: 10.1016/j.kscej.2025.100413.
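The speedup ratio is a simple division; a one-line sketch with an illustrative timing (the hours are hypothetical, not measured project data):

```python
def speedup(t_manual: float, t_xaibim: float) -> float:
    """Ratio of manual workflow time to XAIBIM-assisted time;
    values above 1.0 indicate an efficiency gain."""
    return t_manual / t_xaibim

# a coordination task that drops from 40 h to 8 h:
speedup(40, 8)  # → 5.0
```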

D. Exploratory KPI: Generative Design Quality (FID)
As a future research line, XAIBIM Nexus's capability to generate design solutions will be evaluated using the Fréchet Inception Distance (FID). This KPI measures the quality and diversity of generated designs [37], [38].
FID(x,g) = ||μₓ - μg||² + Tr(Σₓ + Σg - 2(ΣₓΣg)¹/²)
Applies to: XAIBIM Nexus (Future Generative Capability)
Explores the natural evolution of neuro-symbolic reasoning toward generative capabilities, one of the frontiers of AI in AECO design.
[38] Q. R. Taylor et al., "Generative AIBIM: An automatic and intelligent structural design pipeline integrating BIM and generative AI," Inf. Fusion, 2024. DOI: 10.1016/j.inffus.2024.102654.
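The FID formula can be illustrated under the simplifying assumption of diagonal covariances, where the trace term reduces to an elementwise sum (the full metric uses complete covariance matrices of deep feature embeddings; this sketch and its name are ours):

```python
from math import sqrt

def fid_diagonal(mu_x, var_x, mu_g, var_g):
    """Fréchet distance between two Gaussians with diagonal covariances,
    a simplified sketch of FID:
    ||μx - μg||² + Σᵢ(σxᵢ + σgᵢ - 2·sqrt(σxᵢ·σgᵢ))."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_x, mu_g))
    cov_term = sum(vx + vg - 2 * sqrt(vx * vg) for vx, vg in zip(var_x, var_g))
    return mean_term + cov_term
```

Identical real and generated distributions give FID = 0; the score grows as the generated designs' feature statistics drift from the real ones.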
Discussion: AI Implications and Governance in AECO (Architecture, Engineering, Construction & Operations)
The XAIBIM Discussion consolidates research findings into three governance pillars, demonstrating the viability of an auditable and explainable knowledge ecosystem for Trustworthy AI in a high-risk sector like AECO.
“For BIM models to reach their full potential in asset management, it is imperative to bridge the gap between graphical information and computable data through semantic enrichment. This process transforms the model into a robust digital twin, ensuring information consistency from initial capture to long-term operational management.”
“The incorporation of AI models into the built environment must prioritize explainability (XAI) and risk assessment, especially in high-risk systems where error can have catastrophic consequences.”
Knowledge Governance: From Data to 'Project Memory'
The XAIBIM ecosystem establishes robust data governance. Through the expert curation process in XAIBIM Studio [41], CESAs are generated to form the CESAR, an immutable and auditable project memory that guarantees algorithmic reliability and the long-term integrity of the Digital Twin [42].
Risk Mitigation: The Hybrid Neuro-Symbolic Paradigm
The XAIBIM Nexus mitigates the professional risk inherent in 'black box' AI. Its neuro-symbolic reasoning architecture uses GNNs to understand the topological context [43] and a SWRL formal verification engine to apply normative logic in a deterministic and auditable manner [44].
Improved Decision-Making: From Inference to Quantifiable Confidence
XAIBIM improves decision-making by going beyond simple inference. The methodology allows for confidence quantification through explainability metrics (XAI) like SHAP [45], providing the professional with an auditable safety level that validates and enhances their expert judgment in 4D/5D analysis [46].
Institutional Trust & Standards
U.Porto (FEUP)
Academic Framework
Rooted in the Faculty of Engineering of the University of Porto, ensuring scientific rigor and academic excellence.
Open Standards
IFC & ISO 19650
Aligned with global buildingSMART standards for interoperability and international information management protocols.
Research Units
Digital Construction
Developed within specialized research environments focused on the digitalization of the AEC industry.
Principal Investigator
University of Porto (FEUP)
Academic researcher specialized in the integration of Building Information Modeling (BIM) with Neuro-symbolic Explainable AI (XAI). Developing frameworks for transparency, interoperability, and professional decision-support in the AEC industry.
