Thesis Riku Hietanoro

Subject: Information System Science

Title: AI Readiness for Financial Forecasting

Abstract: 

This thesis investigates organizational readiness for adopting artificial intelligence (AI) in financial forecasting processes. Despite growing interest in AI-driven forecasting, organizations struggle to bridge the gap between technological aspirations and implementation capabilities, with only 26% successfully integrating AI at scale. This research addresses that gap by examining which factors characterize organizational AI readiness for financial forecasting.

The study builds upon existing theoretical frameworks, specifically the Technology–Organization–Environment (TOE) framework (Tornatzky & Fleischer, 1990) and Jöhnk et al.’s (2021) AI readiness model. Using qualitative methodology, semi-structured interviews were conducted with five finance professionals from diverse industries including software, social services, financial services, and business consulting. Data analysis employed Braun and Clarke’s thematic analysis approach to identify patterns and themes characterizing AI readiness.

The analysis revealed five main themes encompassing seventeen distinct readiness factors: (1) Technological Infrastructure and Data Readiness, (2) Human Skills and Cultural Attitudes, (3) Leadership and Strategic Alignment, (4) External Environment Constraints, and (5) Perceived Value and Fit of AI Solutions. Four novel factors emerged that extend existing frameworks: Data-Governance Maturity, Trust and Explainability Concerns, Proof-of-Concept & Value-Validation Capability, and Cross-Border Regulatory Alignment. The research also refined understanding of existing factors, revealing nuances such as generational divides in AI attitudes and the specific constraints of legacy spreadsheet-dependent systems.

The findings demonstrate that AI readiness for financial forecasting extends beyond technological preparedness to encompass human, organizational, and regulatory dimensions that existing frameworks only partially address. Financial forecasting’s unique characteristics—combining quantitative analysis with qualitative judgment under strict regulatory oversight—create distinct readiness requirements. The research provides actionable insights for organizations, emphasizing the need for strong data governance foundations, human-centric AI strategies that prioritize transparency, and sophisticated regulatory navigation capabilities. The study contributes to AI-readiness theory by showing how domain-specific requirements shape readiness in ways general technology-adoption frameworks cannot fully capture. It proposes a comprehensive conceptual framework for assessing and enhancing organizational preparedness for AI adoption in financial forecasting.

Key words:

AI readiness, financial forecasting, AI in finance, machine learning in finance, organizational readiness, artificial intelligence adoption, change management


Thesis Jonah Cabayé

Subject: Philanthropy & AI

Title: Enhancing Impact Measurement of Philanthropic Organisations: A Human-AI Collaboration Framework

Abstract: 

Philanthropic organisations increasingly face pressure to demonstrate the impact of their work, yet existing impact measurement practices remain fragmented, resource-intensive, and often ill-suited to capturing both qualitative and quantitative outcomes. This thesis addresses these challenges by proposing a human–AI collaboration framework designed to enhance the efficiency, traceability, and usefulness of impact data in the nonprofit sector. Building on principles of Design Science Research (DSR), the study integrates semantic technologies (ontology and knowledge graphs), natural language processing (NLP), and automation tools within a prototype system aimed at structuring and querying unstructured impact data.
The research is informed by a two-phase empirical process: initial exploratory interviews to identify key challenges and requirements, followed by evaluative interviews assessing the system’s perceived usefulness, usability, and ethical acceptability. The results confirm the relevance of established models such as the Technology Acceptance Model (TAM) and Human-Centered AI (HCAI) in this context, highlighting the importance of transparency, trust, and organisational fit. The proposed framework was found to effectively support common impact measurement needs, such as aggregating indicators, linking data to strategic goals like the SDGs, and making qualitative insights more analysable.
This work contributes both a functional prototype and a set of design recommendations for responsible AI implementation in the social sector. It also responds to documented gaps in the literature regarding integrated, context-sensitive AI tools for nonprofits. The findings underscore the potential of AI to support evidence-based decision-making in philanthropy, provided that technical innovations are embedded within participatory, ethical, and user-centred processes.
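As a rough illustration of the semantic-technology core the abstract describes, the following minimal sketch (not the thesis prototype; the ontology terms such as ex:Programme, ex:hasIndicator, and ex:contributesTo are hypothetical) structures a fragment of impact data as an RDF knowledge graph with rdflib and aggregates indicators per SDG via a SPARQL query:

```python
# Minimal sketch: impact data as a knowledge graph, queried with SPARQL.
# All class/property names are made-up illustrations, not the thesis ontology.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/impact#")
g = Graph()
g.bind("ex", EX)

# One programme, one quantitative indicator, one SDG link.
g.add((EX.LiteracyProgramme, RDF.type, EX.Programme))
g.add((EX.LiteracyProgramme, EX.hasIndicator, EX.ChildrenReached))
g.add((EX.ChildrenReached, EX.value, Literal(1250)))
g.add((EX.LiteracyProgramme, EX.contributesTo, EX.SDG4_QualityEducation))

# Aggregate indicator values per SDG, one common impact-measurement need.
results = g.query("""
    PREFIX ex: <http://example.org/impact#>
    SELECT ?sdg (SUM(?v) AS ?total)
    WHERE {
        ?prog ex:contributesTo ?sdg ;
              ex:hasIndicator ?ind .
        ?ind ex:value ?v .
    }
    GROUP BY ?sdg
""")
for row in results:
    print(row.sdg, row.total)
```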

Key words: Impact Measurement, Philanthropy, Nonprofit Organisations, Human–AI Collaboration, Knowledge Graph, Ontology Engineering, Natural Language Processing (NLP), Technology Acceptance Model (TAM), Human-Centered AI (HCAI), Design Science Research (DSR), Responsible AI, Semantic Technologies, Sustainable Development Goals (SDGs)

Thesis Elsa Fox

Subject: Hybrid IT Environments

Title: Agile Adoption and its Impact on Inter-team Technical Coordination and Delivery Perception in Hybrid IT Organisations

Abstract: 

This study investigates how agile adoption influences inter-team technical coordination and stakeholder perception of delivery in hybrid IT organisations where agile and traditional methodologies coexist. Through a single case study within a multinational cosmetics company, seven semi-structured interviews were conducted with stakeholders across different coordination interfaces, using an integrated framework combining Thompson’s Interdependence Theory and Freeman’s Stakeholder Theory.
The findings reveal that hybrid IT environments develop sophisticated coordination mechanisms beyond traditional approaches: standardisation through documentation and quality standards, planning through release-based coordination and roadmapping, and mutual adjustment through over-communication strategies and small-group meetings. Emergent hybrid-specific mechanisms include Product Owner translation roles, branch-based integration strategies, and definition of done alignment processes.
Regarding stakeholder perception, timeline adherence emerges as the dominant success factor across all stakeholder groups, transcending methodological preferences. Stakeholders develop multi-criteria quality assessment frameworks while requiring transparency about progress and risk to maintain confidence in hybrid environments.
This research extends Thompson’s theory by identifying hybrid-specific coordination mechanisms and contributes to Stakeholder Theory by examining perception formation across multiple delivery methodologies. The findings provide practical guidance for coordination design and stakeholder management in hybrid IT organisations.

Key words: Agile Delivery, Hybrid IT Environments, Technical Coordination, Delivery Perception, Interdependence Theory, Stakeholder Theory

Thesis Inge van Dijk

Subject: Information Management

Title: Enhancing Risk Management in ERP Projects through Structured RAID-Log Analysis: A Mixed-Methods Approach to Continuous Learning and Governance

Abstract: 

Introduction – This study explores how a structured analysis of RAID-logs can enhance risk management in ERP projects by supporting early risk detection, continuous learning and, as a result, long-term organisational resilience.
Contribution – This study adopts a holistic perspective by combining quantitative and qualitative methods to address the underexplored long-term improvement of risk management practices in ERP implementations, shifting the focus from short-term mitigation to continuous learning through structured RAID-log analysis. It provides actionable insights for project managers by demonstrating how structured RAID-log analysis can improve early risk detection, support ongoing risk evaluation, and strengthen organisational resilience.
Methodology – This study employs an explanatory sequential mixed-methods design, combining quantitative analysis of RAID-log data with qualitative expert interviews to uncover patterns, validate findings, and provide a holistic understanding of how RAID-logs support risk management in ERP projects.
Results – The results reveal significant inconsistencies in how RAID-logs are used across ERP projects, with trends showing that effective RAID practices enable faster resolution and better alignment of risk responses, and offer potential for continuous learning when supported by standardized labelling and active monitoring.
Conclusions – This study has shown that RAID-logs contribute to a better understanding of risks and strengthen project risk management by revealing escalation patterns between RAID elements, supporting proactive decision-making, and enabling continuous learning.
Further research – Future research should pursue longitudinal studies and examine the role of organisational culture, while expanding to large, multi-organisational datasets to better capture RAID-log dynamics and to enhance their application through advanced methods such as machine learning.
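As a small illustration of the quantitative side of such an analysis, the sketch below computes per-category resolution times from a RAID-log with pandas. The column names and data are hypothetical, not the study’s dataset:

```python
# Minimal sketch: quantitative RAID-log analysis with pandas.
# Columns (category, opened, closed) are hypothetical illustrations.
import pandas as pd

raid = pd.DataFrame({
    "category": ["Risk", "Issue", "Risk", "Dependency", "Issue"],
    "opened":   pd.to_datetime(["2024-01-05", "2024-01-10", "2024-02-01",
                                "2024-02-03", "2024-03-01"]),
    "closed":   pd.to_datetime(["2024-01-20", "2024-02-15", "2024-02-10",
                                "2024-03-01", "2024-03-05"]),
})

# Resolution time per item, then count and average per RAID category.
raid["resolution_days"] = (raid["closed"] - raid["opened"]).dt.days
print(raid.groupby("category")["resolution_days"].agg(["count", "mean"]))
```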

Key words: ERP implementation, risk management, continuous learning, RAID-log analysis, process improvement

Thesis W.M.N. van Dam

Subject: Data Localization Laws

Title: Navigating Data Localization: A Case Study of Signify’s Compliance with China’s PIPL and Transferable Lessons for India’s DPDP

Abstract: 

This thesis investigates how Signify, a multinational lighting corporation, operationalized compliance with China’s strict data localization requirements under the Personal Information Protection Law (PIPL) and derives lessons for India’s emerging Digital Personal Data Protection (DPDP) Act. Through a qualitative, abductive single case study with semi-structured interviews across the Legal, IT and GRC (Governance, Risk and Compliance) departments, the study explores how Signify navigated these complex regulatory landscapes.
Findings revealed that while technical compliance was largely achieved thanks to earlier adaptations to the Chinese Great Firewall, the legal team led the compliance process, engaging external counsel and using the standard contract mechanism to manage cross-border data transfers.
These processes were not without complexities. Key challenges emerged around regulatory ambiguity, risk-based decision making, and cross-functional communication gaps. The Legal and IT departments often struggled with differences in business language, leading to diverging interpretations of compliance requirements that frequently required mediation by GRC or an external stakeholder.
This thesis proposes adapting the McKinsey 7S model at the project level to address these challenges. The model is expected to improve strategic alignment and foster a shared language across departments, transforming compliance efforts from reactive, siloed responses into proactive, structured initiatives that align operational feasibility with legal obligations.
The findings emphasize that successful data localization compliance does not rest on legal and technical requirements alone; strong organizational coordination and clear communication structures are needed as well. By examining Signify’s experience, this study offers a blueprint for multinational companies facing similar regulatory challenges, showing how structured frameworks and risk-based compliance, supported by the 7S model, can help navigate the evolving landscape of data localization laws.

Key words: Data, Data Localization, Data Transfers, Cross-border, China, Framework, McKinsey 7S model, qualitative study, abductive, case study, semi-structured interviews, Signify, Multinational company

Thesis Yliana Volmers

Subject: Information System Science

Title: Business Intelligence Adoption in Heritage Luxury Organizations: A TOE Framework Extension

Abstract: 

This thesis examines the adoption and appropriation of Business Intelligence (BI) tools within heritage luxury organizations. While BI technologies are increasingly adopted across industries, their deployment in luxury environments remains under-researched, particularly where brand heritage, aesthetic coherence, and artisanal values are core strategic assets.
Drawing on the Technology–Organization–Environment (TOE) framework, the thesis conducts a qualitative case study in a large French luxury house. The empirical data were collected through six semi-structured interviews with employees involved in BI-related projects across different métiers, including retail operations, finance, and digital analytics. Thematic analysis reveals that although the TOE model captures important factors such as top management support, technical complexity, and external pressures, it overlooks crucial symbolic and cultural factors specific to luxury contexts.
The findings contribute by introducing sector-specific extensions to the TOE framework. In particular, the study identifies symbolic compatibility, aesthetic alignment, and post-adoption negotiation as significant mediating factors in BI appropriation. The research provides practical lessons for implementation teams and luxury organizations, including incorporating brand aesthetics into BI design, anticipating resistance linked to heritage protection, and building governance structures that balance standardization with creative autonomy.
Although the scope is limited to a single case, the thesis offers analytical implications for future research and practice on digitalization in symbolically dense fields, underscoring the need for more culturally aware frameworks in IS adoption research.

Key words: Business Intelligence, luxury industry, technology adoption, TOE framework, heritage brands, digital transformation

Thesis Diako Mazneh

Subject: Information System Science

Title: Selecting an Optimal Stream Processing Tool in an E-commerce Environment

Abstract: 

The rapid growth of data volume and velocity in e-commerce has heightened the demand for real-time analytics and adaptive business strategies. Selecting an optimal stream processing tool is critical, yet challenging, due to the wide array of available platforms and the complexity of requirements in modern e-commerce environments. This thesis addresses the gap by applying a structured decision-making framework, based on the Analytic Hierarchy Process (AHP), to guide e-commerce organizations in evaluating and selecting stream processing tools aligned with their operational and strategic needs.
The research employs a multi-method case study within a European e-commerce company, combining qualitative data from stakeholder interviews, documentation analysis, and observations with quantitative pairwise comparisons to establish and weight key selection criteria. Six stream processing platforms (Apache Flink, Apache Spark Structured Streaming, Apache Kafka Streams, Apache Storm, Apache Samza, and Google Cloud Dataflow) are systematically evaluated against criteria such as fault tolerance, performance, state and event handling, integration, operability, and cost within a dynamic pricing case study. The findings demonstrate how a criteria-driven methodology can support organizations in making informed and context-aware technology choices.
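To illustrate the AHP weighting step the abstract describes, here is a minimal sketch in Python. The 3×3 pairwise comparison matrix is made up for illustration and does not reflect the study’s actual criteria or judgments; the sketch derives priority weights from the principal eigenvector and checks judgment consistency:

```python
# Minimal AHP sketch: priority weights from a reciprocal pairwise matrix.
# The matrix values are illustrative, not the thesis's elicited judgments.
import numpy as np

A = np.array([
    [1,   3,   5],    # performance vs. fault tolerance vs. cost
    [1/3, 1,   3],
    [1/5, 1/3, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalise to priorities

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # random index for n = 3
print("weights:", weights.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```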

Key words: Stream processing, E-commerce, Real-time analytics, Analytic Hierarchy Process (AHP), Tool selection, Dynamic pricing

Thesis Marine Philibert

Subject: Information System Science

Title: LLM-Powered Business Process Modelling in Small and Medium Enterprises: Benefits, Success Factors and Implementation Challenges

Abstract: 

Small and Medium Enterprises (SMEs) face significant barriers in adopting traditional Business Process Modelling (BPM) due to resource constraints, expertise requirements, and complex notation systems. Large Language Models (LLMs) offer potential solutions by generating process models from natural language descriptions, yet empirical evidence of their effectiveness in real SME contexts remains limited. This research investigates the benefits, success factors, and failure factors when implementing LLM-powered BPM in SMEs.
Existing literature demonstrates clear organizational benefits of BPM but identifies expertise requirements and resource constraints as primary SME adoption barriers (Papademetriou & Karras, 2017; Viegas & Costa, 2022). Recent AI-powered BPM research shows technical feasibility for generating BPMN-compliant models from textual descriptions (Grohs et al., 2023; Kourani et al., 2024) but lacks empirical investigation of organizational adoption factors in real business contexts. The study employs the Technology–Organization–Environment (TOE) framework to analyse adoption factors, combined with established BPM quality assessment frameworks (SEQUAL for multi-dimensional quality evaluation, 7PMG for objective diagram assessment) to create a comprehensive evaluation approach for AI-generated process models.
A qualitative multiple case study examines three French SMEs across different industries (IT consulting, manufacturing, perfume production) with varying digital maturity levels. BPMN 2.0-compliant process models were generated with GPT-4 mini from organizational documentation and evaluated using the established quality frameworks. Semi-structured interviews with key stakeholders captured organizational perceptions, adoption challenges, and value recognition patterns.
The technical assessment revealed consistent strengths in activity labelling and gateway selection, alongside universal weaknesses including multiple start/end event violations and excessive element proliferation. Stakeholder evaluation demonstrated a fundamental dichotomy between communication effectiveness and operational completeness: while all participants recognized value for external communication and training purposes, semantic gaps rendered the models insufficient for internal process management. The most significant finding involved universal requirements for human verification despite AI accessibility benefits, creating capability demands that potentially exceeded SME resources. The research contributes a Multi-Factor Alignment Framework organizing success factors across technical, organizational, and environmental dimensions. The study concludes that LLM-powered BPM represents a transformation from operational tool to communication medium, requiring hybrid approaches that leverage AI for communication while maintaining traditional methods for operational requirements.
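To make the objective diagram checks concrete, here is a minimal sketch in the spirit of 7PMG (not the thesis’s actual evaluation pipeline) that flags the multiple start/end event violations mentioned above in a generated BPMN 2.0 file; the file name model.bpmn is a placeholder:

```python
# Minimal sketch: flag BPMN processes violating the "one start, one end
# event" modelling guideline. "model.bpmn" is a placeholder path.
import xml.etree.ElementTree as ET

BPMN = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

tree = ET.parse("model.bpmn")
for process in tree.getroot().iter(f"{BPMN}process"):
    starts = process.findall(f"{BPMN}startEvent")
    ends = process.findall(f"{BPMN}endEvent")
    pid = process.get("id", "<unnamed>")
    if len(starts) != 1 or len(ends) != 1:
        print(f"{pid}: {len(starts)} start / {len(ends)} end events")
```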

Key words: Business Process Modelling, Large Language Models, SME Digital Transformation, AI Adoption, Process Management, BPMN.

Thesis Lola Bonnaudet

Subject: Employee-AI Collaboration

Title: Socio-Technical Factors Shaping Employee-AI Collaboration

Abstract: 

Despite significant organisational investments in AI technologies, 70–85% of AI initiatives fail to achieve their projected value due to inadequate understanding of the human factors driving successful employee-AI collaboration. Using a socio-technical systems perspective to analyse the intricate relationships between technological, organisational and individual components, this study explores the socio-technical factors that shape employee-AI collaboration in digital workplaces, examining five in particular: AI literacy, AI explainability, organisational support, task variety and paradoxical tensions. A quantitative cross-sectional survey was conducted at Doctolib, a European healthcare technology company that implemented enterprise-wide AI capabilities in February 2025. Data from 73 employees were analysed using PLS-SEM to test the relationships between these five socio-technical factors and employee-AI collaboration outcomes. The key finding demonstrates that AI literacy is the most critical factor in determining collaborative success (β = 0.498, p = 0.016), providing empirical validation of Wang et al.’s (2022) four-dimensional AI literacy framework in a real organisational setting. However, contrary to theoretical expectations, AI explainability, task variety and paradoxical tensions showed no significant relationships with collaboration. Most surprisingly, organisational support failed to demonstrate any moderating effects, challenging traditional technology acceptance frameworks. The research validates AI literacy theory whilst challenging dominant narratives in explainable AI research. For practitioners, the findings suggest that organisations should prioritise comprehensive AI literacy programmes addressing the awareness, usage, evaluation and ethics dimensions rather than focusing on traditional organisational support mechanisms. This research demonstrates that successful employee-AI collaboration requires AI-specific approaches to implementation and capability development, highlighting the need for new theoretical frameworks that move beyond conventional technology adoption models.
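For readers unfamiliar with how such path coefficients are tested, the sketch below illustrates bootstrapped significance testing of a standardised path coefficient on synthetic data. It deliberately simplifies PLS-SEM to a single correlation-based path between two composite scores and is not the study’s model or data:

```python
# Minimal sketch: percentile-bootstrap test of a standardised path
# coefficient, a simplified stand-in for PLS-SEM's resampling procedure.
# Data are synthetic; only the sample size (n = 73) echoes the study.
import numpy as np

rng = np.random.default_rng(42)
n = 73
ai_literacy = rng.normal(size=n)                      # composite score
collaboration = 0.5 * ai_literacy + rng.normal(scale=0.9, size=n)

def beta(x, y):
    # For one standardised predictor, the path coefficient equals
    # the Pearson correlation.
    return np.corrcoef(x, y)[0, 1]

b = beta(ai_literacy, collaboration)

# Resample cases with replacement, as bootstrap in PLS-SEM does.
boots = [beta(ai_literacy[idx], collaboration[idx])
         for idx in (rng.integers(0, n, n) for _ in range(5000))]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"beta = {b:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => significant
```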

Key words: Employee-AI collaboration, socio-technical systems, AI literacy, organisational support, PLS-SEM

Thesis Xavier Kasdan

Subject: Information System Science

Title: Onboarding Business Domains into a Data Mesh: A Kotter-Based Change Enablement Framework Tested at Toyota Motor Europe

Abstract: 

As organisations adopt decentralised data architectures like Data Mesh, many struggle to operationalise new roles such as Data Product Owner or Domain Data Steward. While technical aspects are well-documented, the organisational and behavioural dimensions, particularly the onboarding of business stakeholders, remain underexplored. This thesis investigates how large enterprises can enable successful role adoption during a Data Mesh transformation. Based on a qualitative case study at Toyota Motor Europe and using abductive reasoning, the study proposes the Data Mesh Change Enablement Framework, an adapted eight-step change model inspired by Kotter’s theory but tailored to decentralised contexts. The framework emphasises contextualised urgency, multi-level coalitions, co-created role narratives, peer-driven acceleration, and institutional anchoring. Grounded in empirical data, the framework offers a practical yet flexible tool for guiding change in large-scale data transformations.

Key words: Data Mesh, Change Management, Decentralised Data Governance, Business Stakeholder Onboarding, Data Product Owner, Federated Architecture, Kotter’s 8-Step Model, Qualitative Case Study, Toyota Motor Europe.