AI: Between Technological Gadget and Organizational Revealer
In 2026, AI is everywhere in organizations. Assistants, predictions, automation, recommendations: the tools are accessible, powerful, and relatively straightforward to deploy. The use of AI in project portfolio management processes is no exception. However, behind this rapid adoption, a clear divide is emerging. On one side, there is AI used as a gadget, added at the surface, disconnected from real decision-making processes. On the other, there is AI embedded in governance, steering, and arbitration mechanisms, where value is truly created (or destroyed). This divide is not always understood by organizations and can sometimes lead to disappointment with AI's ROI. Yet it cannot be said that AI is failing: it is doing exactly what it is asked to do, within a system that does not yet know clearly what it is trying to optimize.
Without maturity in portfolio management, without explicit trade-offs, without consistent project data, and without decision governance, AI cannot produce lasting performance. It then becomes a revealer, sometimes a brutal one, of existing weaknesses, and a costly investment deployed too early in the transformation process. AI is not a silver bullet: it is an additional tool. And like any tool, it amplifies the strengths… or the flaws of the system into which it is introduced.
One AI, Two Organizational Realities
In a less mature organization, AI is introduced while strategic trade-offs remain vague or unstable. Priorities shift, decisions are barely formalized, and project governance is fragmented. AI analyzes the available data, optimizes according to the objectives it is given, and surfaces inconsistencies… without the organization being equipped to resolve them. The data is heterogeneous because the processes are too; decisions remain largely political or intuitive, and change management is limited to tool adoption. Here, AI is used for individual convenience: faster drafting of meeting minutes, automatic meeting summaries, generic responses via chatbots integrated into collaborative tools. These uses deliver real time savings on administrative tasks. This is confirmed by PMI, which observes a productivity improvement reported by 93% of intensive GenAI users, versus 58% among low adopters. But this is a marginal productivity gain: it keeps the organization in "doing things faster" mode, never in "doing things better" mode, because AI is not connected to key decisions. There is no prioritization, no resource trade-off, no alert on structural drift, because explicit rules and usable data are absent. Teams produce more, faster, without this improving the quality of decisions or overall performance. This use of AI feeds a growing frustration: it exposes what is not working, without the organization knowing how to fix it, because it lacks either the framework or the will to correct it.
What this first case reveals is not a failure of AI, but a structural limitation: as long as AI remains confined to peripheral uses, it can only produce local gains. The question is therefore not "Why isn't it working?" but "Under what conditions does AI stop being marginal and become a real performance lever?"
Conversely, in a mature organization, AI is integrated at the heart of steering mechanisms, not at the margins. Strategic trade-offs are explicit, translated into rules for prioritization, resource allocation, and risk management. Project data is consistent because it is already being used to make decisions, not just to produce reports. Governance clearly defines what AI illuminates, what it optimizes, and what remains the responsibility of humans. Change management is not about the tool, but about the evolution of decision-making practices: how to integrate an alert, how to arbitrate faster, how to accept a recommendation based on data rather than intuition alone.
In this context, AI is used where it creates a real differential: predicting project drift, simulating scenarios to arbitrate between value, risk, and capacity, portfolio prioritization based on shared criteria, decision support integrated into PPM workflows. It is not an isolated chatbot, but an analytical layer plugged into existing processes. This is precisely what PMI highlights: organizations classified as trailblazers (i.e. those that genuinely integrate AI into their professional practices) report an 89% improvement in their ability to solve complex problems, versus only 46% among less mature adopters, and the same gap (89% versus 46%) on overall team effectiveness.
In other words, the difference does not come from the AI itself, but from its place in the decision-making system. In a mature organization, AI does not bring "more tools": it reduces noise, accelerates trade-offs, and transforms complexity into an operational advantage.
What High-Performing AI Has in Common with Strong Project Maturity
When observing organizations that extract real value from AI in project and portfolio management, one observation holds consistently: the same foundations recur systematically. These are not technological prerequisites, but organizational capabilities already well established in mature project environments.
1. Explicit, Not Implicit, Decision Governance
Useful AI assumes that the organization knows who decides what, based on which criteria, and at what moment. Without this, AI can only optimize locally or produce recommendations without real authority. Mature organizations have: formalized arbitration rules (prioritization, capacity, value, risk), identified decision-making bodies, and a clear link between strategy and portfolios. PMI studies show that the organizations most advanced in AI usage are also those with structured portfolio steering and traceable decisions, a critical condition for AI recommendations to be genuinely actionable. Without clear governance, AI illuminates… the deficiencies, not what can actually be arbitrated.
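To make "formalized arbitration rules" concrete, such rules can be written down as an explicit, auditable scoring model rather than living implicitly in slide decks. The sketch below is purely illustrative: the criteria names, weights, and sample projects are assumptions for the example, not a PMI or Planisware standard.

```python
# Illustrative sketch: prioritization rules expressed as explicit, weighted
# criteria. Criteria names, weights, and data are hypothetical examples.

WEIGHTS = {"value": 0.4, "risk": 0.2, "capacity_fit": 0.2, "strategic_alignment": 0.2}

def priority_score(project: dict) -> float:
    """Score a project on a 0-1 scale using shared, explicit criteria.

    Each criterion is expected on a 0-1 scale; risk is inverted so that
    lower risk raises the score. Writing the rule down is what allows an
    AI recommendation to be traced back to a governed decision rule.
    """
    return (
        WEIGHTS["value"] * project["value"]
        + WEIGHTS["risk"] * (1 - project["risk"])
        + WEIGHTS["capacity_fit"] * project["capacity_fit"]
        + WEIGHTS["strategic_alignment"] * project["strategic_alignment"]
    )

portfolio = [
    {"name": "A", "value": 0.9, "risk": 0.3, "capacity_fit": 0.6, "strategic_alignment": 0.8},
    {"name": "B", "value": 0.5, "risk": 0.1, "capacity_fit": 0.9, "strategic_alignment": 0.4},
]
ranked = sorted(portfolio, key=priority_score, reverse=True)
```

The point is not the arithmetic but the transparency: when the weights and criteria are explicit and versioned by a decision-making body, any ranking (human- or AI-produced) can be challenged and arbitrated on known terms.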
2. Data Quality as a Condition for Success
Data quality is not optional: it is a sine qua non condition for reliable AI prediction and analysis. According to Gartner, "AI-ready data" (data management practices tailored to AI needs) is a prerequisite, and through 2026, 60% of AI projects lacking AI-ready data will be abandoned. This implies active data governance (quality, metadata, documentation) before even considering AI modeling. If project data is not structured and reliable, AI simply produces misleading alerts or worthless predictions, which often leads to the abandonment of initiatives. Mature organizations already use their data to manage commitments, compare projects, measure variances, and learn from the past. This is precisely what makes AI effective: it relies on comparable, historicized, and governed data, not on fields filled in opportunistically.
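A minimal form of this data governance is simply measuring completeness before any modeling. The sketch below is a hypothetical illustration (the field names and the 95% threshold are assumptions, not a Gartner or Planisware specification) of making data readiness visible rather than discovering it through bad predictions.

```python
# Illustrative sketch: minimal "AI-ready data" completeness check on project
# records before modeling. Field names and the threshold are assumptions.

REQUIRED_FIELDS = ("project_id", "start_date", "budget", "actual_cost", "status")

def readiness_report(records: list) -> dict:
    """Return per-field completeness rates so gaps are visible before modeling."""
    total = len(records)
    completeness = {
        field: sum(1 for r in records if r.get(field) not in (None, "")) / total
        for field in REQUIRED_FIELDS
    }
    return {
        "completeness": completeness,
        # Arbitrary illustrative bar: every required field at least 95% filled.
        "ready": all(rate >= 0.95 for rate in completeness.values()),
    }

sample = [
    {"project_id": 1, "start_date": "2025-01-10", "budget": 100, "actual_cost": 80, "status": "active"},
    {"project_id": 2, "start_date": "", "budget": 120, "actual_cost": None, "status": "active"},
]
report = readiness_report(sample)
```

A report like this turns "our data is not ready" from a vague complaint into a measurable gap that a data governance body can own and close.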
3. Processes Structured Enough to Be Amplified
AI functions poorly in chaos… and uselessly in hyper-rigidity. Mature organizations have project processes that are defined, shared, and genuinely applied. This allows AI to intervene in the real steering flow, not alongside it. Multiple studies show that automation or AI integrated into workflows struggles to produce value when processes are not defined, shared, and measured. For example, a Capterra study indicates that 37% of users cite lack of data and process quality as a primary obstacle to AI in project management. When processes are not stabilized, AI cannot embed itself in steering routines: it remains peripheral and does not strengthen decision-making. AI is not a magic tool; it does not structure processes: it takes advantage of those that already exist.
4. Clear Alignment Between Strategy, Portfolios, and Indicators
A mature organization knows what it is trying to optimize: business value, time-to-market, risk, capacity, compliance, profitability. AI becomes effective when it is connected to these explicit choices, not to vague or contradictory objectives. As Harvard Business Review analyzes in "Why Strategy Execution Unravels and What to Do About It", technologically high-performing organizations are those where tools — AI included — are directly tied to strategic objectives measured at the portfolio level. AI does not help choose a strategy. It helps execute one that is already clear.
5. Change Management Centered on Practices, Not the Tool
Finally, mature organizations do not "deploy" AI: they evolve decision-making practices. Change management focuses on: how decisions are made, how alerts are handled, and how AI recommendations are integrated into management routines. Studies on the adoption of advanced technologies show that the absence of change management is one of the primary causes of failure, even when the technology itself is performing. Without transformation of practices, AI remains a peripheral tool.
In our article PMO Challenges in 2026: 9 Experts Share Their Perspectives, Americo Pinto, General Director of PMOGA at PMI, notes that AI will amplify everything the PMO already is in 2026. In mature environments, it can sharpen clarity and decision support; in weaker environments, it can simply accelerate the noise. The real challenge is not to adopt AI, but to ensure the organization is ready to convert delivery into value — because value will not be a by-product of automation; it will remain a product of adoption and trust. Many PMOs still confuse service delivery and value realization, and even good results can be overlooked when expectations evolve faster than the PMO's narrative. Successful PMOs will use AI to enhance their vision, while actively defining value in a way that leaders recognize.
AI Tools Are Not the Problem. Organizational Courage Is.
Let's be clear: today, no organization really has a choice. AI has become mandatory, driven by competitive pressure, market expectations, and strategic anticipation. Securing an AI budget, launching initiatives, deploying tools: all of this has become almost routine. And that is a good thing. But deploying AI is absolutely not enough.
Without foundational work on governance, decision-making processes, project and product data quality, and the way teams actually make trade-offs, AI is just another software layer. It adds complexity where the organization already struggles to decide, prioritize, and execute. It accelerates what exists, including what does not work.
The real issue is therefore not technological. It is functional and structural. AI cannot clarify vague governance. It cannot substitute for trade-offs that have never been assumed. It cannot fix incoherent processes or an organization that confuses steering with reporting. It cannot do the work of change management in place of management.
Addressing these problems, however, changes everything. Once the rules of the game are established (who decides, on what criteria, with what data, and how change is managed), AI becomes a performance multiplier. It accelerates trade-offs, improves the quality of decisions, and transforms complexity into a real competitive advantage.
The question is therefore not "How can AI bring more value, more productivity, more ROI?" but rather "Is our organization structurally ready to extract its true value?"
This is precisely the gap that Planisware's AI & Project Economy Barometer seeks to make visible. Not to judge the level of tooling, but to help organizations answer a far more structural question: where do we genuinely stand, compared to other players in our market, on the conditions that allow AI to produce lasting value? Understanding one's position and benchmarking against the market make it possible to know where to transform, and to act where the impact will be real.