Michelin's AIOps strategy did not start with a bold executive mandate or a sweeping transformation roadmap. Instead, it emerged from growing operational pressure inside the company's China IT operations team, where incident volumes continued to rise despite mature monitoring, telemetry, and cloud foundations.
Rather than waiting for top-down approval, Michelin’s team chose to experiment quietly, proving value through working systems before seeking formal endorsement. That decision shaped an AIOps journey focused on pragmatism instead of vision statements.



Why Michelin needed a different AIOps approach
The challenge was not a lack of tooling. Michelin already operated established incident management processes, cloud-hosted platforms, and standardized telemetry pipelines. However, manual checks and reactive workflows kept expanding alongside system complexity.
Matthew Liu, an architect in Michelin’s China IT operations group, recognized that incremental process optimization alone would not solve the problem. The timing for change was obvious, even if executive sponsorship was not yet in place.
This gap between operational maturity and operational load became the catalyst for action.
Michelin AIOps strategy begins at the grassroots
Unlike many enterprise AIOps initiatives, Michelin’s effort began with individual conviction. Liu built working prototypes before asking for permission, focusing on problems operators faced every day.
One early prototype supported database administrators by automating health checks and slow query analysis. Another assisted Kubernetes administrators with routine operational tasks. Both tools demonstrated immediate value without requiring large-scale platform changes.
As a result, conversations shifted from abstract potential to tangible outcomes.
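To make the database prototype concrete, the sketch below shows the kind of automated slow-query check it describes, assuming a MySQL-compatible database with performance_schema enabled and the pymysql client; the connection details, thresholds, and query are illustrative, not Michelin's actual implementation.

```python
# Minimal sketch of an automated slow-query check (illustrative only).
# Assumes a MySQL-compatible database with performance_schema enabled and the
# pymysql client; connection details and thresholds are placeholders.
import pymysql

SLOW_QUERY_THRESHOLD_MS = 500  # flag statement digests averaging above this latency

# AVG_TIMER_WAIT is reported in picoseconds; dividing by 1e9 yields milliseconds.
QUERY = """
    SELECT DIGEST_TEXT AS digest_text,
           COUNT_STAR AS executions,
           AVG_TIMER_WAIT / 1000000000 AS avg_latency_ms
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY AVG_TIMER_WAIT DESC
    LIMIT 20
"""

def check_slow_queries(host: str, user: str, password: str) -> list[dict]:
    """Return the top statement digests whose average latency exceeds the threshold."""
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cursor:
            cursor.execute(QUERY)
            rows = cursor.fetchall()
    finally:
        conn.close()
    return [row for row in rows if row["avg_latency_ms"] > SLOW_QUERY_THRESHOLD_MS]

if __name__ == "__main__":
    for offender in check_slow_queries("db.example.internal", "readonly_user", "change-me"):
        print(f"{offender['avg_latency_ms']:.1f} ms avg: {(offender['digest_text'] or '')[:80]}")
```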
Building with low code and modular design
The team selected Dify, a low-code AI application platform, and deployed it inside Michelin's existing AliCloud landing zone. Integration relied on Anthropic's Model Context Protocol (MCP), which allowed AI agents to interact with external systems such as ServiceNow, GitHub, and cloud resources.
This modular architecture separated three layers: an application-builder layer that could be swapped out, a reasoning layer that remained flexible, and a tool-integration layer that used MCP servers to connect to operational systems.
Because of this separation, the platform aligned with existing security controls while remaining adaptable.
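As an illustration of the tool-integration layer, a check like the one above could be exposed to an agent as an MCP tool. The sketch below uses the FastMCP helper from the official MCP Python SDK; the tool names, arguments, and canned return values are hypothetical rather than taken from Michelin's platform.

```python
# Illustrative MCP tool server (not Michelin's actual implementation).
# Uses the FastMCP helper from the official MCP Python SDK; the tool names,
# arguments, and canned return values below are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ops-tools")

@mcp.tool()
def database_health_check(instance: str) -> dict:
    """Run a basic health check against the named database instance."""
    # A real server would call monitoring or database APIs here; this returns
    # a canned result so the sketch stays self-contained.
    return {"instance": instance, "status": "ok", "replication_lag_seconds": 0}

@mcp.tool()
def list_slow_queries(instance: str, limit: int = 10) -> list[str]:
    """Return the slowest recent query digests for the instance."""
    return [f"digest-{i} on {instance}" for i in range(limit)]

if __name__ == "__main__":
    # Serves the tools over stdio so an agent platform can attach to them.
    mcp.run()
```

Because the agent only sees the tools that are explicitly declared, the builder and reasoning layers can change without touching the operational integrations.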
Overcoming organisational resistance
Early demonstrations attracted interest but also surfaced concerns. Teams hesitated to share productivity metrics, fearing that automation might lead to headcount reductions or unrealistic performance targets.
Without reliable KPIs, Liu reframed the initiative. Instead of positioning AIOps as an efficiency mandate, he presented Dify as a low-code exploration environment where operations teams could encode their own knowledge into workflows.
This shift reduced anxiety while building AIOps literacy across teams.
Governance before scale
IT management initially questioned whether experimentation solved real operational problems. However, they approved the underlying architecture because it respected governance boundaries.
Data classification rules were defined upfront. Core business secrets were excluded from AI workflows. The platform reused existing security components inside the validated cloud environment.
Only after these safeguards were in place did leadership request clearer value articulation.
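Michelin has not published its classification logic, but a guardrail of this kind can be as simple as a pre-filter that refuses to pass data above an allowed sensitivity level into any AI workflow, roughly as in the hypothetical sketch below; the labels, ordering, and policy are assumptions.

```python
# Hypothetical data-classification gate (illustrative; the labels, ordering,
# and policy are assumptions, not Michelin's published rules).
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    BUSINESS_SECRET = 3  # never allowed into AI workflows

MAX_ALLOWED = Sensitivity.INTERNAL

def allowed_for_ai(label: Sensitivity) -> bool:
    """Return True if data with this label may be sent to an AI workflow."""
    return label <= MAX_ALLOWED

def submit_to_workflow(payload: str, label: Sensitivity) -> str:
    if not allowed_for_ai(label):
        raise PermissionError(f"{label.name} data is excluded from AI workflows")
    # ... forward payload to the low-code platform here ...
    return f"submitted {len(payload)} characters"

# Example: internal telemetry passes, core business secrets are rejected.
print(submit_to_workflow("disk usage report", Sensitivity.INTERNAL))
try:
    submit_to_workflow("unreleased pricing data", Sensitivity.BUSINESS_SECRET)
except PermissionError as err:
    print(err)
```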
Flagship use cases prove value
Two use cases eventually became focal points. Externally, the team worked with a vendor that performed periodic manual checks. AI agents automated these checks, reducing repetitive effort and error risk.
Internally, the database administrator chatbot gained traction after DBAs saw it working in practice. Adoption followed demonstration, not mandates.
These successes helped bridge the gap between experimentation and production readiness.
Industry perspective on AIOps adoption
Analysts consistently warn that AIOps success depends on organisational alignment, not just tooling. Research highlights that effective deployments involve multiple departments and address real operational pain points.
Skills shortages remain a challenge. Many enterprises lack deep machine learning expertise, making platforms that reduce complexity more attractive.
Michelin’s approach aligned with these realities by lowering the barrier to experimentation.
Security concerns around MCP adoption
Model Context Protocol adoption is accelerating rapidly across the industry. Enterprises see value in connecting AI agents to operational tools. However, security researchers have raised concerns about prompt injection, permission escalation, and tool impersonation.
Industry leaders argue that MCP must become governable and observable at enterprise scale. Without controls, organisations risk creating shadow agents similar to early shadow IT patterns.
Michelin addressed this risk by embedding MCP usage inside existing governance structures.
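The article does not detail those controls, but one common pattern is to route every agent tool call through an allowlist and an audit log before it reaches an MCP server, along the lines of the hypothetical sketch below; the roles, tool names, and logging sink are assumptions.

```python
# Hypothetical governance wrapper for agent tool calls (illustrative only).
# Real deployments would integrate with the organisation's IAM and logging
# stack; the allowlist, roles, and audit sink here are assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("mcp-audit")

# Which MCP tools each agent role may invoke.
TOOL_ALLOWLIST = {
    "dba-assistant": {"database_health_check", "list_slow_queries"},
    "k8s-assistant": {"list_pods", "restart_deployment"},
}

def authorize_tool_call(role: str, tool: str, arguments: dict) -> None:
    """Raise if the role may not call the tool; otherwise write an audit record."""
    if tool not in TOOL_ALLOWLIST.get(role, set()):
        audit_log.warning("DENIED role=%s tool=%s", role, tool)
        raise PermissionError(f"role {role!r} is not allowed to call {tool!r}")
    audit_log.info("ALLOWED %s", json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "arguments": arguments,
    }))

# Example: a DBA assistant may run health checks but not restart workloads.
authorize_tool_call("dba-assistant", "database_health_check", {"instance": "orders-db"})
try:
    authorize_tool_call("dba-assistant", "restart_deployment", {"name": "checkout"})
except PermissionError as err:
    print(err)
```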
Why the Michelin AIOps strategy stands out
The defining trait of the Michelin AIOps strategy is restraint. The team avoided grand claims and instead focused on learning what worked through safe, inexpensive experimentation.
By aligning local innovation with global IT governance, Michelin avoided fragmentation while still solving real operational problems.
This balance allowed progress without disruption.
Final thoughts
The Michelin AIOps strategy demonstrates that successful AIOps adoption rarely starts with bold vision decks. It begins with practical experiments, careful governance, and trust built through working systems.
Rather than chasing vendor promises, Michelin validated AIOps incrementally. The result is a platform that is already running in production, aligned with security requirements, and grounded in real operational needs.
For enterprises navigating AIOps adoption, the lesson is clear. Start small, learn fast, and scale only after value is proven.