Posted on Semiconductor Engineering
By: Christophe Begue
The limitations of traditional collaboration
For decades, semiconductor collaboration has been predominantly human-centric, operating through two primary modes:

Stage-Gate Handoffs: The foundry-fabless relationship exemplifies this model. Foundries collect hundreds of terabytes of wafer fabrication data, distill it into gigabyte-sized Process Design Kits (PDKs), and hand off these summarized packages at defined milestones. Information flows linearly, with clear versioning (0.5 PDK versus 1.0 PDK) that both parties understand.
Crisis-Driven Engagement: When problems arise, collaboration intensifies. More data gets shared, response times accelerate, and organizations work together to resolve immediate issues. But this reactive approach doesn’t scale to the complexity emerging in modern semiconductor manufacturing.
This model worked well in a monolithic silicon era, where the wafer fab represented the single critical control point. Once dies passed wafer sort with acceptable yields, the complexity largely ended. But that world is disappearing.
The 3D revolution and supply chain complexity
Today’s semiconductor roadmap, driven by innovations from organizations like IMEC and AMD, is fundamentally three-dimensional:
- Front-end 3D integration through advanced node scaling
- Back-end 3D packaging with chiplet architectures
- Multi-vendor chiplet integration requiring unprecedented coordination
- Explosion of test insertion points throughout the manufacturing flow
- Tighter assembly process tolerances across multiple parties

This complexity manifests in several ways. Assembly operations now rival wafer fabs as critical control points. Known-good-die from multiple suppliers become interdependent. Organic substrates and their components join the list of yield-critical elements. There is no longer a single point of control; every link in the chain matters equally.
Meanwhile, operations leaders face a productivity paradox: they’re being asked to manage exponentially more complexity without adding headcount, based on expectations that “AI will make you more efficient.”
The three pillars of AI-driven collaboration
Kibarian outlined three foundational elements required for this transformation: a secure data infrastructure, automated orchestration and AI agents.

1. Secure data infrastructure
PDF Solutions’ acquisition of SecureWise (originally started by IBM and previously owned by Telit) provides a glimpse into the scale of data exchange already occurring. The network now transmits exabytes of data annually between fabs and OEMs, a volume that began exploding with EUV adoption in 2019 and accelerated through COVID-19.
The infrastructure currently connects more than 300 manufacturing locations, more than 100 OEMs, and a growing number of OSATs.
But this isn’t simply about transferring files. The network operates as a private infrastructure off the public Internet, with end-to-end secure access across private networks, virus scanning for equipment software updates and granular permission controls down to individual engineers, specific tools, and even particular buttons on equipment interfaces.
The value proposition splits clearly: fabs gain uptime and operational efficiency while managing security protocols rigorously, and OEMs increase service revenue and can scale software-driven services globally without proportional headcount increases. Kibarian likened the contrast to the difference between Alibaba Bank’s automated lending (exponential growth) and Bank of America’s human-intermediated model (linear growth).
Critically, the network includes edge computing capabilities at test cells, enabling data feed-forward architectures. Features extracted from upstream test data can inform downstream decisions in real-time, a capability increasingly deployed by fabless companies on their most complex products.
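The feed-forward idea can be illustrated with a minimal sketch: an edge process summarizes upstream wafer-sort results into a few features, and a downstream test cell uses those features to pick its test plan. The feature names, threshold, and plan labels here are invented for illustration, not the actual Exensio edge interface.

```python
# Hypothetical sketch of data feed-forward between test insertions.
# Field names ("pass", "total") and the 0.90 threshold are assumptions.

def extract_upstream_features(wafer_sort_results: list[dict]) -> dict:
    """Summarize per-lot wafer-sort results into compact features."""
    yields = [r["pass"] / r["total"] for r in wafer_sort_results]
    return {
        "mean_yield": sum(yields) / len(yields),
        "min_yield": min(yields),
    }

def choose_final_test_plan(features: dict) -> str:
    """Let the downstream insertion react to upstream features."""
    if features["min_yield"] < 0.90:  # marginal material: test harder
        return "extended"
    return "standard"

lots = [{"pass": 980, "total": 1000}, {"pass": 880, "total": 1000}]
features = extract_upstream_features(lots)
print(choose_final_test_plan(features))  # marginal lot present -> "extended"
```

The point of the sketch is the direction of flow: small, derived features travel downstream in real time, rather than the raw upstream dataset.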
2. Automated orchestration
Data alone provides limited value; context and structure are needed to transform it into actionable intelligence. Orchestration addresses a fundamental challenge: different systems speak different languages and serve different organizational functions.
The way work-in-progress (WIP) appears in an ERP system differs fundamentally from how it is represented in a Manufacturing Execution System (MES). An engineer in design shouldn’t have direct ERP access, yet they need costing information derived from ERP data. Finance needs to understand manufacturing realities without accessing engineering or manufacturing systems directly.
One example is PDF Solutions’ partnership with SAP, ongoing for approximately four years. ERP systems need real-time manufacturing data (actual tool time, gas consumption, and process flows) to generate accurate costing estimates. This can only be achieved by orchestrating real-time manufacturing data so it is properly aligned, summarized, and processed into the type of product costing information SAP can absorb. Conversely, AI applications used in test need shipping information from ERP systems to optimize data feed-forward strategies.
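The kind of alignment and summarization involved can be sketched as a simple roll-up: raw MES-style events carrying tool time and gas consumption are aggregated into a per-lot cost record an ERP system could absorb. The event fields, tool names, and rates below are invented placeholders, not SAP or PDF Solutions data structures.

```python
# Hypothetical orchestration roll-up: MES events -> per-lot cost summary.
# Rates and field names are assumptions for illustration only.
from collections import defaultdict

TOOL_RATE_PER_MIN = {"ETCH-01": 4.0, "CVD-02": 6.5}  # assumed $/minute
GAS_RATE_PER_LITER = 0.12                            # assumed $/liter

def summarize_costs(mes_events: list[dict]) -> dict:
    """Aggregate tool time and gas consumption into cost per lot."""
    cost = defaultdict(float)
    for e in mes_events:
        cost[e["lot"]] += e["tool_minutes"] * TOOL_RATE_PER_MIN[e["tool"]]
        cost[e["lot"]] += e["gas_liters"] * GAS_RATE_PER_LITER
    return dict(cost)

events = [
    {"lot": "LOT42", "tool": "ETCH-01", "tool_minutes": 30, "gas_liters": 50},
    {"lot": "LOT42", "tool": "CVD-02", "tool_minutes": 10, "gas_liters": 200},
]
print(summarize_costs(events))  # one consolidated cost figure per lot
```

The engineering work in a real deployment lies in the alignment step this sketch assumes away: matching lots, time windows, and units across systems before any roll-up is meaningful.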
Orchestration enables several critical capabilities:
Product Costing: Assessing real cost by monitoring actual consumption of resources and equipment usage.
Quality Management and RMA: Integrating factory genealogy data with field failure information in real-time, enabling faster root cause identification and material containment.
WIP Management: Real-time production control with MES integration, identifying bottlenecks and improving on-time order fulfillment.
The approach uses low-code/no-code infrastructure, making it accessible beyond specialized data science teams.
3. AI agents with human governance
Current analytics utilization is strikingly low. PDF Solutions’ analysis of its cloud-based Exensio system reveals that fabs examine only about 5% of stored manufacturing data, and that figure generously represents the average; actual utilization may skew lower. The reason: data scientists spend approximately 80% of their time on data wrangling, aligning disparate sources and reconstructing context information that exists elsewhere in MES or ERP systems but isn’t readily accessible.

AI agents offer a path forward, but with an important governance structure:
Humans Define the Rules: Through orchestrations, organizations establish what data can be shared across organizational boundaries, what quality standards must be met before analysis proceeds, and what collaboration protocols govern cross-company interactions. This is particularly important for data quality, which proves context-dependent—data sufficient for single-operation analysis may be inadequate when combining information across organizational units.
AI Executes at Scale: Within these boundaries, agents operate autonomously. They’re awake 24/7, can examine 100% of the data 100% of the time, and handle high-volume tasks without human bottlenecks.
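The division of labor between human-defined rules and autonomous execution can be sketched as a declarative policy that an agent must consult before sharing data across an organizational boundary. The policy structure, party names, and completeness metric are illustrative assumptions, not a real governance API.

```python
# Hypothetical governance sketch: humans define the policy as data,
# the agent enforces it on every action. All names are invented.

POLICY = {
    # (source org, destination org) -> what may cross the boundary
    ("fab", "osat"): {
        "allowed_fields": {"lot_id", "bin_summary"},
        "min_completeness": 0.95,  # data-quality bar before analysis/sharing
    },
}

def agent_share(src: str, dst: str, record: dict, completeness: float) -> dict:
    """Share only policy-allowed fields, and only if data quality suffices."""
    rule = POLICY.get((src, dst))
    if rule is None or completeness < rule["min_completeness"]:
        raise PermissionError("sharing not permitted under current policy")
    return {k: v for k, v in record.items() if k in rule["allowed_fields"]}

record = {"lot_id": "LOT42", "bin_summary": {"bin1": 950}, "raw_parametrics": []}
print(agent_share("fab", "osat", record, completeness=0.99))
```

Because the rules live outside the agent as data, humans can tighten or relax them without touching the agent itself, which is what lets the agent run unattended at scale.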
A recent example from Renesas illustrates the progression. Tracking approximately 2,000 products, Renesas deployed Guided Analytics to automatically detect yield deviations, perform first-level root cause diagnostics, collect and clean relevant data, and present it to engineers for deeper analysis. The system automated up to 90% of the analysis work, enabling engineers to monitor all products across all operations continuously. This represents “first-level” agent deployment, making engineers more productive.
The next evolution, already underway, removes humans from the loop entirely. In predictive testing, predictive burn-in, and predictive binning applications, agents at one manufacturing stage extract features and communicate directly with agents at downstream stages, adjusting test intensity or burn-in duration without human intervention.
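A stage-to-stage exchange of this kind can be sketched in a few lines: the upstream agent emits a risk score derived from a process metric, and the downstream agent maps that score to a burn-in duration with no human in the loop. The risk model, baseline, and duration table are invented placeholders, not a published predictive burn-in algorithm.

```python
# Hypothetical agent-to-agent sketch for predictive burn-in.
# The defect-density model and hour values are assumptions.

def upstream_agent(defect_density: float, baseline: float = 0.05) -> float:
    """Emit a 0..1 risk score from an upstream process metric."""
    return min(1.0, defect_density / (2 * baseline))

def downstream_agent(risk: float) -> int:
    """Choose burn-in hours from the upstream risk score."""
    if risk < 0.25:
        return 4   # low risk: shortened burn-in
    if risk < 0.75:
        return 12  # nominal burn-in
    return 24      # high risk: extended burn-in

risk = upstream_agent(defect_density=0.08)  # 0.08 / 0.10 = 0.8
print(downstream_agent(risk))               # high risk -> 24 hours
```

The same pattern applies to predictive testing and predictive binning: the message between stages is a compact prediction, and the downstream action is adjusted automatically within human-defined bounds.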
Strategic implications
The transformation Kibarian describes isn’t merely technological; it’s organizational and economic:
Supply Chain Accountability: Fabless companies that once disclaimed yield responsibility (“the foundry handles yield; we’re just designers”) now have operations SVPs accountable for yield outcomes in complex, multi-party supply chains. There’s no single critical control point to blame or rely upon.
Operational Expectations: The industry increasingly expects operations to absorb complexity growth without proportional headcount increases. According to HFS Research, six out of ten enterprises plan to replace people-run services with software-run services before 2030.
Strategic Infrastructure: Governments now view semiconductors as strategic assets, driving fab construction in regions without deep semiconductor expertise. Remote connectivity and AI-driven operations become essential for running these facilities effectively.
Competitive Dynamics: Organizations that successfully implement AI-driven collaboration can scale exponentially rather than linearly. The industry’s consolidation, reducing connection points to hundreds rather than thousands, actually facilitates this transformation.

Conclusion
The semiconductor industry has always thrived on collaboration and innovation. What’s changing is the context in which that collaboration occurs. Moving from periodic, human-mediated stage gates to continuous, AI-orchestrated operations represents as significant a shift as the move from 2D to 3D architectures in the technology itself, the latter exponentially increasing the need for the former.
The infrastructure is emerging: secure data networks spanning hundreds of sites, orchestration layers bridging organizational systems, and AI agents operating within human-defined governance frameworks. The question facing industry leaders isn’t whether this transformation will occur, but how quickly their organizations can adapt to operate in this new paradigm—and whether they’ll be positioned among those scaling exponentially or struggling to keep pace with linear growth models in an exponential world.
The keynote presentation slides can be downloaded here and the video viewed here.