This is an abridged version of a keynote presentation delivered at the CIIT Latam Congress 2026 in Lima, Peru.
One hundred years ago, Nikola Tesla described a world where wireless technology would convert the entire earth into a giant brain, where people would communicate instantly across thousands of miles using devices small enough to carry in a vest pocket. Audiences at the time found this almost impossible to picture. We are living through a comparable moment with artificial intelligence, and the mining and energy industries have the opportunity to position themselves at the leading edge of what comes next.
The scale of AI’s advancement in the past decade alone is difficult to grasp. Nvidia’s first deep learning system, the DGX-1, delivered to a startup called OpenAI in 2016, produced roughly 1 petaFLOPS of compute. The Rubin NVL72 system available in 2026 delivers more than 3,600 petaFLOPS. Training time for models has compressed from months to days. Cost per token has dropped from over $200 per million tokens in 2016 to around $0.10 today. That reduction of more than 99.9% in the cost of generating intelligence changes the economic calculus for every industry generating large volumes of operational data, and few industries generate more of it than mining.
Intelligence as a manufactured resource
The most useful frame for understanding AI’s trajectory is to treat intelligence as a manufactured resource. Tokens, the units of AI output, are the raw material. Scale is the competitive advantage, in much the same way throughput and yield define competitiveness in ore processing. AI factories, as Nvidia describes them, manage the full AI lifecycle from data ingestion to inference serving, producing intelligence from proprietary data in a continuous loop. For mining and energy companies sitting on decades of geological records, sensor logs, maintenance histories, and operational data, building that loop is the core strategic question of this decade.
What early movers have already demonstrated
Leading companies have already shown that AI works at industrial scale. Boliden’s Self-Learning Concentrator deploys AI-driven process control and predictive maintenance across its processing operations, improving recovery rates and reducing energy intensity. BHP takes an integrated approach that links exploration data directly to processing decisions: an AI-enabled data platform accelerates geological interpretation, digital twins use ore condition data to inform processing choices in near-real time, and computer vision systems detect hazardous materials before they trigger unplanned downtime. Vale’s Mine of the Future concept centers on smart operations, using autonomous equipment and IoT systems to optimize production and safety across the mine.
In the energy sector, Zanskar’s AI engine for geothermal exploration aggregates geological and geophysical data before any on-the-ground validation, identifying previously invisible sites. In 2025, this approach uncovered a previously unknown geothermal resource in Nevada. Duke Energy’s self-healing grid uses AI to reroute power around faults in under 60 seconds, often before a human operator sees the alert, restoring supply to 75% of affected customers and helping avoid more than 280,000 extended outages in Florida in 2025 alone. Vistra built a neural network that learned from two years of historical operational data to continuously optimize heat rate efficiency across its thermal power plant fleet. A 1% efficiency improvement translates to millions of dollars in fuel costs and thousands of tons of CO2, so the stakes on either side of that number are substantial.
The gaps that matter
The harder conversation in industrial AI concerns the distance between a working prediction and reliable performance at scale – two problems in particular, safety and reasoning reliability, compound each other. There is also a concern that proprietary techniques or trade secrets will “leak” into public training models.
A trust deficit forms when an AI system cannot explain its recommendations. Operators are right to flag safety and cyber-physical security, as the attack surface for hackers moves from software to physical equipment and infrastructure. When an AI system encounters a physical scenario underrepresented in its training data, the consequences of a wrong recommendation involve machinery, infrastructure, and people, not a correctable software output. Human-in-the-loop architectures and digital sandboxes for simulation testing address part of this, and deployment at scale will need adequate investment to ensure safety and build trust.
Large language models can struggle with basic spatial reasoning and common-sense logistics, which raises an obvious question about their reliability in complex tasks like managing the fluid dynamics of a concentrator plant. Two emerging approaches help: physics-informed AI constrains model outputs to domain-specific physical laws, while retrieval-augmented generation grounds responses in verified domain documents rather than general-purpose inference alone. Both require investment to implement well.
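To make the physics-constraint idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the function name, the pump-capacity figure, and the flow bounds are illustrative assumptions, not any vendor's actual implementation – but it shows the core move: a model's raw prediction is forced back inside physically possible limits before it reaches a control system.

```python
# Minimal sketch of physics-constrained post-processing (illustrative only).
# The function name, units, and capacity figures are hypothetical assumptions,
# not drawn from any specific industrial system.

def constrain_to_physics(predicted_flow_m3h: float,
                         pump_capacity_m3h: float = 500.0,
                         min_flow_m3h: float = 0.0) -> float:
    """Clamp a predicted slurry flow rate to what the pump can physically deliver.

    A data-driven model may extrapolate to impossible values; this layer
    guarantees the output respects hard physical limits regardless.
    """
    return max(min_flow_m3h, min(predicted_flow_m3h, pump_capacity_m3h))

# A prediction beyond pump capacity is pulled back to the physical ceiling,
# and a negative (impossible) flow is raised to zero.
print(constrain_to_physics(640.0))  # exceeds capacity -> 500.0
print(constrain_to_physics(-12.0))  # negative flow is impossible -> 0.0
```

Real physics-informed systems embed such constraints inside the model itself (for example, as loss terms enforcing conservation laws), but even a simple output clamp like this removes an entire class of physically impossible recommendations.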
Interpretability sits underneath both concerns. Operators who cannot understand why an AI system made a recommendation will override it or ignore it, regardless of the system’s track record. Building explainability into industrial AI is a prerequisite for adoption, and it deserves as much engineering attention as accuracy.
Accelerating from pilot to performance
Mining companies don’t need to feel overwhelmed by the AI journey in front of them. Targeting high-yield data streams and building modular proofs of value, each solving a single well-defined problem with the data pipeline as a reusable foundation, consistently outperforms attempts at comprehensive transformation. Running AI systems in shadow mode alongside human operators, validating recommendations before granting authority to act, lets organizations build trust without betting the operation on an untested system. The target? Pilot to performance in 90 days.
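The shadow-mode pattern can be sketched in a few lines of Python. This is an illustrative assumption of how such a gate might work – the class name, the 95% agreement threshold, and the sample minimum are all hypothetical – but it captures the mechanism: the AI's recommendations are logged and scored against what operators actually did, and it earns autonomous authority only after sustained agreement.

```python
# Illustrative sketch of shadow-mode validation (hypothetical names and thresholds).
# The AI recommends in parallel with human operators; its recommendations are
# scored against operator decisions, and it gains authority to act only after
# a sustained agreement rate over a minimum number of decisions.

class ShadowMode:
    def __init__(self, agreement_threshold: float = 0.95, min_samples: int = 100):
        self.threshold = agreement_threshold
        self.min_samples = min_samples
        self.agreements = 0
        self.total = 0

    def record(self, ai_recommendation: str, operator_action: str) -> None:
        """Log one decision pair: what the AI advised vs. what the operator did."""
        self.total += 1
        if ai_recommendation == operator_action:
            self.agreements += 1

    def can_act_autonomously(self) -> bool:
        """Grant authority only after enough decisions with high agreement."""
        if self.total < self.min_samples:
            return False
        return self.agreements / self.total >= self.threshold
```

A production version would compare outcomes rather than raw agreement (the AI may be right when the operator is not), but the gating principle is the same: authority is earned against a track record, not assumed.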
Companies investing now in connectivity, data infrastructure, and organizational fluency will reach that target with the advantage of accumulated experience. Those waiting will find the gap harder to close with each passing cycle of model improvement.
One framing choice shapes how well AI adoption lands internally. Leadership teams that position these tools as amplifiers of operator expertise, extending what skilled people can perceive and decide rather than replacing their judgment, tend to see faster uptake and fewer workarounds.
The expertise that makes these operations function is still the thing being leveraged – AI is simply the tool to give it wider reach.