How Chevron is using gen AI to strike oil


Oil and gas operations generate an enormous amount of data — a seismic survey in New Mexico, for instance, can produce a single file that is a petabyte in size. 

“To turn that into an image that you can make a decision with is a 100 exaflop operation,” Bill Braun, Chevron CIO, told the audience at this year’s VB Transform. “It’s an incredible amount of compute.”

To support such data processing, the multinational oil and gas company has been working with GPUs since 2008 — long before many other industries required, or even considered, that type of processing power for complex workloads. 
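For a rough sense of what a 100-exaflop job means in practice, here is a minimal back-of-envelope sketch; the sustained cluster throughput is an assumed figure for illustration, not a number from Chevron.

```python
# Back-of-envelope: how long does a 100-exaflop imaging job take?
# Assumption (not from the article): a cluster sustaining 10 petaflops.
TOTAL_FLOPS = 100e18             # 100 exaflops of total work, per Braun
SUSTAINED_FLOPS_PER_SEC = 10e15  # assumed 10 PFLOP/s sustained throughput

seconds = TOTAL_FLOPS / SUSTAINED_FLOPS_PER_SEC
print(f"~{seconds / 3600:.1f} hours of wall-clock time")  # ~2.8 hours
```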


Now, Chevron is taking advantage of the latest generative AI tools to derive even more insights, and value, from its massive datasets. 

“AI is a perfect match for the established, large-scale enterprise with huge datasets — that is exactly the tool we need,” said Braun. 

Deriving insights from Permian Basin data

But it’s not just individual companies that are sitting on enormous (and ever-growing) data troves — Braun pointed to the Permian Basin Oil and Gas Project in west Texas and southeastern New Mexico. 

Chevron is one of the largest landholders in the Basin, which is roughly 250 miles wide and 300 miles long. With an estimated 20 billion barrels remaining, it accounts for about 40% of oil production and 15% of natural gas production in the U.S. 

“They’ve been a huge part of the U.S. production story over the last decade or so,” said Braun. 

He noted that the “real gem” is that the Railroad Commission of Texas requires all operators to publish everything that they’re doing at the site. 

“Everything’s a public record,” said Braun. “It’s available for you, it’s available for your competition.”

Gen AI can be beneficial here, as it can analyze enormous amounts of data and quickly provide insights. 

Overall, the publicly available datasets “turned into a chance to learn from your competition, and if you’re not doing that they’re learning from you,” said Braun. “It’s an enormous accelerant to the way that everyone learned from each other.”

Enabling proactive collaboration, keeping humans safe

Chevron operates in a large, distributed area, and while there is good data in certain places, “you don’t have it across the entire expanse,” Braun noted. But gen AI can be layered over those various data points to fill in gaps on the geology between them. 

“It’s the perfect application to fill in the rest of the model,” he said. 
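As an illustration of that gap-filling idea only (this is not Chevron's model), sparse measurements can be interpolated across a basin with a spatial model such as a Gaussian process; generative or learned models would play a similar role at far larger scale. The coordinates, depths and kernel settings below are all hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical sparse measurements: (x_km, y_km) -> formation depth (m)
known_xy = np.array([[0, 0], [40, 10], [15, 60], [80, 30], [55, 75]], dtype=float)
known_depth = np.array([2400.0, 2550.0, 2380.0, 2610.0, 2450.0])

# Fit a smooth spatial model to the points we do have
gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0), normalize_y=True)
gp.fit(known_xy, known_depth)

# Predict the geology in the gap between surveyed locations, with an uncertainty estimate
query = np.array([[45.0, 45.0]])
mean, std = gp.predict(query, return_std=True)
print(f"Estimated depth {mean[0]:.0f} m +/- {std[0]:.0f} m")
```

The uncertainty estimate is the useful part: it tells you where the model is filling in confidently and where more data is needed.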

That gap-filling can be helpful, for instance, with wells, which can run several miles in length. Other companies might be working in areas around those wells, and gen AI could flag potential interference so that human users can proactively reach out to prevent disruption to either party, Braun explained.
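A hedged sketch of the kind of proximity check that could drive such an alert; the well paths and the threshold are made up, and real interference analysis involves far more than straight-line distance.

```python
import numpy as np

def min_distance_m(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Smallest point-to-point distance between two sampled well paths (meters)."""
    # Brute-force pairwise distances between surveyed points on each well
    diffs = path_a[:, None, :] - path_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

# Hypothetical wells sampled as (x, y, depth) points in meters
our_well = np.array([[0, 0, 3000], [1500, 50, 3010], [3000, 120, 3020]], dtype=float)
offset_well = np.array([[100, 400, 3005], [1600, 380, 3015], [3100, 350, 3025]], dtype=float)

THRESHOLD_M = 300  # assumed alert radius, purely illustrative
if min_distance_m(our_well, offset_well) < THRESHOLD_M:
    print("Potential interference: notify the offset operator before work proceeds")
```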

Chevron also uses large language models (LLMs) to craft engineering standards, specifications, safety bulletins and other alerts, he said, and its AI scientists are constantly fine-tuning the models. 

“If it’s supposed to be six exact constructions, we don’t want our generative AI to get creative there and come up with 12,” he said. “Those have to be tuned out really tight.”
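A minimal sketch of that kind of tight control, using the OpenAI Python client purely as a stand-in (the article does not say which models or vendors Chevron uses, and the model name and prompts are placeholders): keep temperature at zero and treat the controlled standard, not the model, as the source of truth.

```python
from openai import OpenAI  # stand-in client; not necessarily Chevron's stack

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",    # placeholder model name
    temperature=0,     # no creativity: deterministic, literal drafting
    messages=[
        {"role": "system",
         "content": "Draft safety bulletins strictly from the supplied standard. "
                    "Do not invent constructions beyond those listed."},
        {"role": "user", "content": "Summarize the six approved valve constructions."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # in practice, the draft would still be checked against the controlled document before release
```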

Braun’s team is also evaluating the best ways to inform models when it comes to geology and equipment so that, for instance, AI could generate a guess on where the next basin might be. 

The company is beginning to use robotic models, as well, and Braun sees a “tremendous application” when it comes to safety. 

“The idea is to have robots do the dangerous job, and the humans are safely staying away and ensuring the task is being performed well,” he said. “It actually can be lower-cost and lower-liability by having the robot do it.”

Blurring the lines between previously disparate teams

Teams on the ground and teams in the office have often been siloed in the energy sector — both physically and digitally. Chevron has worked hard to try to bridge this divide, Braun explained. The company has embedded teams together to blur the lines. 

“Those to me are the highest performing teams, is when the machine learning engineer is talking about a problem with a pump, and the mechanical engineer is talking about a problem with the algorithm and the API, you can’t tell who’s who,” he said. 

A few years ago, the company also began sending engineers back to school for advanced degrees in data science and systems engineering to refresh and update their skills. Data scientists — or “digital scholars” — are always embedded with work teams “to act as a catalyst for working differently.”

“We crossed that traverse in terms of our maturity,” said Braun. “We started with small wins and kept going.” 

Synthetic data, digital twins helping to reduce carbon outputs

Of course, in energy, as in every sector, there is huge concern around environmental impact. Carbon sequestration — or the process of capturing, removing and permanently storing CO2 — is increasingly coming into play here, Braun explained.  

Chevron has some of the largest carbon sequestration facilities on the planet, Braun contended. However, the process is still evolving, and the industry doesn’t completely know how the reservoirs holding captured carbon will perform over time. Chevron has been performing digital twin simulations to help ensure that carbon stays where it’s supposed to, and generating synthetic data to make those predictions.
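As a toy illustration of the synthetic-data idea (not Chevron's simulator, and the physics is deliberately simplistic): sample uncertain reservoir properties many times, run a cheap proxy model, and record the outcomes as a dataset for training or stress-testing predictive models.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N_SCENARIOS = 10_000

# Sample uncertain reservoir properties (distributions are illustrative guesses)
permeability_md = rng.lognormal(mean=3.0, sigma=0.5, size=N_SCENARIOS)  # millidarcy
porosity = rng.uniform(0.10, 0.25, size=N_SCENARIOS)
injection_rate = rng.normal(1.0, 0.1, size=N_SCENARIOS)                 # Mt CO2 / year

# Toy proxy for long-term pressure buildup: higher permeability and porosity relieve pressure
pressure_buildup = 10.0 * injection_rate / (np.sqrt(permeability_md) * porosity)

# The synthetic dataset: inputs plus simulated outcome, ready for model training
synthetic = np.column_stack([permeability_md, porosity, injection_rate, pressure_buildup])
print(f"{(pressure_buildup > 20).mean():.1%} of scenarios exceed the assumed pressure limit (arbitrary units)")
```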

The incredible amount of energy used by data centers and AI is also an important consideration, Braun noted. How to manage those often remote locations “as cleanly as possible is always where the conversation starts,” he said.


