Energy in Data Event - Unleashing the Power of Digital for Oil & Gas and New Energies

Organized By:

Maria Angela Capello, George Koperna, Aria Abubakar, and Susan Nash

Schedule


Sunday, 20 February 2022

RIO GRANDE EXHIBIT HALL A

Monday, 21 February 2022

RIO GRANDE EXHIBIT HALL A
GRAND BALLROOM A
 

The use of data analytics and AI has accelerated, especially since 2020, and is driving the future of energy. Numerous case histories from oil and gas companies around the world now demonstrate how data science and AI can improve efficiency and lower costs. Increasingly, these learnings from the oil and gas industry are being extended to other energy transition topics such as geothermal energy, CCS, and ESG/sustainability.

 

Speaker
Silviu Livescu
 
U TX Austin
Ken Tubman
 
SEG
Steve Goolsby
 
AAPG
Speakers
Trinity Lloyd
 
Google

This talk covers Google's sustainability journey; the creative ways we use and reuse data for deeper, layered insight; why fostering a culture of innovation and collaboration can solve big challenges; the changing landscapes in technology and energy and the exciting opportunities they present; and steps to accelerate meaningful impact and shareholder value.

MODERATOR
Matthew Reilly
 
HESS
Speakers
Haibin Di
Schlumberger

Building accurate subsurface models plays a key role in the cycle of reservoir exploration and development, but it is a complicated and time-consuming process that usually requires interdisciplinary collaboration among geophysicists, geologists, and petrophysicists to efficiently integrate various types of subsurface data, including seismic and well logs as well as intermediate interpretations of structures and stratigraphy. While plenty of effort has gone into deep learning (DL)-based subsurface data analysis, most of these approaches work directly from seismic amplitude and well logs and fail to efficiently integrate other information, such as seismic derivatives, attributes, and/or interpretations that experienced interpreters would explicitly generate for their model building. In this paper, we propose accelerating subsurface modeling of the Groningen gas field in the Netherlands by integrating a suite of DL models that enable robust well log QC, accurate relative geologic time reconstruction, automated seismic structure interpretation, and integrated property estimation and validation. The estimated density, velocity, and porosity models match well with the documented geology in the area, particularly the Zechstein salt, the complex fault system, and the Rotliegend reservoir, which verifies the potential of the proposed workflow to bridge available data with robust models for reliable subsurface interpretation and characterization.

 

Speakers
Scotty Salamoff
Bluware

Detailed geomodelling within high-resolution, three-dimensional (3D) seismic data is a time-consuming and arduous process. However, recent advances in deep learning practices are accelerating the speed at which geologic features can be mapped. While most geoscientific deep learning applications have focused on mapping features such as faults and salt, we propose a novel, interactive deep learning methodology that enables the interpreter to characterize a petroleum system by labeling and training networks on associated elements proven by exploration well data. This study uses available data from the complex Central Graben Basin within the North Sea, which contains many producing fields. The F3 seismic survey contains several seismic representations of petroleum system elements such as migration chimneys and dry gas shows. Dry gas migrates vertically through overlying strata and along faults. Well-trained deep learning networks can accurately map various petroleum elements of the basin, a task that is traditionally very challenging and time-consuming. These results were obtained in a fraction of the time required by traditional interpretation workflows and enable geoscientists to better characterize regional trends while also making observations at the petroleum system scale.

This panel discusses the way legacy data can be stored, accessed and utilized. Further, the types of legacy data are evaluated and strategies for transforming it so that it can be stored, used, and integrated with other data are considered. Sometimes legacy data can be a storehouse of value.

Keynotes
Pushpesh Sharma
Aspen Technology

I’ll start with a brief discussion of the need to utilize legacy data, mention current challenges with it, and end with some potential use cases.

 

Keynotes
John T. Foster
U TX Austin

Scientific Machine Learning or SciML is a relatively new phrase that is used to describe the intersection of data science, machine learning, and physics-based computational simulation. SciML encompasses many ideas including physics informed neural networks, universal differential equations, and the use of synthetic data generated from physical simulators in training machine learning models for rapid decision making. This talk will give an overview of SciML using simple examples and discuss recent results from our investigations using SciML in petroleum engineering applications.
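To make the physics-informed neural network (PINN) idea the talk mentions concrete, here is a minimal sketch, assuming PyTorch. The toy ODE dy/dt = -ky with y(0) = 1, the network shape, and all hyperparameters are illustrative assumptions, not the speaker's implementation.

```python
# Minimal physics-informed neural network (PINN) sketch in PyTorch.
# Illustrative only: the ODE dy/dt = -k*y with y(0) = 1 stands in for a
# physics-based simulator; names and hyperparameters are assumptions.
import torch
import torch.nn as nn

k = 2.0  # known physical decay constant

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    opt.zero_grad()
    y = net(t)
    # dy/dt via automatic differentiation
    dydt = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    physics_loss = ((dydt + k * y) ** 2).mean()      # residual of the ODE
    ic_loss = (net(torch.zeros(1, 1)) - 1.0) ** 2    # initial condition y(0) = 1
    loss = physics_loss + ic_loss.squeeze()
    loss.backward()
    opt.step()
```

The key SciML move is in the loss: instead of fitting labeled data, the network is penalized for violating the governing equation and its initial condition.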

 

Panel
Scott Granger
Applied Petroleum Technology
Hani Elshahawi
NoviDigiTech
Philippe Herve
 
SparkCognition
SABINE
 
Moderator
Jalal Jalali
 
Advanced Resources International

 

Speakers
Andrei Popa
 
Chevron
Ben Amaba
 
IBM

There has been rapid growth in the application of machine learning techniques within the oil and gas industry to solve problems ranging from geological data interpretation to production data analysis. Many of these techniques have matured considerably because of the extensive data available to test their validity on real-world problems. This session explores some of these techniques that can be adapted for CCUS.

Moderator
Matthew Reilly
 
HESS

 

Speaker
Hamed Darabi
 
QRI

Over the last decades, conventional decline curve analysis (DCA) has been a common method for forecasting hydrocarbon well production. However, this technique often struggles to predict hydrocarbon production reliably, partly because of its inability to predict multiple phases in a single model and to incorporate operational changes and subsurface fluid-flow mechanisms.

In this paper, we propose a deep learning framework that addresses the limitations of traditional parametric DCA. This was achieved by using an encoder-decoder Long Short-Term Memory (LSTM) network architecture. The model combines three types of inputs: time-variant information (i.e., historical production), static well features (e.g., geology and spacing parameters), and known-in-advance control variables (e.g., artificial lift type as a function of time); the output is a multi-step forecast of oil, gas, and water rates. We applied the model to a dataset of 213 gas wells from the Eagle Ford Basin, with the goal of predicting the gas and water rates. Production history ranges from 11 to 52 timesteps (mean 31) and is supplemented with 18 static features comprising geology, spacing, and operational attributes.

Our proposed framework demonstrated the ability to: (1) forecast future production with limited historical production, (2) predict production behavior under different control regimes and quantify the impact of planned future activities, and (3) forecast multiple phases (oil, gas, and water) simultaneously with the same model (i.e., multi-target prediction).
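To make the architecture concrete, here is a minimal sketch of an encoder-decoder LSTM that combines the three input types the abstract describes, assuming PyTorch. All names, dimensions, and the two-control-variable setup are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class EncoderDecoderDCA(nn.Module):
    """Sketch of an encoder-decoder LSTM for multi-phase rate forecasting."""
    def __init__(self, n_hist=3, n_static=18, n_ctrl=2, hidden=64, n_targets=3):
        super().__init__()
        self.encoder = nn.LSTM(n_hist, hidden, batch_first=True)
        # decoder consumes known-in-advance controls plus static context
        self.decoder = nn.LSTM(n_ctrl + n_static, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)  # oil, gas, water rates

    def forward(self, history, static, controls):
        # history: (B, T_in, n_hist); static: (B, n_static); controls: (B, T_out, n_ctrl)
        _, state = self.encoder(history)            # summarize production history
        static_seq = static.unsqueeze(1).expand(-1, controls.size(1), -1)
        dec_in = torch.cat([controls, static_seq], dim=-1)
        out, _ = self.decoder(dec_in, state)        # condition decoder on encoder state
        return self.head(out)                       # (B, T_out, n_targets)

model = EncoderDecoderDCA()
rates = model(torch.randn(4, 31, 3), torch.randn(4, 18), torch.randn(4, 12, 2))
```

The encoder compresses the production history into a hidden state; the decoder then rolls forward over the forecast horizon, conditioned on that state plus the static features and the known-in-advance controls.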

 

Speaker
Danzhu Zheng
 
Southwest Petroleum University / Southwest Oil & Gas Field Company

We propose an intelligent technique in this study for predicting the productivity of shale gas horizontal wells. The technique combines an artificial neural network (ANN) with particle swarm optimization (PSO): the ANN models the non-linear relationships, and PSO tunes the ANN architecture. Inputs to the ANN were eight geological parameters (vertical depth, pressure coefficient, porosity, TOC, total gas content, Young's modulus, Poisson's ratio, and brittleness index) and seven engineering parameters (cluster number, displacement, flowback rate, proportion of slick water, proportion of quartz sand, liquid-consuming intensity, and proppant injection intensity) obtained from 317 shale gas horizontal wells in the Sichuan Basin. The results showed that PSO was efficient for tuning the ANN architecture. Comparison of the predicted test production with actual production showed that the predictive performance of the optimum ANN model was reliable (R² = 0.847 on the testing set and R² = 0.876 on the training set). The relative variable importance was investigated using partial dependence plots; the study indicated that liquid-consuming intensity, the proportion of quartz sand, the brittleness index, and the cluster number were the four most important variables for test production prediction. This study indicates that the ANN-PSO method is capable of explicit and precise forecasting of the test production of shale gas horizontal wells and can serve as a reliable tool for quick, effective assessment of test production.
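A bare-bones illustration of the PSO-tunes-ANN idea follows, assuming scikit-learn and NumPy. Here PSO searches only the hidden-layer width on synthetic data, whereas the study tunes a fuller architecture on field data; everything below is an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=15, noise=10, random_state=0)

def fitness(h):
    """Mean CV R^2 of an MLP with `h` hidden units (to be maximized)."""
    model = MLPRegressor(hidden_layer_sizes=(int(h),), max_iter=2000, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Bare-bones PSO over a single hyperparameter (hidden-layer width).
rng = np.random.default_rng(0)
pos = rng.uniform(4, 64, size=8)          # particle positions
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(10):
    r1, r2 = rng.random(8), rng.random(8)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 4, 64)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print(f"best hidden-layer width: {int(gbest)}")
```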

This panel discusses the nature of legacy data and addresses strategies for integrating it with new data and also analyzing it using new methods for new insights and uses.

 

Keynotes
Liz Dennett
Wood Mackenzie

TBD.

 

Keynotes
Camilo Rodriguez
IHS Markit

Machine learning is commonly used to predict future outcomes from historical data. However, deeply analyzing the results of a machine learning model can reveal previously unknown data patterns that explain historical trends: what drove the more profitable lease transactions, what affected the cost of a well the most, whether production from a well came from better acreage or from engineering, and much more. Understanding how multiple factors contributed to a prediction leads to better and more profitable business decisions.

 

Panel
Jeff Chambers
Mineral Answers
Hani Elshahawi
NoviDigiTech
Mark Waddleton
First Genesis
Gary Hargraves
Capgemini
Speakers
Shuvajit Bhattacharya
 
Bureau of Economic Geology UTA

Structurally controlled hydrothermal dolomite (HTD) reservoirs are observed in many basins around the world. HTDs form adjacent to faults, which transport hydrothermal fluid from the deep subsurface. Such hydrothermal dolomites often have better reservoir quality than the precursor limestone, which makes them potential targets for carbon capture, utilization, and storage (CCUS). We present an integrated workflow for geophysical and petrophysical characterization of the hydrothermally altered carbonates in the Trenton and Black River (TBR) formations in the Michigan Basin, United States. In our workflow, we interpret a 3D seismic dataset to analyze fault distribution and styles, and integrate it with image log-based analysis and acoustic impedance from seismic inversion. Several faults and natural fractures trend NW-SE, punctuated by breaks that form en echelon reservoir compartments with varying fluid contacts. Faults serve as the conduits for hydrothermal fluid flow and dolomite formation. Seismic inversion results show a decrease in impedance and an increase in porosity in the dolomite sections compared to the laterally extensive non-reservoir limestones. Mapping the faults and porous dolomite sections reveals future carbon storage and utilization opportunities in the study area.

Speaker
Nishant Jha
Schlumberger

For several years, the industry has seen improved efficiencies derived from data gathering and semi-automation. Today, digitally enabled operations have proven to optimize the economics of service delivery, from the wellsite to the office. As we move into a future where electrification is prominent, a data-rich environment will enable us to use machine learning to develop the insights needed to elevate overall production system performance.

RIO GRANDE EXHIBIT HALL A
SABINE
 
Moderator
Josh Etkind
 
SHELL

 

Keynote
Philippe Herve
 
SparkCognition

The utilization of massive data analytics has revolutionized the way we can optimize the energy world: detecting optimization opportunities and integrity issues, and monitoring key workflows in the oil and gas sector, in processes also applicable to other areas and industrial sectors. The keynote will share case studies in which massive data analytics was foundational for critical CAPEX and OPEX decision-making. We will explore possible paths for the future of massive data analytics and how the experience gained in the energy sector may serve other areas, benefiting society and the digital world.

 

Panel
Stacy Steimel
 
Geopark
Alma Del Toro
 
Blue Wave
Felipe Lopez
 
Amazon
Michael Edwards
 
Edwards Energy Innovation Consulting
Moderator
Mauricio Araya
 
TotalEnergies

 

Speaker
Matthew Reilly
 
HESS

A method is proposed to calculate pore pressure at the bit while drilling using the data typically available on a modern drilling rig. The method uses a machine learning approach that can estimate pore pressures with accuracy equal to or greater than traditional methods, and can do so at the bit in real time.
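As a rough illustration of the kind of model such an approach might use, the sketch below fits a gradient-boosted regressor to standard rig channels, assuming scikit-learn and pandas. The file name, feature columns, and units are hypothetical; the paper's actual method is not specified here.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical rig-data channels; column names are assumptions, not the paper's.
df = pd.read_csv("drilling_data.csv")
features = ["rop", "wob", "rpm", "torque", "spp", "mud_weight", "ecd", "depth"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["pore_pressure"], test_size=0.2, shuffle=False  # keep time order
)

model = GradientBoostingRegressor(n_estimators=500, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("MAE (ppg):", mean_absolute_error(y_test, model.predict(X_test)))
```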

 

Speaker
Rehan Ali Mohammed
 
 

This paper focuses on a data-driven approach to predict borehole severity issues by use of Deep Learning (DL) algorithms.

Keynotes
Sathiya Namasivayam
TGS
Igor Kuvaev
Rogii
Panel
Patrick Ng
Real Core Energy
Kristoffer Rimaila
dGB Earth Sciences
Tyler Chessman
Microsoft
Johan Daal
Luftronix

This panel addresses when, where, and why real-time data collection can be a “must-have” aspect of a business or ongoing operation. What are some of the challenges of real-time data collection? How is the data managed as it is collected and afterward?

SABINE
 
Moderator
Aria Abubakar
 
Schlumberger

 

Speaker
Seyyed Hosseini
 
U TX BEG

TBD.

 

Speaker
William Ampomah
 
New Mexico Tech
Hassan Khaniani
 
New Mexico Tech

In order for commercial CCS to be practical, it is essential that the CO2 be monitored to ensure containment. This session highlights two important aspects of storage integrity monitoring for commercial CCS. The first is an exploration of caprock integrity using deep learning to reconstruct pressure fields in the above zone monitoring interval. The second explores the use of machine learning protocols to characterize subsurface stresses through integrating seismic and forward modeling methods.

Moderator
Mauricio Araya
 
TotalEnergies

 

Speaker
Patrick Ng
 
Real Core Energy
Tyler Chessman
Microsoft

Oil and gas production in the state of Texas is reported at the lease level, and some leases comprise hundreds of wells or more. To make sense of production history, we allocate production to individual wells using innovative data processing and machine learning techniques. Data is consumed through a flexible model, and production history can be queried with familiar tools at multiple levels (e.g., well, lease, operator, county, reservoir, formation, basin), all the way up to the entire state of Texas. Additionally, we provide 6-month look-ahead forecasts along with rich well metadata, including location, play, lateral length, formation, and associated hydraulic fracturing information.
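The multi-level querying described above amounts to rolling one well-allocated table up a location hierarchy; a pandas sketch follows. The file name and column schema are illustrative assumptions.

```python
import pandas as pd

# Hypothetical allocated-production table; the schema is illustrative only.
prod = pd.read_csv("allocated_production.csv", parse_dates=["month"])
# columns: api14, lease_id, operator, county, basin, month, oil_bbl, gas_mcf

# The same allocated table rolls up to any level of the hierarchy:
by_well = prod.groupby(["api14", "month"])[["oil_bbl", "gas_mcf"]].sum()
by_lease = prod.groupby(["lease_id", "month"])[["oil_bbl", "gas_mcf"]].sum()
by_county = prod.groupby(["county", "month"])[["oil_bbl", "gas_mcf"]].sum()
statewide = prod.groupby("month")[["oil_bbl", "gas_mcf"]].sum()
```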

 

Speaker
Joshua Adler
 
Sourcenergy

Many oilfield activities are not tracked in Texas regulatory records. In this study, new machine learning technologies for analyzing satellite imagery were applied to detect every well pad constructed in the Texas Permian Basin from March 2017 through July 2020 on a five-day image cadence.

Detections were compared to RRC drilling permit filings and spud reports to quantify the timing and probability relationships between these key oilfield indicators. We found that virtually all spuds (n=10,426) were associated with both an API-matched permit and a spatially matched well pad. Well pads preceded spuds by a mean of 145 days and a median of 39 days. Permits preceded spuds by a mean of 95 days and a median of 62 days. Notably, a well pad leading to a spud was detected ahead of the drilling permit submission in 31% of cases, by a median advantage of 117 days. 79.5% of drilling permits led to a spud, compared to 78% of well pad detections leading to a spud or permit. Permits collocated with a well pad led to a spud 14% more often, and a median of 39 days sooner, than permits without a collocated well pad. Overall, the appearance of a well pad often predicted a spud earlier than a drilling permit did, and in general pads predicted spuds with similar specificity and sensitivity to permits, while drilling permits collocated with a well pad indicated a stronger intention to drill a well much sooner than a permit alone. Satellite imagery detection of well pads is clearly an important enhancement to regulatory data for energy market participants who rely on early and accurate indications of oilfield activity.
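The lead-time statistics quoted above reduce to date arithmetic on a table of matched events; a pandas sketch follows. The file name and columns are illustrative assumptions, not the study's data.

```python
import pandas as pd

# Hypothetical merged table of matched events; column names are assumptions.
events = pd.read_csv("matched_events.csv",
                     parse_dates=["pad_detected", "permit_filed", "spud_date"])

pad_lead = (events["spud_date"] - events["pad_detected"]).dt.days
permit_lead = (events["spud_date"] - events["permit_filed"]).dt.days

print("pad -> spud:    mean %.0f d, median %.0f d" % (pad_lead.mean(), pad_lead.median()))
print("permit -> spud: mean %.0f d, median %.0f d" % (permit_lead.mean(), permit_lead.median()))
# Share of cases where the pad was detected before the permit was filed:
print("pad first: %.0f%%" % (100 * (events["pad_detected"] < events["permit_filed"]).mean()))
```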

Keynotes
David Thul
Geolumina

This panel defines automatic image classification, looks at current methods used for processing data for classification, and then discusses case studies and specific uses.

 

Keynotes
Deborah Sacrey
Auburn Energy
Panel
John Foster
U TX Austin
Sunil Garg
DataVedik
Lamar Landry
Maxar
Nathan Campbell
Juniper Unmanned
Bill Barna
Google

This presentation will show examples of how image processing with Convolutional Neural Networks can create aggressive and conservative fault models, which can then be combined with Self-Organizing Maps to create a complete picture of the subsurface. The examples come from salt domes in South Texas and Louisiana, plus extensional faulting offshore New Zealand.

RIO GRANDE EXHIBIT HALL A

The session will consist of dynamic collaboration sessions where participants from all Tracks join each other in small groups of 8 people to discuss lessons learned and list the topics they found most interesting for solving current challenges, as well as needs for future development of the community. The attendees will then discuss the top three, and then, together, discuss guiding questions, and conduct an informal SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats). A scribe will take notes and there will be a reporting out to the entire group.

The breakout groups give attendees the chance to synthesize the information they have received, and to share experiences and personal perspectives with each other.

The group will have 30 minutes of discussion, followed by a 2-5 minute presentation of the findings and suggestions per team.

Complete the following for each topic:

Challenge (can be an industry challenge, process, new technology, or new technique), using the format below:

Strengths:

Weaknesses:

Opportunities:

Threats:


RIO GRANDE EXHIBIT HALL A

Tuesday, 22 February 2022

RIO GRANDE EXHIBIT HALL A
Moderator
Alireza Haghighat
 
IHS Markit

 

Speaker
Autumn Haagsma
 
Battelle

TBD.

 

Speaker
Shuvajit Bhattacharya
 
U TX BEG

Machine learning (ML) is well suited to characterizing, describing, and forecasting the behavior of geologic carbon dioxide storage systems, where typical data-analysis challenges include incomplete data, limited data, and difficult-to-predict characteristics. In this presentation, we will describe the application of ML to geologic data integration, identification and prediction of electrofacies, and identification and prediction of vugs and fractures in carbon dioxide storage systems.

Moderator
Wenyi Hu
 
Schlumberger

 

Speaker
Keyla Gonzalez
 
Texas A&M University

The location and movement of CO2 are decisive factors for risk management and emissions reduction. Ongoing surveillance is required for safe long-term CO2 storage, which is a key challenge of carbon storage. A robust unsupervised learning workflow is developed to gain insights into the spatial and temporal CO2 distribution and movement. To that end, the unsupervised learning workflow processes crosswell seismic and electrical resistivity tomography measurements from a CO2 storage site. The analysis is divided into a static, single time-lapse analysis and a dynamic case of 91 time-series measurements. The implemented methods exploit a multilevel, spatial-temporal clustering approach to locate and separate the CO2 plume. Validating the unsupervised learning results requires an understanding of the fluid-flow mechanisms in porous media and the engineering operations associated with CO2 injection, together with several internal clustering measures and statistical tests. For the first time, a spatiotemporal evolution of CO2 clusters is presented and evaluated to better assess the stages of CO2 migration in the subsurface. The work reveals the boundaries of the CO2 distribution and the evolution of the spatial regions and time periods over which distinct characteristics of CO2 plume movement were quantified.
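One simple way to see the clustering idea: treat each image cell's time-lapse response as a feature vector and cluster cells with similar temporal behavior. The sketch below uses k-means on synthetic data, assuming scikit-learn; the study's actual multilevel spatial-temporal workflow and validation steps are richer.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for time-lapse crosswell data: one feature vector per
# image cell, stacking the property change observed at each of T surveys.
rng = np.random.default_rng(0)
n_cells, T = 5000, 91
delta = rng.normal(size=(n_cells, T))            # baseline-to-monitor differences
delta[:800] += np.linspace(0, 3, T)              # synthetic "plume" trend

X = StandardScaler().fit_transform(delta)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Cells sharing a label show a similar temporal response; the cluster whose
# mean trajectory grows over time is a candidate CO2-plume region.
for c in range(3):
    print(c, delta[labels == c].mean(axis=0)[[0, T // 2, -1]])
```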

 

Speaker
Son Phan
 
Schlumberger

Monitoring is an important task in any carbon sequestration process to ensure good trapping efficiency. This study introduces a novel deep learning algorithm that establishes the nonlinear mapping between the depth-domain property contrast and the time-domain seismic response to CO2 injection, using processed baseline data and the corresponding monitoring dataset. The trained network can predict the subsurface property change caused by the injected CO2 plume directly from any new monitoring dataset, without conventional velocity model building and imaging procedures. A new multi-branch design with different filter sizes is implemented for better feature extraction from the dipping events of the seismic gathers. Also, a custom binary cross-entropy loss function is used to tackle the imbalanced training labels. We demonstrate the successful application of this deep learning algorithm to mapping the velocity contrast of CO2 plume bodies from synthetic shot gathers based on the Sleipner field datasets.
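The paper's exact loss customization isn't given here, but a common way to handle imbalanced binary labels is to up-weight the rare positive class in the cross-entropy. A PyTorch sketch, with assumed shapes and weight:

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits, targets, pos_weight=20.0):
    """BCE that up-weights the rare positive class (e.g., plume pixels)."""
    return F.binary_cross_entropy_with_logits(
        logits, targets, pos_weight=torch.tensor(pos_weight)
    )

logits = torch.randn(8, 1, 64, 64)                    # raw network outputs
targets = (torch.rand(8, 1, 64, 64) > 0.95).float()   # ~5% positives
loss = weighted_bce(logits, targets)
```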

This paper addresses two kinds of sustainability: first, how data is used in conjunction with organizational sustainability goals, and second, how data itself is made sustainable through data standards and more.

 

Keynotes
Kim Padeletti
AWS

Do you know what value lies beneath your (sub)surface data lake? Join this talk to hear the reasons to liberate your data from silos, disparate systems, and proprietary formats into a modern energy data platform that can unlock millions in savings, efficiencies, and new insights.

 

Keynotes
Mason Dykstra
Enthought

The global demand for Artificial Intelligence (AI) based solutions is projected to grow from about $50B in 2020 to $350B+ in 2028. And yet, at numerous technical meetings of AAPG, SEG, EAGE, and related societies, the companies with AI-ready products are few, and the operators buying off-the-shelf AI-related products are even fewer. So, are we stuck in a hype cycle, or is something else going on? We see three major factors: 1) many operating (and service) companies have spun up groups of data scientists who deliver bespoke AI solutions internally, but data labeling still requires scientific-domain experts, making these efforts hard to scale; 2) AI solutions are not yet available as ready, off-the-shelf products with the domain-specific user interfaces companies need to deploy them easily, which complicates implementation and penetration; and 3) company leadership has not yet figured out how to effectively leverage AI/ML in a value-adding sense. These factors all point to a market in a nascent state: neither the operators nor the product providers have figured out how to deliver and deploy optimal solutions. We will discuss these observations and what they mean for the evolution of AI in the energy industry over the next few years.

 

Panel
Joseph Batir
Ph.D.
Petrolern LLC

There were significant improvements to the geothermal map from 1992 to 2004, with an exponential step change from 2004 to 2011, leading to the “discovery” of the West Virginia thermal anomaly. This discovery was made possible by recording existing bottom hole temperature data in a different way, one more meaningful for geothermal exploration. Similar work was performed to compile the National Geothermal Data System (NGDS) for geothermal exploration, completed in 2014. More recent reexamination of old data suggests greater geothermal resource potential in Texas and in many parts of the United States, although the full strength of the NGDS has yet to be used: because the data are not in a single common format, the NGDS has not been analyzed at a national scale for new insight. This example shows:

1) Companies can utilize existing data (theirs and public data) to search for green energy opportunities.

2) Consistent data standards could facilitate and expedite new geothermal discoveries.

 

Panel
Joseph Batir
Ph.D.
Petrolern LLC
Milly Wright
Chemostrat
Michael Edwards
Edwards Energy Innovation Consulting
Autumn Haagsma
 
Battelle
Speakers
Becky Thomas
Chief Operating Officer
i2kConnect

Learn how upstream operators are gaining value from unstructured content enrichment and data extraction to improve their workflows for exploration, drilling, and planning.

Speakers
Francois Laborie
President
Cognite North America

Robots can increase productivity and safety and remove the repetitiveness and tedium of many tasks workers perform every day in the energy industry. Yet hardware alone isn’t enough to meet these challenges. Without the right industrial software, a robot is just another siloed system. To incorporate robots into a digital transformation strategy, industrial companies require software that provides easy access to data for a growing list of data consumers (e.g., control room operators, field workers, data scientists, solution engineers, and more) in a form they can understand and act on.

In this session, you will learn how pairing robots with the right industrial software can extend a human workforce, improving the optimization and efficiency of your operations, making them safer and more sustainable, reducing costs, and more; and the benefits of building a robotics ecosystem (scaling vertically and horizontally) in your industrial operations. Plus, meet Spot, the four-legged robot dog developed by Boston Dynamics, and hear how Spot + Cognite Data Fusion supports autonomous inspection of remote facilities, maintenance of digital twins, high-quality data capture, and more.

SABINE
 
Moderator
Lorena Moscardelli
Leader of the State of Texas Advanced Resource Recovery (STARR)
Bureau of Economic Geology
Keynote
Anna Scott
 
Project Canary

 

Panel
Wafik Beydoun
 
IOGP
Kurt Strack
 
KMS Technologies
Andres Chavarría
Optasense
Michael Edwards
 
Edwards Energy Innovation Consulting

Reducing methane emissions remains a critical challenge for the energy industry; continuous emissions monitoring can offer a powerful tool for producers looking to identify and reduce intermittent emissions.

Moderator
Wenyi Hu
 
Schlumberger

 

Speaker
Bernard Laugier
 
Seisnetics

Seismic data remain a pillar of subsurface modeling and of understanding the potential for transitioning from oil and gas production to applications such as CO2 storage and geothermal projects. However, interpretation is a biased and time-consuming process, forcing geoscientists to spend more energy picking horizons and building models than interpreting the significance of the results and their implications for ultimate field development, CO2 storage, and geothermal project evaluation.

In this paper, we detail the use of a new unsupervised artificial intelligence, based on a genetic algorithm, to process seismic data automatically, in an unbiased way and in record time. We applied this approach to the Groningen project, using data available online from the multiple seismic campaigns.

After 3 minutes of processing by the artificial intelligence, we could display all horizons on the seismic data and visualize attributes for all subsurface layers, limited only by seismic signal penetration, and build all the geological models necessary to localize CO2 storage areas within the Zechstein salt and the Rotliegend reservoir and to evaluate geothermal projects across multiple geothermal systems. The geothermal sources lie in the same reservoirs/aquifers that host the oil and gas accumulations: Cenozoic, Upper Jurassic–Lower Cretaceous, Triassic, and Rotliegend reservoirs. Additionally, the as-yet-unproven hydrocarbon plays in the Lower Carboniferous (Dinantian) limestones delivered geothermal heat in key geothermal systems.

We successfully demonstrate the use of such AI on the Groningen case and pave the way for geoscientists to focus their attention on visualizing and interpreting the significance of the results generated by this global, fully automatic, and unbiased approach for applications such as CO2 storage and geothermal projects.

 

Speaker
Srikanth Ryali
 
Schlumberger

The oil and gas industry uses machine learning extensively to build innovative workflows and products that address a variety of scientific and technical problems. In recent times, rapid advances in Natural Language Processing (NLP) have renewed interest in the analysis of massive unstructured O&G document sources such as well reports, drilling reports, and well log reports. A significant opportunity now exists to extract valuable information from such voluminous and diverse data sources by developing specific NLP-based applications. In this paper, we discuss the research and development of four such NLP-based applications of interest to the O&G community: attribute extraction, semantic information retrieval, natural language query translation, and technical document classification.

Keynotes
Raoul LeBlanc
IHS Markit

This panel addresses the importance of understanding where and how your data belongs. Strategies for making sense of data will be addressed, along with factors such as domain expertise, new applications, cloud-computing, real-world cases for training, and more.

 

Keynotes
Bryan McDowell
Sabata
Panel
Scotty Salamoff
Bluware
Bilu Cherian
Premier Oilfield
Alejandro Valenciano
TGS
Isaac Aviles
Schlumberger
Mohamed Sidahmed
 
Shell

See how integrating dark datasets leads to a more holistic view of data availability and quality in the O&G business.

Speaker
Benmadi Milad
 
The University of Oklahoma

This study focuses on defining local-regional opportunities for carbon dioxide (CO₂) storage in an underground saline formation in Oklahoma. CO₂ sequestration is one of the major processes used to reduce carbon-emissions intensity, injecting CO₂ into a suitable geological subsurface formation that meets specific trapping-mechanism requirements. Thus, this study characterizes the Arbuckle Group as a CO₂ store, with trapping mechanisms including storage potential (porous and permeable rock), an impermeable caprock above the CO₂ reservoir, and sufficient depth. These mechanisms ensure safe, permanent storage and prevent CO₂ from re-entering the atmosphere. This paper integrates multi-scale scientific data from core and well logs for stratigraphic and petrophysical analyses to estimate the storage capacity of the Arbuckle CO₂ reservoir in Osage County. First, Arbuckle stratigraphic thickness was determined from 124 wells. Then, lithology and electrofacies were determined from Arbuckle core, well logs, and machine learning techniques such as principal component analysis (PCA), the elbow method, and self-organizing maps (SOM). Afterward, total porosity, saturation, and permeability were determined at well locations. Finally, CO₂ storage capacity was calculated volumetrically at well sites.

The presence of karst features in the Arbuckle Group may provide a significant amount of porosity and permeability. The average porosity, permeability, and thickness of the Arbuckle are 8%, 10 mD, and 2,430 ft, respectively. The Woodford Shale is present in all studied wells and may act as an impermeable seal and caprock above the Arbuckle Group (the CO₂ reservoir). Therefore, the Arbuckle saline aquifer in Osage County, Oklahoma, could be an ideal candidate for CO₂ sequestration.
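The PCA-plus-elbow step of that workflow is easy to sketch; below is an illustrative version with scikit-learn on placeholder data (the SOM step, often done with a library such as MiniSom, is omitted). Curve names and cluster counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# logs: one row per depth sample, columns = well-log curves (GR, RHOB, NPHI, ...)
logs = np.random.default_rng(0).normal(size=(2000, 6))   # placeholder data

Z = StandardScaler().fit_transform(logs)
scores = PCA(n_components=3).fit_transform(Z)   # reduce correlated curves

# Elbow method: inspect inertia vs. k and pick the bend.
for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scores)
    print(k, round(km.inertia_, 1))

electrofacies = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
```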

RIO GRANDE EXHIBIT HALL A
Moderator
Manoj Valluri
 
Advanced Resources International

 

Speaker
Shahab Mohaghegh
 
WVU

Artificial intelligence and machine learning will significantly address climate change in the next several decades. This important contribution has much to do with their engineering application, which is grounded in data and avoids marketing, business, and political ideas. This session will explore common problems with modeling CO2 injection and how smart proxy models can abate such issues.

Moderator
Wenyi Hu
 
Schlumberger

 

Speaker
Hamed Darabi
 
Quantum Reservoir Impact

Over recent years, we have witnessed advancements in digital transformation and advanced analytics in the oil and gas industry. Data from oil and gas operations have been key fuel for these advancements, feeding multiple asset workflows including the evaluation of drilling and completions operations. With the adoption of machine learning and artificial intelligence, the industry can now apply different techniques for unstructured data retrieval and processing to gain actionable insights. This work presents a novel deep learning approach to processing unstructured drilling and completion reports. This was achieved by correlating drilling and completion keywords extracted with a word-embedding model against numerical information in the daily reports, such as depth, hole size, casing size, mud weight, weight on bit, and revolutions per minute, retrieved via regular expressions. A deep learning classifier was also trained to label each activity description with the drilling or completion phase, productive (P) or non-productive (NP) time, and (if applicable) the reason for non-productive time (NPT). The proposed methodology was applied to a ~100-well sample data set comprising ~40,000 activity descriptions and ~1,000,000 keywords/phrases. First, the numerical information (e.g., depth, hole size, casing size, mud weight, weight on bit, and revolutions per minute) was extracted from the activity descriptions. Independently, a group of drilling and completion engineers manually extracted this numerical information from a small set (~1,000) randomly drawn from all the activity descriptions. Based on the results, the proposed algorithm extracted numerical information with more than 95% accuracy, and the deep learning classifier achieved blind-test accuracies of 91.3%, 92.1%, and 89.6% for the drilling/completion phase, P/NP, and NPT type, respectively. This intelligent processing of unstructured drilling and completions data allows an efficiency gain of more than 100 times compared to the manual process, which, when integrated with other data sources, improves operational processes, well design, and resource allocation.
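The regular-expression side of such a pipeline might look like the sketch below; the patterns, units, and field names are hypothetical stand-ins, since the paper's actual expressions aren't given here.

```python
import re

# Hypothetical patterns for pulling numbers out of daily activity text.
PATTERNS = {
    "depth_ft": re.compile(r"(?:at|to|@)\s*([\d,]+)\s*ft", re.I),
    "hole_size_in": re.compile(r'([\d.]+)\s*(?:in|")\s*hole', re.I),
    "mud_weight_ppg": re.compile(r"mud\s*(?:weight|wt)\s*([\d.]+)\s*ppg", re.I),
    "wob_klbs": re.compile(r"wob\s*([\d.]+)\s*k?lbs?", re.I),
}

def extract(activity: str) -> dict:
    """Return whichever numerical fields are found in one activity description."""
    out = {}
    for name, pat in PATTERNS.items():
        m = pat.search(activity)
        if m:
            out[name] = float(m.group(1).replace(",", ""))
    return out

print(extract('Drilled 8.5" hole to 12,450 ft, mud weight 9.8 ppg, WOB 25 klbs'))
```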

 

Speaker
Matt Fry
 
CGG

Over two million files have been collected from tens of thousands of wells drilled during decades of exploration in the Gulf of Mexico (GOM). These hold vast amounts of knowledge on the subsurface geology and petroleum systems, but would take decades of human time to uncover. Here we have employed a range of machine learning techniques, including natural language processing, image classification, and cluster analysis, to automate the classification and extraction of geochemistry and PVT files across three protraction areas in months. As well as representing a step increase in data volume, this approach brings disparate sample data from untagged legacy formats in entirely different subfolders together into a single consistent database.

Keynotes
David Tonner
DWL
Lamar Landry
Maxar

Operations that are augmented by means of automation, robotics, and other methods are discussed here. We will discuss use cases and explore where augmented operations are best applied. The future of augmented operations and automation will also be discussed.

 

Panel
Aaron Lazarus
Pioneer
Dawn Porter
Stratum Reservoir
Nathan Campbell
Juniper Unmanned
Johan Daal
Luftronix
SABINE
 
Moderator
Jalal Jalali
 
Advanced Resources International

 

Speaker
Mohamed Sidahmed
 
Shell

From subsurface to surface facilities and transportation, it is one big, complex system. Each component can be modeled at a different timescale and resolution, and integrating all of these models so that they capture their effects on the whole system is challenging. This session explores some of the ideas and approaches that can be considered when addressing integration and coupling.

Moderator
Wenyi Hu
 
Schlumberger

 

Speaker
Atul Laxman Katole
 
Schlumberger

A vast amount of historical well log data are inaccessible to state-of-the-art artificial intelligence and machine-learning (AI/ML) techniques because they are stored as raster images in TIFF, PDF, or JPEG format. The authors propose a fully automated deep-learning-based digitization engine to transform well log data embedded in raster images into digital data saved in JSON, CSV, or LAS format. A typical raster image captures the well log information in its constituent document header, tables, plot segments, depth tracks, and log header sections. The proposed hierarchical approach trains deep-learning-based raster segmentation models to first extract the plot segments, depth track, and log headers from the raster images. Another set of deep-learning-based image segmentation models extracts the metadata from the log header. Subsequently, a pipeline based on a novel two-stage conditional Generative Adversarial Network (cGAN) extracts the curve pixels from the plot segments. The final digitization step transforms the extracted curve pixels into numerical logs using the extracted metadata, in a fully automated fashion. The curve pixel extraction achieves a normalized mean absolute error of less than 0.011 per curve in a plot segment. The manual method requires several months to digitize thousands of raster images, which can now be accomplished in a few hours with the proposed approach, a productivity gain of more than 100 times.
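The final pixels-to-log step is, at heart, a coordinate transform driven by header metadata. A sketch under the assumption of linear depth and value scales (the function name and toy numbers are illustrative):

```python
import numpy as np

def pixels_to_log(rows, cols, depth_top, depth_bottom, img_h,
                  scale_left, scale_right, img_w):
    """Map curve-pixel coordinates to (depth, value) pairs using the grid
    metadata read from the log header. Linear scales assumed for brevity."""
    depth = depth_top + (rows / (img_h - 1)) * (depth_bottom - depth_top)
    value = scale_left + (cols / (img_w - 1)) * (scale_right - scale_left)
    order = np.argsort(depth)                 # sort samples by depth
    return depth[order], value[order]

# Toy example: a 1000-px-tall raster spanning 5000-6000 ft, GR scaled 0-150 API.
rows = np.array([0, 250, 500, 999])
cols = np.array([120, 300, 80, 410])
depth, gr = pixels_to_log(rows, cols, 5000, 6000, 1000, 0, 150, 512)
```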

 

Speaker
Oriane Nguyen-Thuyet
 
Agile Data Decisions

A vast amount of historical well log data are inaccessible to state-of-the-art artificial intelligence and machine-learning (AI/ML) techniques because they are stored as raster images in TIFF, PDF, or JPEG format. The authors propose a fully automated deep-learning-based digitization engine to transform well log data embedded in raster images into digital data saved in JSON, CSV, or LAS format. A typical raster image captures the well log information in its constituent document header, tables, plot segments, depth tracks, and log header sections. The proposed hierarchical approach trains deep-learning-based raster segmentation models to first extract the plot segments, depth track, and log headers from the raster images. Another set of deep-learning-based image segmentation models extracts the metadata from the log header. Subsequently, a pipeline based on a novel two-stage conditional Generative Adversarial Network (cGAN) extracts the curve pixels from the plot segments. The final digitization step transforms the extracted curve pixels into numerical logs using the extracted metadata, in a fully automated fashion. The curve pixel extraction achieves a normalized mean absolute error of less than 0.011 per curve in a plot segment. The manual method requires several months to digitize thousands of raster images, which can now be accomplished in a few hours with the proposed approach, a productivity gain of more than 100 times.

Keynotes
Bill Barna
Google

New technologies for monitoring safety and compliance to regulations will be discussed, along with the data architecture and management strategies that must be used in conjunction with them in order to achieve the best results.

 

Keynotes
Andres Chavarría
Optasense
Panel
Ge Jin
Colorado School of Mines
Arvind Sharma
Schlumberger

 

Panel
Ge Jin
Colorado School of Mines

Distributed fiber-optic sensing (DFOS) technology can measure temperature, strain, and vibration along the same sensing cable simultaneously. This capability allows the same cable to serve multiple sensing goals. Integrating the various sensing results is key to improving monitoring performance and increasing the value of monitoring systems.

 

Panelist
Doug Freud
Vice President of Data Science
SAP
Mohamed Sidahmed
 
Shell

 

Speaker
Larry N. Scott
 
CS Subsea Inc.

Ultrahigh resolution (UHR) marine 3D streamer seismic data collected with a unique acquisition configuration known as P-Cable is presented and analyzed with respect to its suitability for carbon capture and storage (CCS) applications. Data examples showing the ability of this technology to image small-scale faults and gas-saturated formations with a clarity that rivals conventional seismic methods are presented to highlight both the advantages and limitations of this type of ultrahigh resolution, short-offset, low-fold data. To achieve this enhanced level of resolution, a tremendous amount of both seismic and navigation data is collected over relatively small areas of investigation, resulting in challenges in real-time coverage monitoring, data analysis, and data processing. This presentation will review some of the big data challenges we have encountered as we have worked to adapt P-Cable technology to emerging applications such as CCS.

 

Speaker
Rick Schrynemeeckers
 
Amplified Geochemical Imaging

Current offshore CO2 detection and monitoring techniques fall into two categories: geophysical methods and CO2 testing methods associated with existing wells. Geophysical methods answer very important questions, but they do not answer the critical question of whether CO2 is actually leaking; they only highlight where CO2 may be leaking. Most offshore CO2 monitoring methods are performed at or in existing wells, so the ability to monitor offshore CO2 is constrained by well locations, and the closest CO2 monitoring well may be miles away from potential spill points. Nor do these methods address possible leakage around old plugged and abandoned wells.

A new technique is available for offshore CO2 detection and monitoring that overcomes many of these problems. Passive hydrocarbon/CO2 sensors can now be attached to the bottom of Ocean Bottom Seismic (OBS) nodes or Autonomous Underwater Vehicles (AUVs). The weight of the OBS node or AUV is sufficient to ensure proper contact of the passive sensor with the seabed, thereby establishing effective contact with seepage. As CO2, CO2 tracers, and hydrocarbon molecules migrate upwards, they enter the passive membrane and concentrate on the specially engineered sorbents within, resulting in parts-per-billion (ppb) level detection. The result is high resolution seismic data with CO2, CO2 tracer, and hydrocarbon data acquired simultaneously over co-located sites. The ability to overlay co-located data sets allows faults and/or natural fractures to be aligned with CO2 leakage data to a high degree of accuracy.

The session will consist of dynamic collaboration sessions where participants from all Tracks join each other in small groups of 8 people to discuss lessons learned and list the topics they found most interesting for solving current challenges, as well as needs for future development of the community. The attendees will then discuss the top three, and then, together, discuss guiding questions, and conduct an informal SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats). A scribe will take notes and there will be a reporting out to the entire group.

The breakout groups give attendees the chance to synthesize the information they have received, and to share experiences and personal perspectives with each other.

The group will have 30 minutes of discussion, followed by a 2-5 minute presentation of the findings and suggestions per team.

Complete the following for each topic:

Challenge (can be an industry challenge, process, new technology, or new technique), using the format below:

Strengths:

Weaknesses:

Opportunities:

Threats:


RIO GRANDE EXHIBIT HALL A

Wednesday, 23 February 2022

RIO GRANDE EXHIBIT HALL A
Speaker
Wafik Beydoun
 
IOGP

There is little debate that the O&G industry has embraced the energy transition, with several companies and operators setting net-zero emissions targets and pursuing low carbon projects. The industry remains very much co-opetitive (collaborative and competitive) during this transition, driving collaboration, innovation, and continuous improvement to simplify, standardize, and seek efficiencies in operations all across its value chain. Focusing digital innovations on O&G operations (and related good practices and procedures) could be an effective strategy to demonstrate the power of digital in the midst of these profound and complex changes.

Moderator

 

Speaker
Connor Burt
 
Petrosys

Businesses frequently make strategic decisions based on their analysis of structured data. Underpinning this is a large volume of unstructured data, so ultimately an understanding of both is necessary; for example, reviewing previous decisions requires knowledge of the inputs that were used in the interpretations.

To achieve this, an in-depth understanding of the project data across the G&G application portfolio is needed in the first instance. This can be achieved by using connectors to extract detailed metadata from all geoscience applications regardless of vendor.

 

Speaker
Sashi Gunturu
 
Patrabytes

The energy industry is going through a significant transformation while managing fossil fuels and migrating to renewable energy. Energy companies are gathering vast amounts of data from an increasing number of sources in an ever-expanding number of formats. Multi-disciplinary teams work to interpret that data to understand subsurface complexities, evaluate pipeline health, and determine optimal maintenance schedules to improve operational efficiency, reduce costs, and improve safety. In the current price-sensitive environment, asset teams are under pressure to drill faster and optimize production. The US alone has over 2.5M miles of natural gas pipelines and over 300,000 miles of liquid pipelines. Operators have to address pipeline integrity issues across this massive infrastructure and comply with regulatory agencies to ensure reliable hydrocarbon transmission. Refineries have to address condition monitoring and maintenance issues. As economies target net-zero emissions in the near future, digital solutions play a key role in renewable energy, such as electric vehicle fleet management, battery optimization, and improving the efficiency of wind turbines.

Keynotes
Peter Duncan
MicroSeismic
Mark Waddleton
First Genesis

This panel addresses how best to manage data for enhancing profitability and providing real-time reports of conditions and alerts that may trigger a decision-tree framework for responding. Examples of using data for enhancing profitability are presented and discussed.

Panel
Ben Burke
Transitional Energy
Ryan Jarvis
ExxonMobil
Scott Granger
Applied Petroleum Technology
Mark Waddleton
First Genesis
Speaker
Doug Freud
Vice President of Data Science
SAP

Got M(AI)L? How to get the M(AI)L delivered and have the operational AI and ML execution be a wild success. Oh yeah, wells and THAT data, and video, and too much data, and how can this be used to enhance sustainability? Between Doug and Scott, over 100 years of solving data problems, and of course they deliver the M(AI)L!

Presenter
Nathan Campbell
Juniper Unmanned

Nathan Campbell from Juniper Unmanned will take you through what it is like to collect magnetic data with a drone. This presentation will encompass the entire process from data collection to the creation and analysis of the processed data. Applications include pipeline detection, abandoned well detection, mineral exploration and fault detection.

Keynote
Josh Etkind
 
SHELL

As the oil and gas industry rises to our shared challenge to lead through the energy transition, we’ll need to develop leaders with a new ‘leadership stack’ of skills and capabilities that integrates traditional physics-based engineering with systems thinking, energy system transformation, deep data science and digital product ownership, empathetic stakeholder engagement, and breakthrough creativity. How will we step boldly into these uncomfortable and novel spaces and stretch our concept of what it means to be a natural resources energy engineer or geoscientist? In this talk, the Gaia Sustainability Program will be introduced, and you will be welcomed into a vibrant global community of thought leaders and like-minded professionals. A strategic programming framework will be shared, and a deep dive into one element, “Measuring what Matters,” will substantiate the approach. You’ll also be connected with useful tools and resources to support your upskilling and professional growth.

 

Speaker
Nate Suurmeyer
 
Studio X

The subsurface is ripe for disruptive technologies from data automation to augmented reality. In this session, we will discuss how AI is changing the future of work by combining the real world with the digital world, all done through crowdsourcing.

Moderator
Sunil Garg
 
DataVedik

 

Speaker
Sunil Garg
 
DataVedik

As oil and gas companies embark on their digital transformations, it is imperative that they put together a data and analytics strategy that enables and accelerates these initiatives. An effective data and analytics strategy covers the steps in the data lifecycle: data acquisition, quality control and improvement, transformation, storage, retention policies, governance, access control, visualization, machine learning, and availability to workflows, facilitated by semi- or fully automated data pipelines, with the goal of making faster and more reliable decisions. This session will include a presentation on the concepts of effective data and analytics strategies, followed by some practical applications of machine learning in the oil and gas domain.

 

Speaker
Vinicius Martins Botelho
 
Petrobras

Brazil’s National Petroleum Agency regulations oblige any company that wishes to drill for oil within the nation’s borders to regularly submit geological data about the subsurface. One required data type is photos of rock thin sections. A thin section is the result of a thicker portion of rock being ground down and polished so it can be studied and analyzed. During a typical exploration campaign, several thousand rock thin sections are generated, photographed, and analyzed. Although they are stored in an electronic document management system, several manual steps are involved. This work adapted a convolutional neural network from a well-known image classification problem to assess the compliance of over 24,000 photos sent semiannually to the agency.
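A standard way to adapt a well-known image classifier to a task like this is transfer learning; below is a hedged sketch with torchvision's ResNet-18, fine-tuning only a new classification head. The dataset folder layout and class names are assumptions, not Petrobras' setup.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: thin_sections/{compliant,non_compliant}/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
ds = datasets.ImageFolder("thin_sections", transform=tfm)
loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: compliant / not

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune head only
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:                   # one illustrative epoch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```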

Are there times when technology does not work because the data is incompatible? Does data cleaning and harmonization take more time than the analysis itself? This panel identifies the reasons why it is important to establish standards for data management, and who decides what the standards are and how they are implemented.

 

Keynotes
Ali Hussein
SkyGrid | SparkCognition
Sashi Gunturu
Petrabytes

Keeping unmanned aircraft safe, and society at large safe from unmanned vehicles, requires an end-to-end airspace management system that uses AI and blockchain to intelligently route and synchronize vehicles and to identify unforeseen trends in airspace, vehicle, and environment data. A modern solution enables monitoring, prediction, and adaptation to changing conditions. Fully automated missions are now possible, enabling perimeter security, maintenance inspections, search and rescue operations, and leak detection at any oil and gas or energy facility.

 

Keynotes
Sashi Gunturu
 
Patrabytes

As energy companies transition to clean energy solutions, this talk discusses an enhanced data warehouse architecture, the "Lakehouse," which utilizes cloud and distributed computing for data and AI solutions.

Panel
Julian Chenin
Bluware
David Leary
DeepTime Digital Earth
Benin Chelinsky
IHS Markit
Zhiyong He
Zetaware
Felipe Lopez
 
Amazon
Speaker
Marc Gallant
 
CANVASS AI

Digital transformation of the oil and gas industry could unlock around $1 trillion of value. But of the 36% of oil and gas companies that have already invested in big data and analytics, only 13% use the insights from this technology. Oil and gas companies need not fall into this trap of lost value; instead, they can apply the best practices that ensure their AI aspirations turn into reality.

Moderator

 

Speaker
Deborah Sacrey
 
Auburn Energy

In 2019, a comprehensive machine learning study to determine reservoir characteristics was applied to a then recently discovered field in northwestern Colombia, SA. It was determined that the key wells drilled in the field were finding a compartmentalized, debris flow reservoir. A new well was drilled in 2020 some distance away from the original field and found a similar reservoir situation. However, upon testing, the probable pay zones in the well quickly depleted, even though mapping within the 3D showed considerable areal extent. The client once again asked for help using Machine Learning techniques to determine the cause of the depletion. Several methods of machine learning and petrophysical analyses were employed to provide an answer to the problem.

RIO GRANDE EXHIBIT HALL A

The session will consist of dynamic collaboration sessions where participants from all Tracks join each other in small groups of 8 people to discuss lessons learned and list the topics they found most interesting for solving current challenges, as well as needs for future development of the community. The attendees will then discuss the top three, and then, together, discuss guiding questions, and conduct an informal SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats). A scribe will take notes and there will be a reporting out to the entire group.

The breakout groups give attendees the chance to synthesize the information they have received, and to share experiences and personal perspectives with each other.

The group will have 30 minutes of discussion, followed by a 2-5 minute presentation of the findings and suggestions per team.

Complete the following for each topic:

Challenge (can be an industry challenge, process, new technology, or new technique), using the format below:

Strengths:

Weaknesses:

Opportunities:

Threats:


* NOTE: All times are CT
