
Uncharted Waters: Machine Learning in Geoscience

Duncan discusses the promise of machine learning in the field of geoscience.

Dr. Duncan Irving
13 July 2017 | 5 min read

I’m still riding the tide of new information that swept over me at this year’s EAGE (European Association of Geoscientists and Engineers) conference in Paris last month. This wave of change was obvious to those who looked, but many still didn’t see it for what it is: the future. I’m talking about machine learning (ML) and all those other very contemporary techniques that sit in the nebulous world of computing unicorns and rainbows, like AI and deep learning.

Three motley crews...

When it comes to machine learning, I found three distinct crews at EAGE: the actively engaged, the sceptics, and the ambivalent. Those actively engaging in ML are exactly the kinds of (relatively) early adopters who participated in the excellent Subsurface Hackathon, hosted by Total the weekend before EAGE. Vocal sceptics of these new techniques, I would argue, are resistant because of the perceived disruption to software sales, rather than an active aversion to the techniques themselves. Those who are plain ambivalent about machine learning are the same people ignoring the data sea change: carrying on as they were until somebody hands over a better widget.

Within this last group I include EAGE themselves; their long-standing inertia in this area finally gave way to a workshop and a couple of sessions on machine learning this year.

Yet, with a chunk of the expo floor and technical sessions still given over to high-performance computing (HPC) drag racing and algorithm juicing for the inversion modellers, isn’t it time to embrace the new ML computing paradigm too? Maybe next year ...

Less is more

Why should we care? Well, in 2017 the oil industry is all about doing more with less. The industry employs some very smart people, but in a lot of subsurface workflows we’re asking them to use their faculties and judgement to answer some very basic questions: “How similar is this thing to this thing?” (correlation between wells) and “Where have I seen this before?” (rock characteristic, or facies, associations). The first is a statistics question; the second is a data mining question. To my mind, the data mining question is the most economically significant.
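
To make that second question concrete, here is a minimal sketch in Python of the data mining flavour: training a classifier to assign facies labels from basic well-log curves. Everything in it is an illustrative assumption rather than a real workflow; the curve names (GR, RHOB, NPHI), the synthetic values and the crude sand/shale labels all stand in for the logs and geologists’ facies picks you would pull from your own wells.

```python
# A minimal sketch, assuming interpreted facies picks already exist as labels.
# GR/RHOB/NPHI and the synthetic values below are placeholders, not real well data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Toy "well logs": one row per depth sample.
logs = pd.DataFrame({
    "GR": rng.normal(75, 25, n),        # gamma ray (API)
    "RHOB": rng.normal(2.45, 0.15, n),  # bulk density (g/cc)
    "NPHI": rng.normal(0.25, 0.08, n),  # neutron porosity (v/v)
})

# Toy facies labels standing in for a geologist's picks (crude gamma-ray cut-off).
facies = np.where(logs["GR"] < 70, "sand", "shale")

X_train, X_test, y_train, y_test = train_test_split(
    logs, facies, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# "Where have I seen this before?", answered per depth sample on held-out data.
print(classification_report(y_test, clf.predict(X_test)))
```

The point is not the particular model; it is that once logs and labels sit together at scale, “Where have I seen this before?” becomes a routine classification task.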

"Machine learning is not ‘science-y’ enough"

But first, the similarity thing. My colleague Jane has just blogged about the potential for, and the limitations of, machine learning in the subsurface world. The technique is still very much in its infancy because there is a lot of resistance to the approach: machine learning is not ‘science-y’ enough. The demographics of the domain data (i.e. its statistical texture) are poorly understood, and the data is typically either locked away in application projects or of dubious quality. All of this impedes the training of machine learning algorithms. We are currently incapable of training our ML algorithms to the extent that Google et al. have trained theirs. They can identify cats, cars, trees and people (so self-driving cars don’t hit them), and they have a good understanding of how to assemble the components of their ML capabilities.


Hell or high water?

However, if you look at the reports from the hackathon, you’ll see that a team of near-strangers trained a machine learning algorithm to identify and classify direct reflections and multiples in synthetic shot gathers in less than a weekend. They did all of that using open-source tools.

This exemplifies the work of those using machine learning techniques on seismic and petrophysical data to answer “How similar is this thing to this thing?” without having to do something as mundane as clicking a mouse button hundreds of times to digitise geological features.

We were training algorithms to extract features for us because we (i.e. the data scientists) could assemble the low-level parts ourselves and let the computer do the boring stuff.
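
For a flavour of what that hackathon-style exercise looks like in code, here is a toy sketch, again in Python: label short amplitude windows from synthetic traces as primaries or multiples and train a classifier to tell them apart. The Ricker wavelet, the crude way multiples are faked (attenuated, polarity-reversed copies of the primary) and the choice of model are all assumptions for illustration, not the hackathon team’s actual workflow.

```python
# A toy sketch, not the hackathon code: classify windowed wavelets as
# primary reflections (0) or multiples (1). All of the synthetics are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def ricker(freq_hz, length_s=0.064, dt_s=0.002):
    """A standard Ricker wavelet sampled every dt_s seconds."""
    t = np.arange(-length_s / 2, length_s / 2, dt_s)
    arg = (np.pi * freq_hz * t) ** 2
    return (1 - 2 * arg) * np.exp(-arg)

wavelet = ricker(30.0)

def make_window(is_multiple):
    """One labelled window: a primary, or a crude stand-in for a multiple
    (attenuated and polarity-reversed), each with additive noise."""
    w = -0.5 * wavelet if is_multiple else wavelet
    return w + rng.normal(0.0, 0.05, w.size)

# A balanced labelled set of 1,000 windows.
y = np.array([i % 2 for i in range(1000)])
X = np.vstack([make_window(bool(label)) for label in y])

clf = GradientBoostingClassifier(random_state=0)
print(f"mean cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```

Real shot gathers would need proper event picking, windowing and far more careful labels, but the skeleton of label, train and cross-validate is exactly the kind of thing a weekend of open-source tools can stand up.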

Sharing is caring ...

Now here’s the catch: when you have a lot (a lot!) of data, you have a statistically reliable data set. This is where the oil industry has the opportunity to turn the tide. If we are to use machine learning as a tool in our industry, then we need to learn which approaches work best and train our algorithms on basin-scale data sets.

Even better (to answer the second question of “Where have I seen this before?”) would be to train on data sets from many basins. This is a commercial and political challenge rather than a technical one. Only the largest operators have anything approaching the volumes of seismic data and well logs that could make true machine learning a possibility. Some of the national repositories (UK Oil and Gas Authority/CDA, Norwegian Petroleum Directorate) possess large and diverse data sets that could serve this purpose on an industrial scale.

How can this become a viable business model? I’m not for a minute suggesting that all of the high-cost, high-value offshore data is provisioned for free and open use. What I am suggesting is that the industry does what it’s good at — playing the cooperation/competition game.

"To achieve the scale of data demanded by machine learning to succeed, we require leadership"

The tide of data will lift all ships, and the bigger industry players should cooperate by creating massive training data sets behind a firewall and letting people work on them, either to develop the tools that could later become competitive enablers or to gain immediate business insight through prospecting and licensing. This could even be monetised by government agencies if they charged by the CPU clock cycle: entirely viable on public cloud infrastructure today. To achieve the scale of data that machine learning demands, we require leadership.

The UK Oil and Gas Authority just announced free access to a significant amount of offshore data, which sets the bar high, and I look forward to seeing what learnings — both technical and economic — will arise.

On that second question of “Where have I seen this before?”: once we’ve worked out how to use machine learning to quantify what “this” is when it comes to seismic facies, well log properties and so on, the pressing question in the North Sea sector becomes where bypassed pay exists, based on an ML view of what a producing formation looks like, and how to feed that into the UK Continental Shelf decommissioning strategy whilst we still have the infrastructure to exploit those reserves.

[Image] Ageing infrastructure: We will soon lose the means to extract North Sea resources

It is clear that as an industry we have the data, the people, and the tools. The issue we have is that we are still treading water when it comes to putting our resources to use in an effective way.


About Dr. Duncan Irving

Dr. Duncan Irving is the Practice Partner for Oil and Gas, Analytics Business Consulting at Teradata International. Duncan co-ordinates Teradata’s oil and gas programme around upstream and midstream big data analytics and data science engagements in EMEA and APAC, cross-fertilizing with other science- and engineering-driven domains in the manufacturing industry verticals.

In his current role, Duncan drives value and impact in the oil and gas domain by bringing together data and analytical business silos. He also provides petro-technical domain insight to client projects and internal developments. Duncan drives analytical business consulting at the industry and client levels, informing strategic messaging and engagements with key clients and ensuring that upstream subject matter expertise is applied in all aspects of an engagement, from pre-sales to sprint delivery.

With a background in academia, Duncan was a geophysicist and Head of Computing at the University of Manchester before joining Teradata in 2010. Much of Duncan’s seven years at Teradata has been spent working across client projects in reservoir monitoring and development, drilling, production/operations and refining. Duncan also engages across the oil and gas partner landscape with service companies and consultants to ensure the best team is assembled for a client engagement.

Duncan has also worked on a variety of projects, ranging from seismic to SAP, drilling data, data mining and most petro-technical domains and technologies. He also has experience working across other manufacturing verticals where industrial analytics is more mature, bringing useful concepts back into oil and gas.

Duncan holds a PhD in geotechnical and geophysical modelling of permafrost deformation from Cardiff University and a BSc in geophysics from the University of Edinburgh. Duncan often speaks at industry events and in his spare time he enjoys running, climbing, skiing and cycling with friends and family.

