The 21st PNEC conference in Houston, Texas - the International Conference and Exhibition on Petroleum Data Integration and Information Management - has just wrapped up.
As ever, it felt part-conference, part-family reunion - petroleum data management is a small and specialised discipline. A few hundred of us keep meeting, year after year, to share our learnings and stories, successes and failures, in the hope that we can improve data management practices in oil and gas.
No pain, no gain
Oil and gas data is not easy to work with. Compared to our industry’s data management, only Life Sciences offers an equal number of problems - sorry, opportunities. Massive data volumes; data kept (and remaining relevant and important) for generation after human generation; measurement data that’s only useful when coupled with the all-important units of measure and quality indicators, and that can only be trusted when all conversions and transformations are understood and verifiable. And it doesn’t help that, as Peter Black succinctly put it, “all oil and gas software is crap”.
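To make the units-of-measure point concrete, here is a minimal sketch of the idea: every measurement carries an explicit unit, and all conversions run through one auditable table of SI factors, refusing to mix physical dimensions. The unit names and table layout are illustrative assumptions, not any industry standard.

```python
# Illustrative sketch only - not an industry-standard unit registry.
# Each unit maps to a physical dimension and a factor to SI, so every
# conversion is traceable to a single table and dimension-checked.

TO_SI = {
    # unit: (dimension, factor to SI base unit)
    "m":   ("length", 1.0),
    "ft":  ("length", 0.3048),          # international foot
    "m3":  ("volume", 1.0),
    "bbl": ("volume", 0.158987294928),  # barrel -> cubic metres
}

def convert(value, from_unit, to_unit):
    """Convert via SI, refusing conversions across dimensions."""
    dim_from, f_from = TO_SI[from_unit]
    dim_to, f_to = TO_SI[to_unit]
    if dim_from != dim_to:
        raise ValueError(f"cannot convert {from_unit} to {to_unit}")
    return value * f_from / f_to

# A depth logged in feet, needed in metres:
depth_m = convert(9842.52, "ft", "m")  # about 3000 m
```

The point of routing everything through one table is that a data manager can verify (and version) the factors once, instead of trusting conversions buried inside each application.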
The technical and scientific software market is too niche to spark the kind of revolution you see in modern software tools like Slack, Salesforce, and IFTTT. Which means those who don’t want to write their own applications are held to ransom by a few large oil service companies concerned only with retaining market share at the lowest cost to themselves. We’re still relying on 10- or even 20-year-old software development techniques - and, in fact, on 10- or even 20-year-old software products, because there’s not a lot of new software product (or SaaS) in this space.
Some things are changing
Cloud is finally being embraced. It seems the oil industry has learned that the cloud is probably more secure than its own data centres. Or maybe it’s just that with oil at $40 a barrel, they don’t care so much about keeping this part of the business in-house. Data lakes are being discussed as an alternative to offsite tape archive. Agile development techniques are creeping in.
Analytics featured more prominently in presentations this year. Oil companies are reducing their reliance on poor software products and starting to work the data themselves. The sustained low oil price has helped, I’m sure, but big data technologies are maturing to the point where they can deliver on their promise, too. And open source helps - we can find open source code to manipulate common industry file formats, and numerical analysis methods are readily available in NumPy, SciPy, and other standard packages.
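As a small taste of what those standard packages make easy: despiking a noisy log curve is a one-liner with SciPy’s median filter. The curve below is synthetic and the spike positions are made up - this is a sketch of the kind of work, not anyone’s production workflow.

```python
# Sketch: clean up a synthetic, noisy "well log" curve with a median
# filter from SciPy. All numbers here are illustrative.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(42)
depth = np.linspace(1000, 1500, 501)                      # metres, made up
log = 80 + 10 * np.sin(depth / 50) + rng.normal(0, 0.5, depth.size)
log[[50, 200, 350]] += 40                                 # inject bad spikes

clean = medfilt(log, kernel_size=5)                       # 5-sample median filter
```

A 5-sample median filter knocks out isolated spikes while leaving the smooth trend almost untouched - the sort of routine task that no longer needs a vendor application.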