Computers have become incredibly good at analyzing and drawing conclusions from huge volumes of data. This trend shows up in many aspects of our daily lives. Computers suggest whether we should buy an airfare or wait for prices to go down, predict the best day for a weekend volleyball game, and enable real-time navigation to get us to our destination most quickly. Generally speaking, we welcome these improvements. Yet in the world of oil and gas exploration and production, many of us balk at the staggering advances machine learning has made in our own field. Why?
Remember the days of picking tops on paper well logs? (If your company was really high-tech, you might have had a magnetic wall where you could hang several logs in a stratigraphic cross section.) Whether your pick was a relatively easy one like the Middle Bakken, or something a little trickier like the Top Codell, you had to physically handle every well log and post those values on your base map before then contouring them by hand. Was this process fast? No, certainly not by today’s standards. Was it fun? Generally, yes. An afternoon shuffling old logs, occasionally finding a mysterious or illuminating annotation left by a previous geologist, slowly building a mental geological model of the area under study – what joy! Was it easy? Not always, but…sometimes, yes it was.
Embracing the use of modern software applications for picking and mapping geologic tops, as nearly all geologists already have, necessarily eliminates those pleasurable old tasks from our workflow. Similarly, geophysicists reminisce about the days of interpreting seismic sections with colored pencils. Engineers bristle when asked why they still want to curve-match production data by hand. And so on.
To quote a favorite Disney princess heroine: let it go. The future of our industry, the future of your company – and the future of your job – almost certainly lie in the general direction of automation. And of course, it’s not just oil and gas that is being revolutionized. A quote from the fascinating New York Times Magazine article “The Great A.I. Awakening” illustrates these points: “Medical diagnosis is one field most immediately, and perhaps unpredictably, threatened by machine learning. Radiologists are extensively trained and extremely well paid, and we think of their skill as one of professional insight — the highest register of thought. In the past year alone, researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports. What radiologists do turns out to be something much closer to predictive pattern-matching than logical analysis. They’re not telling you what caused the cancer; they’re just telling you it’s there.”
Wow. Did you see yourself in there? Reworded ever so slightly: What geoscientists and engineers do turns out to be something much closer to predictive pattern-matching than logical analysis.
These days, we are well beyond simple computer mapping. Let me share a few interpretation examples using Drillinginfo’s DI Transform software.
First, an auto-picked geologic top – in this case Top Strawn picked over an area of the Midland Basin just east of Midland itself (below). In this example, given a handful of manual picks, the software picked the remainder of the tops for nearly 200 wells in a matter of seconds and immediately mapped them. Do the auto-picks need some refinement or other QC? Yes, probably. There certainly appear to be a few “bullseyes” on the map that need some investigation. But the point is that this first pass of picks and mapping was done, again, in seconds – not hours or even days, as manual picking would require – allowing the interpreter to focus his or her valuable time on higher-level work.
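For intuition about how such auto-picking can work, one classic approach is template matching: take the log signature around a manual pick and slide it over a search window in each candidate well, choosing the depth with the highest correlation. The sketch below is my own simplified illustration of that idea on synthetic data – it is not DI Transform’s actual algorithm, and every name and number in it is invented:

```python
import numpy as np

def auto_pick_top(template, log, search_top, search_base):
    """Return the sample index in `log` where normalized
    cross-correlation with `template` is highest."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_idx, best_score = search_top, -np.inf
    for i in range(search_top, search_base - n + 1):
        win = log[i:i + n]
        w = (win - win.mean()) / (win.std() + 1e-9)
        score = float(t @ w) / n          # 1.0 = perfect match
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score

# Synthetic gamma-ray-style curve with a distinctive marker bed
rng = np.random.default_rng(0)
log = rng.normal(60.0, 5.0, 500)
log[300:307] += np.array([0, 20, 50, 60, 50, 20, 0])  # the "top" signature
template = log[298:310].copy()   # signature taken around a manual pick
idx, score = auto_pick_top(template, log, 200, 400)
print(idx, round(score, 3))      # recovers the pick near index 298
```

In a real multi-well workflow the template would come from one or more seed wells and be matched against every other well in the project, with depth shifts and stretch/squeeze handled far more robustly than this toy version does.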
Second, a Jurassic seismic horizon from the Dutch Central Graben (North Sea) that has been auto-tracked based on a few manual “seed” points. Seismic auto-tracking has been around for at least a couple of decades, but the latest generation of algorithms yields far superior results, especially across faults and in other structurally and even stratigraphically complex areas. As always, there remains a need for QC and refinement of the auto-tracked horizons (see comment on structural models, below), but just as with the auto-picked geologic tops in the first example, the time saved on what would be relatively rote interpretation is the advantage here.
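At its simplest, seed-based tracking can be pictured as a greedy search: starting from the seed pick, step trace by trace and snap to the strongest amplitude within a small window of the previous pick. The toy sketch below (synthetic data only; commercial auto-trackers use far more sophisticated correlation measures and fault-handling logic than this) follows a dipping reflector across a noisy 2D section:

```python
import numpy as np

def autotrack(section, seed_trace, seed_sample, window=3):
    """Greedy horizon tracker: from a seed pick, step trace-by-trace and
    snap to the strongest amplitude within +/-window samples of the
    previous pick."""
    n_traces, n_samples = section.shape
    horizon = np.full(n_traces, -1)
    horizon[seed_trace] = seed_sample
    for step in (1, -1):                      # track right, then left
        t, s = seed_trace, seed_sample
        while 0 <= t + step < n_traces:
            t += step
            lo = max(0, s - window)
            hi = min(n_samples, s + window + 1)
            s = lo + int(np.argmax(section[t, lo:hi]))
            horizon[t] = s
    return horizon

# Synthetic section: one bright, gently dipping reflector in noise
rng = np.random.default_rng(3)
sec = rng.normal(0.0, 0.2, (50, 100))
true = (30 + 0.5 * np.arange(50)).astype(int)   # dips ~0.5 samples/trace
sec[np.arange(50), true] = 5.0                  # strong reflector amplitude
picked = autotrack(sec, seed_trace=0, seed_sample=int(true[0]))
```

The greedy step is also why naive trackers fail at faults – the reflector jumps outside the search window – and why the newer generations of algorithms discussed above are such an improvement.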
Third, the same Dutch Central Graben seismic volume on which we’ve performed automated fault extraction (below). This workflow, including setting up and tweaking parameters for optimal results as well as the actual fault extraction, took on the order of 30 minutes. Automated interpretation of faults was slower to develop than for seismic horizons, but recent advances in probability-based approaches have fully commercialized the technology. (DI Transform’s implementation uses fault thinning and quad mesh code recently developed at Colorado School of Mines.) How many of you remember the arduous process of hand-picking faults on a grid of inlines and crosslines and carefully drawing the fault polygons on a basemap? Anywhere with decent seismic data, those days are gone.
For those of us who aren’t yet satisfied with the “patchy” appearance of the auto-tracked seismic horizon or the extracted faults in the examples above, conversion of horizons and faults to a fully sealed structural model solves that problem (below).
In addition to the guilty pleasure and ease of doing things the way we’ve always done them, another reason geoscientists hesitate to let the machine handle too much of the work is that they don’t fully understand what it is doing, and thus they can’t trust it, or perhaps more accurately, they’re not sure whether they can trust it. This reaction is much more defensible, as any good scientist should indeed be wary of a “black box” – a system in which the relationship of inputs and outputs is at best not clear and at worst nonsensical.
As is the case with most things we initially fear or mistrust, the more we learn about the subject, the less frightening it becomes. A timely example of this in our industry is multivariate statistics. If you aren’t learning about it already, you need to be. With almost unlimited applications, multivariate modeling is especially important in developing resource plays, where you will be given no quarter if you are still manually trying to figure out what makes for the best wells. Whether your goal is to maximize EUR, IP30, or NPV, you need to use the power of non-linear, multivariate statistical analysis. Otherwise, you are robbing your company – and the world – of valuable resources. There are simply too many input variables for you to sift through on your own.
DI Transform provides an industry-leading set of tools that allow for user-directed extraction of geophysical and geological data, import of engineering data (think completions…), robust data QC and prep, and powerful model-building. Ultimately, this workflow identifies the most important variables in any system, quantifies each one’s contribution in concert with other variables, and predicts likely outcomes for any given set of inputs or changes to those inputs. At the end of the day, the machine tells you what it takes to drill the best well, or how to solve any other complex problem you throw at it. (In fact, one of our developers recently used the program to build a model predicting real estate values to help gauge offer prices for his house-hunting daughter!)
A complete discussion of this workflow is beyond the scope of this post, but let’s at least look at some illustrations. First, below, we have a generic view of some of DI Transform’s data analysis tools, including a histogram, crossplot, correlation table, and pie chart – all used to visualize and carefully QC the data.
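In the same spirit, even a few lines of pandas can reproduce the core of that QC step – summary statistics in place of the histogram, a correlation table, and a simple outlier screen. Everything below is synthetic, and the column names are my own invention, not a DI Transform schema:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
# Hypothetical well attributes (synthetic, for illustration only)
df = pd.DataFrame({
    "lateral_length_ft": rng.normal(8000, 1500, n),
    "proppant_lb_per_ft": rng.normal(1800, 300, n),
    "porosity_pct": rng.normal(8.0, 1.2, n),
})
df.loc[5, "porosity_pct"] = 42.0   # plant an obviously bad value

# 1) Summary statistics stand in for the histogram view
print(df.describe().round(1))

# 2) Correlation table across all attribute pairs
print(df.corr().round(2))

# 3) A z-score screen flags suspect rows before any modeling
z = (df - df.mean()) / df.std()
suspects = df[(z.abs() > 4).any(axis=1)]
print(suspects)
```

A 42% porosity sails through most loaders without complaint, which is exactly why the “vitally important, but often underappreciated” QC step below deserves its billing.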
Once we’ve made sure that our data is in good shape (a vitally important, but often underappreciated, step), we can move on to build a predictive model using the non-linear multivariate relationships as quantified by the machine. Below is an example from the Midland Basin, in which six variables – reservoir depth, thickness, proppant per foot, GOR, lateral length, and porosity – were used to build a model predicting Wolfcamp B six-month cumulative oil production. Each variable’s impact on the solution is shown below, with positive contributions hand-annotated and highlighted with the green bars. Again, there’s much, much more to this discussion than we can tackle here, but hopefully these illustrations 1) give a hint of the power that multivariate analysis has to offer, and even more importantly 2) take a bit of the mystery out of the term.
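To take a bit more mystery out of the mechanics, here is a numpy-only toy version of that workflow: fit a nonlinear (quadratic) least-squares model on synthetic stand-ins for the six inputs named above, then score each variable by how much R² is lost when it is left out. The variable names echo the post, but the data and the response function are entirely invented, and a production tool would use far richer model forms than this:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
# Synthetic stand-ins for the six inputs discussed above
depth     = rng.normal(9000, 400, n)    # ft
thickness = rng.normal(250, 40, n)      # ft
prop_ft   = rng.normal(1800, 300, n)    # lb/ft
gor       = rng.normal(900, 150, n)     # scf/bbl
lat_len   = rng.normal(8000, 1200, n)   # ft
porosity  = rng.normal(8.0, 1.0, n)     # %

# Invented nonlinear response standing in for 6-month cum oil
cum6 = (20 * porosity**2 + 50 * np.sqrt(thickness)
        + 4e-6 * lat_len * prop_ft + rng.normal(0, 200, n))

X = np.column_stack([depth, thickness, prop_ft, gor, lat_len, porosity])
names = ["depth", "thickness", "proppant/ft",
         "GOR", "lateral length", "porosity"]

def r2(X, y):
    """R^2 of a least-squares fit with linear and squared terms."""
    A = np.column_stack([np.ones(len(y)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1.0 - (y - A @ coef).var() / y.var()

full = r2(X, cum6)
importances = {}
for j, name in enumerate(names):
    importances[name] = full - r2(np.delete(X, j, axis=1), cum6)
    print(f"{name:15s} R^2 loss if dropped: {importances[name]:+.3f}")
```

Because the synthetic response was built to lean hardest on porosity, dropping porosity costs the model far more R² than dropping GOR – the same kind of per-variable contribution ranking, in miniature, that the Wolfcamp B example above quantifies.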
In the end, there are many ways to look at all this. The most starkly realistic is that the ascent of machine learning in oil and gas companies is upon us, and you’d better just get used to it. A more empowering approach might be to realize that smart machines can do things that you can conceive of but simply can’t do, or that would take you many times longer. This saves you from routine and time-consuming tasks and frees you to perform higher-value work at which humans still excel, such as ensuring that first principles of geology or geophysics are honored in a machine interpretation, or crafting exploration strategy. Either way you choose to react, the facts lay the issue bare: machines make your company more efficient and our industry more viable. In a world of $45 oil, we ought to be thanking our lucky stars – or our machines – for that.