So, we are moving the office to Boston and have been incredibly busy the past few weeks, and will probably continue to be for a few more. I have been remiss in posting, so I am passing along this Python script to help you calculate cumulative production numbers over time periods you specify. The coding is very straightforward, and it also shows you a way to reshape a data frame and rename columns. The code, as always, is below and in our GitHub repository HERE.
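As a minimal sketch of the idea, assuming a long-format monthly production table with hypothetical column names (not necessarily the ones the script uses):

```python
import pandas as pd

# Hypothetical long-format monthly production table
df = pd.DataFrame({
    "API": ["001", "001", "001", "002", "002"],
    "date": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01",
                            "2020-01-01", "2020-02-01"]),
    "oil": [100.0, 80.0, 60.0, 50.0, 40.0],
})

# Cumulative oil per well, in date order
df = df.sort_values(["API", "date"])
df["cum_oil"] = df.groupby("API")["oil"].cumsum()

# Reshape wide (one row per well, one column per production month),
# then rename the date columns to something friendlier
wide = df.pivot(index="API", columns="date", values="cum_oil")
wide.columns = [f"cum_month_{i + 1}" for i in range(len(wide.columns))]
print(wide)
```

From there you can slice `cum_month_6`, `cum_month_12`, and so on for whatever periods you care about.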
Where can you use this?
Calculating cumulative production is a great way to check/back up your decline forecast results, but in our group we mostly use it for machine learning algorithms related to figuring out EURs, predicting production histories, and diagnosing potential production problems.
Some Sundry Data About the Permian in NM
We have been working quite a bit with New Mexico data and thought we would share some statistics with our readers. One piece of code we don’t share is our decline forecasting algorithm. Like every other analytics group out there, ours is the best you will find. Along with that, to answer a common question: no, you don’t need to use machine learning to build your own. If you are experienced and have a set procedure that you use, you can build those rules into a function – probably a very large function. The NumPy Python library offers a good deal of data fitting options, and combined with good logic and some ingenuity using other freely available libraries, you can create some incredibly powerful software.
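As a toy illustration of that point (and assuming nothing about our actual algorithm), here is an exponential decline fitted with nothing but NumPy; a real workflow would also handle hyperbolic declines and b-values:

```python
import numpy as np

# Toy example: fit an exponential decline q = qi * exp(-D*t) using only NumPy.
t = np.arange(24)                      # months on production
qi_true, D_true = 500.0, 0.08          # synthetic "true" parameters
q = qi_true * np.exp(-D_true * t)

# Linearize: ln(q) = ln(qi) - D*t, then fit a straight line
slope, intercept = np.polyfit(t, np.log(q), 1)
D_fit, qi_fit = -slope, np.exp(intercept)
print(round(qi_fit, 1), round(D_fit, 3))   # recovers qi and D
```

Wrap enough of these fits, segment rules, and sanity checks in one function and you have the skeleton of a forecasting tool.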
The hexbin graphs below show the distributions of oil, gas, and water declines vs. b-values by formation in Eddy and Lea counties, NM. The well set reflects horizontals drilled in the last 5 years with at least 12 months of consistent production data. In terms of b-values, we try to stay in line with optimistic auditor calls and max out at 1.6.
This will be the last “free data” post for a while, as we are sure everyone who frequents our blog wants to see other uses for Python in the oil and gas industry – we may even do other sectors coming up, so keep checking back. We forgot to include Wyoming in our previous post regarding where to find cheap public oil and gas data sets for the U.S., and we think it would be remiss not to include the great state of Wyoming on our list considering how much free, high quality data they offer on their site. So, let’s get started…
Where to Get It and What They Have
The main page for the Wyoming data sources is HERE. You can see they are definitely not stingy with the data sharing. Outside of the production and header data we will show you how to pull and reshape, they give you access to an incredibly large amount of information for a state website. One link in particular stands out to us for future analysis and that is their gas plant data. It looks like it is only in aggregate at the state level, but they drill down and provide you with links to their individual plant data (in Excel format). Incredibly helpful and insightful.
For the data we will be looking at, go HERE. It looks like a site address that may change from computer to computer, so you can also get to the same thing we are looking at by following the “download” link on that initial list page we linked to at the beginning of this section. Click it and you will come to this page:
You should separately select the two items in the menu we have grayed out on the image. Select one, click the cowboy to the left of the menu, then do the same for the other file. Once you download those two zipped files and perform the extraction, you will have 2 DBF files (well header data for wells that have been permanently P&A’d along with the same data for all other well statuses) and 4 Excel files (monthly production broken out by groups of counties).
Basic Process to Reshape the Files
This will be a very short post because, as usual, we provide you the code below and on our GitHub repository – and that code is very well detailed as to what is happening at each step. The one thing we do want to mention is that along with the standard data that comes in the well header data, we also show a way to get a good estimate on lateral lengths using geopy and the lat/longs for surface and bottom hole locations. If you follow how we have it set up, you can apply this to every other data set you may have where you have the lat/longs, but not necessarily the actual lateral lengths for horizontal wells.
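If you would rather skip the geopy dependency, a haversine approximation gets you close; geopy’s geodesic calculation is more accurate, and this is only a sketch with made-up coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance in feet between surface and bottom-hole
    lat/longs. geopy's geodesic() is more accurate; this needs nothing
    beyond the standard library."""
    R_FT = 20_902_231  # mean Earth radius in feet
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_FT * asin(sqrt(a))

# Roughly a two-mile lateral: ~0.029 degrees of latitude
print(round(haversine_ft(43.000, -105.500, 43.029, -105.500)))
```

Either way, the surface-to-bottom-hole distance is a serviceable lateral length estimate for horizontal wells when the actual value is missing.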
We did a little cleaning on this, but, like everything else you find on the internet, you will need to do some editing yourself. You probably have a much smaller area of interest that you will want to dig down into and make sure is as clean as possible for whatever project you are working on. Though, in the meantime, you now have a way to get a large amount of what you would need to do evaluations on wells in Wyoming – ~164,000 wells and 17 million+ lines of production values.
So, you downloaded the FracFocus database – now what the hell do you do with it? In this post we will show you where to get it, how to load it, clue you in to some general issues with working in it, show you a Python program that cleans it and pulls the pertinent information, and then close with some insights from it via Tableau. We won’t lie, we were going to show you how to build some prettier graphs and maps in Python, but our GeoPandas install is acting up because we updated one library, which now requires updating several others.
Where Do You Get It and How Do You Load It?
You can pick up the database HERE. They give you a query to connect all the tables in Microsoft SQL Server Management Studio on this page, as well. Note: we aren’t going to do anything with MS products other than load the FracFocus db into it. You will, though, use that query they give you in the Python program. Also, if you don’t have Management Studio, or other software that reads MS databases, it is easy to find an installer and setup instructions.
To load it: select Databases in the Object Explorer, right click it, and select Restore Database. Select the Device radio button on the Restore database screen and navigate to where you have put the FracFocus.bak file and select OK. It will load the database and you are done with SSMS.
Loading the Database Into Python and Viewing the Data
We are posting the whole program below and on our GitHub repository. So, to get a better idea of what is going on, see the program. You will need the pyodbc library so the script can connect with MS SQL directly and pull the data from the database. The most challenging part of getting this to work is making sure you have the correct driver. You can see how we used the query they provide, and we give you a note on how to change it to only view certain states and counties if you don’t need the whole database. One issue you will come across after you load it is that multiple columns share the same name, which makes it difficult to reference the particular version of a column you want. We have included a function to rename those duplicated columns so you can erase them or use them as you see fit.
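This is not the exact function from our repository, but a sketch of the same idea applied to a plain list of column names (the FracFocus-style names below are illustrative):

```python
def dedupe_columns(cols):
    """Rename duplicate column names: the first occurrence stays as-is,
    repeats get a _1, _2, ... suffix so each can be referenced on its own."""
    seen = {}
    out = []
    for name in cols:
        if name in seen:
            seen[name] += 1
            out.append(f"{name}_{seen[name]}")
        else:
            seen[name] = 0
            out.append(name)
    return out

print(dedupe_columns(["APINumber", "StateName", "StateName",
                      "CountyName", "StateName"]))
# → ['APINumber', 'StateName', 'StateName_1', 'CountyName', 'StateName_2']
```

With a pandas DataFrame you would apply it as `df.columns = dedupe_columns(df.columns)`, after which dropping or keeping any single copy is trivial.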
To put it succinctly: it is absolutely terrible. If you want to practice your data cleaning skills, this data source is a great opportunity. We think we have given you a good 80% start on the task, but you can spend a lot more time going through this with a fine-tooth comb. Perhaps you can get a few thousand more wells cleaned up to give you a better data pool. Going through the mammoth list of ingredients and purposes, you find out that there are, indeed, 200 different ways to spell “naphthalene” or “ethylenediamine triacetic acid”. You also get an idea of the attitude of the people entering the data. For example:
“Aquafina”: We really don’t care how we spend our sponsor’s money…
“Dihydrogen Monoxide”: We just want to poison the Earth.
“Essential Oils”: Optimizes a frac job while aligning your chakras.
“Pee”: We are straight up honest.
“Contains hazardous substances in high concentrations”: Their lack of the word “No” has unknowingly given them a very “Come at me, bro” attitude toward the EPA.
“Contains no hazardous substances” all the way down that well’s ingredient list: Reminds me of this clip from Super Troopers…”Don’t worry about that little guy.”
One thing you can do to help out any searching you will do is eliminate \t (tab) and \n (newline) characters. Also, converting everything to upper case can make searches or other cleaning methods a little easier. And, of course, the universal issue when talking about proportions of anything – percents that are represented as both whole numbers and decimals. You will most definitely need to find which ones are which and standardize them.
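A crude sketch of those three cleanup steps, assuming (as a heuristic only) that concentration values above 1 are whole-number percents:

```python
import re

def clean_ingredient(raw, pct):
    """Normalize an ingredient name and its concentration.
    Heuristic: treat pct values above 1 as whole-number percents
    (e.g. 40 -> 0.40) and values at or below 1 as already decimal."""
    name = re.sub(r"[\t\n]+", " ", raw)        # strip tab/newline characters
    name = re.sub(r"\s{2,}", " ", name).strip()
    name = name.upper()                         # upper-case for easier matching
    frac = pct / 100.0 if pct > 1 else pct      # standardize to a 0-1 fraction
    return name, frac

print(clean_ingredient("crystalline\tsilica\n(quartz)", 40))
# → ('CRYSTALLINE SILICA (QUARTZ)', 0.4)
```

The percent heuristic will misread a genuine 0.5% entered as a whole number, so spot-check against each well's total before trusting it.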
Break Down Between Fluid and Proppant
We have run the cleaning on the database two different ways: take the total fluid value for each well and treat the remainder of 100% as proppant, or the reverse. We use the reverse, 100% minus the proppant percentage, as we have had better results with it. It is up to you, but the code provided calculates percentages our way. The one thing we would change, if you plan on using this, is to replace our proppant filter list with a text search using regex (regular expressions) to streamline things. We use a list to filter proppants here because it was easier for us – we keep that list, along with various categories of fluids, citric acid use (for Permian well studies), and other sundry ingredients, for analysis. There is no better time saver than cut and paste.
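To illustrate the regex alternative we are suggesting (the pattern below is hypothetical and will not catch every spelling our hand-kept list does):

```python
import re

# Hypothetical pattern for spotting proppant ingredients; a curated
# filter list catches misspellings that a simple regex will miss.
PROPPANT_RE = re.compile(r"sand|proppant|silica|quartz|ceramic|bauxite", re.I)

def fluid_and_proppant_pct(ingredients):
    """ingredients: list of (name, mass_percent) rows for one well.
    Sum the proppant percents, then take fluid as 100 minus proppant."""
    proppant = sum(p for name, p in ingredients if PROPPANT_RE.search(name))
    return 100.0 - proppant, proppant

well = [("WATER", 88.0), ("CRYSTALLINE SILICA (QUARTZ)", 9.5),
        ("GUAR GUM", 0.5), ("CERAMIC PROPPANT", 1.0)]
print(fluid_and_proppant_pct(well))   # → (89.5, 10.5)
```

Same logic as the filter list, just one pattern instead of hundreds of entries.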
Post Cleaning and Results
After you have cleaned everything to your desired level, and have eliminated outliers using statistical methods, you have a relatively decent oil and gas data set. In a later post, we will show you how to use GeoPandas to plot maps in Python, along with some more presentation-worthy visualizations, but for now we are sure you are fine with seeing this in Tableau format on the Tableau Public site. If you have never used it, as long as you share the data, you can build workbooks for free.
For the full Tableau workbook in full screen mode, go HERE.