Short Script for Cumulative Production Calculations

One minute explanation of this post…

So, we are moving the office to Boston and have been incredibly busy the past few weeks – and will probably continue to be busy for a few more. I have been remiss in posting, so I am passing along this Python script to help you calculate all your cumulative production numbers over the time periods you specify. The coding is very straightforward, and it also shows you a way to reshape a DataFrame and rename columns. The code, as always, is below and in our GitHub repository HERE.

Where can you use this?

Calculating cumulative production is a great way to check/back up your decline forecast results, but in our group we mostly use it for machine learning algorithms related to figuring out EURs, predicting production histories, and diagnosing potential production problems.

Some Sundry Data About the Permian in NM

We have been working quite a bit with New Mexico data and thought we would share some statistics with our readers. One piece of code we don't share is our decline forecasting algorithm. Like every other analytics group out there, ours is the best you will find. Along with that, to answer a common question: "No, you don't need to use machine learning to build your own." If you are experienced and have a set procedure that you use, you can build those rules into a function – probably a very large function. The NumPy library offers a good deal of data-fitting options, and combined with good logic and some ingenuity using other freely available libraries, you can create some incredibly powerful software.
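If you want a feel for what those freely available pieces can do, here is a minimal sketch – emphatically not our algorithm – that fits an Arps hyperbolic decline with SciPy's curve_fit. The rate data below is made up for illustration; substitute your own monthly stream.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic rate equation: q(t) = qi / (1 + b*di*t)**(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

t = np.arange(24)                                   # months on production
q = 1000.0 / (1.0 + 1.2 * 0.10 * t) ** (1.0 / 1.2)  # synthetic decline
q = q * np.random.normal(1.0, 0.03, q.size)         # noise to fake real data

# Bound b between 0 and 1.6 to stay in line with the auditor calls above.
popt, _ = curve_fit(arps_hyperbolic, t, q,
                    p0=[q[0], 0.1, 1.0],
                    bounds=([0.0, 1e-6, 1e-6], [np.inf, 5.0, 1.6]))
qi, di, b = popt
print(f"qi={qi:.0f}, Di={di:.3f}/month, b={b:.2f}")
```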

The hexbin graphs below show the distributions of oil, gas, and water declines vs. b-values by formation in Eddy and Lea counties, NM. The well set reflects horizontals drilled in the last 5 years with at least 12 months of consistent production data. In terms of b-values, we try to stay in line with optimistic auditor calls and max out at 1.6.

Code for Cumulative Production Script
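The full script is in the repository linked above. If you just want the core idea before opening it, here is a minimal sketch, assuming a long-format production table with hypothetical 'API', 'Date', and 'Oil' columns – it shows the cumulative calculation, the reshape, and the column rename in one pass.

```python
import pandas as pd

# Toy long-format production data: one row per well per producing month.
df = pd.DataFrame({
    'API':  ['3001511111'] * 4 + ['3001522222'] * 4,
    'Date': pd.to_datetime(['2019-01-01', '2019-02-01',
                            '2019-03-01', '2019-04-01'] * 2),
    'Oil':  [9500, 8100, 7200, 6600, 12000, 10500, 9300, 8400],
})

df = df.sort_values(['API', 'Date'])
df['MonthNum'] = df.groupby('API').cumcount() + 1   # months on production
df['CumOil'] = df.groupby('API')['Oil'].cumsum()    # running cumulative

# Reshape to one row per well with cumulative oil at the periods you
# specify, then rename the columns to something readable. With real data
# you would use periods like 6, 12, 24.
periods = [3, 4]
wide = (df[df['MonthNum'].isin(periods)]
        .pivot(index='API', columns='MonthNum', values='CumOil')
        .rename(columns={p: f'CumOil_{p}mo' for p in periods}))
print(wide)
```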

Pulling, Cleaning, and Using Data from FracFocus

So, you downloaded the FracFocus database – now what the hell do you do with it? In this post we will show you where to get it, how to load it, clue you in to some general issues with working in it, show you a Python program that cleans it and pulls out the pertinent information, and then close by showing you some insights from it via Tableau. We won't lie, we were going to show you how to build some prettier graphs and maps in Python, but my GeoPandas library is acting up because I updated one library, which in turn requires updating several others.

Where Do You Get It and How Do You Load It?

You can pick up the database HERE. On that page, they also give you a query that connects all the tables in Microsoft SQL Server Management Studio. Note: we aren't going to do anything with MS products other than load the FracFocus db; you will, however, reuse the query they give you in the Python program. Also, if you don't have Management Studio, or other software that reads MS databases, it is easy to find an installer and instructions for setting it up.

To load it: in the Object Explorer, right-click Databases and select Restore Database. On the Restore Database screen, select the Device radio button, navigate to where you put the FracFocus.bak file, and select OK. It will load the database, and you are done with SSMS.

Loading the Database Into Python and Viewing the Data

We are posting the whole program below and on our GitHub repository, so to get a better idea of what is going on, see the program. You will need the pyodbc library so the script can connect to MS SQL directly and pull the data from the database. The most challenging part of getting this to work is making sure you have the correct driver. You can see how we used the query they provide, and we note how to change it to pull only certain states and counties if you don't need the whole database. One issue you will come across after you load it is that multiple columns share the same names, which makes it difficult to reference the particular version of a column you want. We have included a function to rename those duplicated columns so you can drop them or use them as you see fit.
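As a minimal sketch of the load and the rename step – the driver string, server, and table name below are assumptions, so swap in whatever your machine actually has and the full join query from the FracFocus page:

```python
import pandas as pd
import pyodbc

# Assumes a local SQL Server instance, a restored database named
# FracFocus, and "ODBC Driver 17 for SQL Server" installed.
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=localhost;DATABASE=FracFocus;Trusted_Connection=yes;'
)

# Paste the join query they provide here; add a WHERE clause on the
# state/county columns if you only need part of the database.
query = "SELECT * FROM RegistryUpload"   # placeholder, not the full join
df = pd.read_sql(query, conn)

def rename_duplicate_columns(frame):
    """Suffix repeated column names (col, col_1, col_2, ...) so each
    version can be referenced, kept, or dropped individually."""
    counts = {}
    new_cols = []
    for col in frame.columns:
        n = counts.get(col, 0)
        new_cols.append(col if n == 0 else f'{col}_{n}')
        counts[col] = n + 1
    frame.columns = new_cols
    return frame

df = rename_duplicate_columns(df)
```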

Data Quality

To put it succinctly: it is absolutely terrible. If you want to practice your data-cleaning skills, this data source is a great opportunity. We think we have given you a good 80% start on the task, but you can spend a lot more time going through this with a fine-tooth comb. Perhaps you can get a few thousand more wells cleaned up to give you a better data pool. Going through the mammoth list of ingredients and purposes, you find out that there are, indeed, 200 different ways to spell "naphthalene" or "ethylenediamine triacetic acid". You also get an idea of the attitude of the people entering the data. For example:

“Aquafina”: We really don’t care how we spend our sponsor’s money…

“Dihydrogen Monoxide”: We just want to poison the Earth.

“Essential Oils”: Optimizes a frac job while aligning your chakras.

“Pee”: We are straight up honest.

“Contains hazardous substances in high concentrations”: Their lack of the word “No” has unknowingly given them a very “Come at me, bro” attitude toward the EPA.

“Contains no hazardous substances” all the way down that well’s ingredient list: Reminds me of this clip from Super Troopers…”Don’t worry about that little guy.”

One thing you can do to help out any searching you will do is eliminate \t (tab) and \n (newline) characters. Also, converting everything to upper case can make searches or other cleaning methods a little easier. And, of course, there is the universal issue when talking about proportions of anything: percents represented as whole numbers in some rows and as decimals in others. You will most definitely need to find which ones are which and standardize them.
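Here is a minimal sketch of those cleanup steps, using hypothetical column names ('UploadKey', 'IngredientName', 'PercentHFJob') and one rough heuristic for the percent problem – if a well's ingredient percentages sum to roughly 1 instead of roughly 100, they were entered as fractions:

```python
import pandas as pd

# Toy data standing in for the FracFocus pull. Well A entered fractions.
df = pd.DataFrame({
    'UploadKey':      ['A', 'A', 'B', 'B'],
    'IngredientName': ['water\t', 'Sand\n', 'WATER ', 'sand'],
    'PercentHFJob':   [0.88, 0.12, 88.0, 12.0],
})

def clean_text(series):
    """Strip tabs/newlines and upper-case everything so searches behave."""
    return (series.astype(str)
                  .str.replace(r'[\t\n]', ' ', regex=True)
                  .str.strip()
                  .str.upper())

df['IngredientName'] = clean_text(df['IngredientName'])

# Rough heuristic: if a well's percentages sum to ~1 rather than ~100,
# scale that well by 100. Tune the threshold to your own data.
well_sums = df.groupby('UploadKey')['PercentHFJob'].transform('sum')
df.loc[well_sums < 2, 'PercentHFJob'] *= 100
print(df)
```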

Breakdown Between Fluid and Proppant

We have run the cleaning on the database two different ways: take the total fluid value for each well and subtract it from 100, with the remaining percentage being proppant, or the reverse. We use the reverse – 100% minus the proppant percentage – as we have had better results. It is up to you, but the code provided calculates percentages our way. The one thing we would change, if you plan on using this, is to replace our proppant filter list with a text search using regex (regular expressions) to streamline it. We use a list to filter proppants here because it was easier for us – we keep that list, along with various categories of fluids, citric acid use (for Permian well studies), and other sundry ingredients, for analysis. There is no better time saver than cut and paste.
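A minimal sketch of the split, continuing from the cleaned frame above – the short keyword list is a stand-in for our longer proppant filter, and joining it into one pattern shows the regex route we just mentioned:

```python
import pandas as pd

# Stand-in for our longer proppant filter list.
proppant_keywords = ['SAND', 'PROPPANT', 'CERAMIC', 'BAUXITE']
pattern = '|'.join(proppant_keywords)   # the streamlined regex alternative

is_proppant = df['IngredientName'].str.contains(pattern, regex=True, na=False)

# Sum proppant percentage per well, then take fluid as 100 minus proppant.
proppant_pct = (df[is_proppant]
                .groupby('UploadKey')['PercentHFJob'].sum()
                .rename('ProppantPct'))
wells = proppant_pct.to_frame()
wells['FluidPct'] = 100.0 - wells['ProppantPct']
print(wells)
```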

Post Cleaning and Results

After you have cleaned everything to your desired level and eliminated outliers using statistical methods, you have a relatively decent oil and gas data set. In a later post, we will show you how to use GeoPandas to plot maps in Python along with some more presentation-worthy visualizations, but for now we are sure you are fine with seeing this in Tableau format on the Tableau Public site. If you have never used it: as long as you share the data, you can build workbooks for free.
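We will leave the exact statistical method up to you, but as one common option, an interquartile-range fence looks like the sketch below. The 'TotalBaseWaterVolume' column name is an assumption here – point it at whichever numeric column you are screening.

```python
import pandas as pd

def iqr_filter(frame, col, k=1.5):
    """Drop rows where col falls outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = frame[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    return frame[frame[col].between(q1 - k * iqr, q3 + k * iqr)]

# Example call -- assumes your pull kept this column from the database.
df = iqr_filter(df, 'TotalBaseWaterVolume')
```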

Tableau Results

For the full Tableau workbook in full screen mode, go HERE.

Python Program Code

For the full Python program code, go HERE.

A Script for Pulling Oil and Gas Prices

This small price-fetching script comes from a much larger program in Faro's library, which allows a company to get a total view of a basin's metrics in an incredibly short time frame. The program will:

  1. Clean all the production, geologic, and sundry data that it is fed.
  2. Create other metrics that can be calculated from government / state data, the FracFocus database, and other sources.
  3. Normalize and high-point align production data for all wells.
  4. Use our proprietary curve fitting algorithm to automate fitting all production streams for wells over 12 months old.
  5. Utilizing the metrics from those forecasts, along with the general well data, use machine learning algorithms to properly classify the younger vintage wells and predict type curve areas for new locations.
  6. Use best estimate capital costs and lease operating expenses to forecast all well economics – and if the data is available, perform look backs.
  7. Generate geologic, production, EUR, and IRR maps (in both Python and other mapping software).

About the Program and Price Sites

Before you use this script, you will have to install a webdriver for the web browser you prefer (if you don't already have the driver). In this case, we use Firefox. Chrome is great, but when it comes to looking at what is going on in the background of a website, we prefer Firefox's tools. So, to get GeckoDriver for Firefox, go to this page.
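Once the driver is installed, a quick sanity check looks like this – it assumes geckodriver is on your PATH, though Selenium 4.6+ can also fetch the driver for you automatically:

```python
from selenium import webdriver

driver = webdriver.Firefox()            # launches Firefox via GeckoDriver
driver.get('https://www.mozilla.org')   # any page will do for a test
print(driver.title)
driver.quit()
```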

Typically, when you are looking at a site from which to pull data, you can get an idea of the table format by just right-clicking anywhere on the page and choosing to view the page source. For example, if you go to the EIA page for Cushing, OK WTI daily historic prices and view the page and its source, you will see the table laid out right in the HTML.

This is straightforward – the table is on the source page with a title (table SUMMARY), individual rows are tagged with "tr", and the individual cells are tagged with "td". This is very easy for a parser to pick up and process.
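Because it is a plain tr/td table, pandas can grab it in one call (this requires lxml or html5lib under the hood). A minimal sketch – use whatever URL is in your address bar on that EIA page; the one below is the daily Cushing WTI history page:

```python
import pandas as pd

url = 'https://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=RWTC&f=D'
tables = pd.read_html(url)        # every table on the page, as DataFrames
prices = max(tables, key=len)     # the price history is the largest one
print(prices.head())
```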

Getting the Data

In the case of the CME site, when you try to view the source, no tables or tags will pop up, but there is an easy workaround to get to the originating data URL. Go to the CME site in your Firefox browser; in the upper right-hand corner there should be a drop-down menu. From there, go to Web Developer and select Network. There are some options at the top of that window – "All HTML CSS JS XHR…". Pick XHR and reload the page. You will see one request in particular that keeps popping up with a long, changing number after each refresh. That is going to be the site with the actual prices. Right-click it, open up the link, and select "Raw Data" at the top of that page. That is all of the data, which you will parse out using the descriptors in front of the values you want.

For clarification, that long number at the end of the link you follow is Unix time (the number of seconds since Jan. 01, 1970). It will play a part in the program, which uses a version of that number to build the URL.
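A minimal sketch of the pull: the endpoint below is a placeholder, so paste in the XHR link you copied from the Network tab. We rebuild the trailing Unix-time value fresh each run, since that is the cache-busting number you saw changing on refresh.

```python
import time
import requests

# CME serves millisecond Unix time on the end of these links.
unix_ms = int(time.time() * 1000)

# Placeholder path -- replace with the XHR link from your Network tab.
url = f'https://www.cmegroup.com/YOUR/XHR/PATH/HERE?_={unix_ms}'

resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
data = resp.json()   # the same "Raw Data" you saw in the browser
# From here, walk the parsed dict and pull values by the descriptors
# you saw sitting in front of them on the raw data page.
```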

Jupyter Notebook Script

We have posted the full code for pulling both the CME prices and the monthly historic prices from the EIA website in our GitHub repository HERE. We only show oil in this example, but it is just a matter of changing the URLs and switching your variable names to reflect gas.

Additions You May Want to Make

One issue with the data you get is that there could be a break in the months between historic and futures prices: the EIA is slow to update history, and futures prices move quickly. We account for this in our main program with some other pulls, but you can easily add the two missing months with your estimated/known prices to the bottom of the resulting DataFrame and re-sort it if you decide to put them together.
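If you do stitch them together, the patch is only a few lines – a minimal sketch with hypothetical dates and prices:

```python
import pandas as pd

# Hypothetical gap between EIA history (January) and the first CME
# futures month (April).
prices = pd.DataFrame({
    'Date': pd.to_datetime(['2019-01-01', '2019-04-01']),
    'WTI':  [51.38, 63.87],
})

# The two missing months, filled with your estimated/known prices.
patch = pd.DataFrame({
    'Date': pd.to_datetime(['2019-02-01', '2019-03-01']),
    'WTI':  [54.95, 58.15],
})

prices = (pd.concat([prices, patch], ignore_index=True)
          .sort_values('Date')
          .reset_index(drop=True))
print(prices)
```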

Interested in Learning Python?

If you are new to Python or are interested in getting started with it, one of the more popular distributions is Anaconda. It is free and, in our opinion, friendlier than other Python distributions when it comes to setup, managing packages, and building environments. You can download it here and will be able to set up your own Jupyter Notebook IDE.