Commercial Reservoir Simulator with Optimization Software
In field development and management, optimization has become an integral component of decision-making. Optimization involves computationally intensive, complex formulations, but it simplifies decision-making. To reach the optimal solution of a defined objective function, optimization software can be coupled with a numerical reservoir simulator. Robust and fast results are therefore imperative for optimization problems.
The reservoir simulator works to maximize predefined objective functions such as cumulative recovery and net present value (NPV); these can be multi-objective, leading to Pareto sets with “trade-offs” between objectives. In optimization algorithms with predefined objective functions, the objective functions need to be made flexible by using conditional statements implemented through procedures, since predefined functions generally do not provide the flexibility required by the physical reservoir fluid-flow phenomena to “maneuver” throughout the optimization iterations.
In this study, a commercial reservoir simulator is coupled with optimization software. To address the need discussed above, conditional statements are implemented in the simulator as procedures. These procedures allow the software/simulator combination to operate under pseudo-dynamic objective functions. The highest recovery for the time period specified in the conditional statement is achieved by trying sets of parameter combinations, which also makes the optimization process faster and more robust. Through these conditional statements, the procedures implement piecewise objective functions as code for a given time frame.
The objective function to be maximized by the optimization process in this study is cumulative production. The optimized recoveries obtained with pseudo-dynamic objective functions are enhanced compared to those of an optimization case that keeps a predefined, constant objective function in the optimization software throughout the iterations of the optimization and simulation process.
Optimization
The economic viability of a project can be increased through optimization using various methodologies. Optimization methods use a set of parameters that can influence the outcome of the project, such as the number of injectors, the production rate, and so on, to increase oil recovery from the well or to increase profitability as required. Most optimization problems are complex, and the various geological and production parameters involved cause the compilation and processing time of the problem to vary. Optimization methods are built around a single criterion or a few key criteria to be optimized; these criteria are expressed as objective functions.
Bittencourt (1997) listed a few key features that enable smooth simulation and optimization based on the constraints:
- Well constraints and properties based on the geologic region
- Technical specifications of the injectors and producers
- Economic criteria for optimization
- Flexibility with respect to drilling and changes in type and completion of wells
- User-defined limitations for acceptance of a generated solution
- Oil, gas and water volumes produced and injected
Optimization types
The mathematical relationship between the objective function and the constraints gives the optimal solution of an optimization problem. An optimization problem largely consists of four parts: the system model, the independent variables, the system boundaries (constraints), and the performance criteria. The system model consists of the simulation package, which defines the independent variables in a way that shapes the design values. The system boundaries, as the name suggests, define the limits of the operating system model, and the performance criteria define the accuracy, the duration of the system simulation, and so on. Based on the complexity of the problem and the adopted algorithm, problems can be classified as follows:
Linear and Quadratic Optimization Problems
Objective functions and their respective constraints can be defined in linear form with respect to their decision variables, while a quadratic problem has an objective in quadratic form with defined linear constraints containing the decision variables. Both linear and quadratic problems have continuous derivatives and, due to the linear constraints, a single feasible region, with the optimal solution lying anywhere on the feasible region or surface. Quadratic optimization problems can be convex or non-convex in nature, the latter being harder to solve. A non-convex problem can have more than one maximum or minimum point depending on the region; these regional maxima or minima are known as local optima. A convex problem, on the other hand, attains its optimum in the interior or at the boundary of the feasible region, and any local optimum is also the global one. Quadratic optimization methods are focused on finding these regional/local optima rather than on global search.
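A minimal sketch of a linear problem of this form, solved with SciPy's linprog; the objective coefficients, constraint matrix, and bounds are illustrative assumptions, not values from the study:

```python
from scipy.optimize import linprog

# Maximize 3*x0 + 2*x1 subject to x0 + x1 <= 4, x0 + 3*x1 <= 6, x >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum (4, 0) lies on the boundary of the feasible region
```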
Integer and Constraint Optimization Problems
This kind of optimization problem can be broadly divided into mixed-integer optimization problems and constraint optimization problems. In an integer optimization problem, the decision variables may take only integral values, whereas constraint optimization problems are defined by assigning symbolic values to the decision variables subject to predefined limitations.
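A small mixed-integer sketch, assuming SciPy 1.9+ for scipy.optimize.milp; the coefficients below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize 5*x0 + 4*x1 subject to 6*x0 + 4*x1 <= 24 and x0 + 2*x1 <= 6,
# where x0 may take only integral values and x1 is continuous.
c = np.array([-5.0, -4.0])                 # milp minimizes, so negate
A = np.array([[6.0, 4.0], [1.0, 2.0]])
cons = LinearConstraint(A, lb=-np.inf, ub=[24.0, 6.0])
res = milp(c=c, constraints=cons, integrality=np.array([1, 0]),
           bounds=Bounds(0, np.inf))
print(res.x, -res.fun)                     # x0 stays integral: (3, 1.5), value 21
```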
Smooth Nonlinear Optimization Problems
Smooth nonlinear optimization problems are those in which at least one of the constraints, or the objective function itself, is a smooth nonlinear function of the decision variables.
Non-smooth Optimization Problems
Non-smooth optimization problems involve functions that have a discontinuous derivative or that cannot be differentiated at all. These problems are non-convex in nature and can have multiple coexisting local optimal points and feasible regions. Because Taylor-series expansions are unavailable, these methods rely on generalized calculus, replacing gradients with directional derivatives and subgradients.
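A minimal subgradient-descent sketch for one such non-differentiable function; the target function f(x) = |x - 3| and the diminishing step rule are illustrative assumptions:

```python
# f(x) = |x - 3| has a kink at x = 3, where any value in [-1, 1] is a subgradient.
def subgradient(x):
    return 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)

x = 0.0
for k in range(1, 201):
    x -= (1.0 / k) * subgradient(x)   # diminishing step size 1/k
print(x)  # approaches the non-smooth minimum at x = 3
```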
Global Optimization (GO)
Inverse modelling uses system-function parameters and is often employed in reservoir modelling. Accumulated field data are compared to the results obtained from optimization of the objective functions after applying the constraints. Objective functions can have multiple minima and maxima, but reservoir modelling requires the global maximum or minimum. Annealing algorithms in GO mimic the annealing process in materials. The method of Plasencia (2012) as well as the kinetic Monte Carlo algorithm of Henkelman (2001) can be used for global optimization.
Identifying the global optimal solution involves efficiently tracing local optimal solutions via first-order saddle points. A gradient approach can be used to move from one optimal solution to another for functions that are continuous and differentiable. To identify the nearest local minimum, the Gauss-Newton or Levenberg-Marquardt method can be used.
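A brief sketch of the Levenberg-Marquardt step via SciPy's least_squares; the synthetic exponential model and its parameters are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 20)
observed = 2.5 * np.exp(-1.3 * t)            # synthetic "field" data

def residuals(p):
    # Misfit between the model response and the observed data for p = (a, b)
    return p[0] * np.exp(-p[1] * t) - observed

fit = least_squares(residuals, x0=[1.0, 1.0], method='lm')
print(fit.x)  # converges to the nearest local minimum, here (2.5, 1.3)
```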
Optimization based on Reservoir Conditions
A four-step process is used to improve the recovery system. The system model needs to be defined to identify the simulation behavior of the design. NPV, CAPEX, and OPEX can be used to build the system model for production. Oil, water, and gas production, along with capital and operational expenditure, are the integral parameters in the economic evaluation of a reservoir and are represented as follows:
r = discount rate
n = number of periods
C_Ej = operation cost in stage j ($)
C_Oi = operation cost of product i to be produced ($/barrel)
N = number of producer wells
M = number of injector wells
B_is = size of the batch of product i
Q_oj = production of well j (barrels)
Q_ri = production requirement for product i
b = exponential decline constant
α_j = cost coefficient for unit j
β = cost exponent for unit j
The solutions of these multi-objective functions differ, and in many cases two dominant solutions are chosen and compared to identify the more acceptable one. The criterion of domination can be stated as: “the accepted value of x is no worse than the rejected value with respect to all objectives, and it is better in at least one objective.”
The solution to these problems is generally approached in two ways:
- Preference-based approach: In this method, the different objectives are converted into a single-objective optimization problem. The cost function is revised several times to find multiple trade-off solutions.
- Posteriori approach: In this method, a solution is selected from a wide range of objective-parameter values using auxiliary information and the user's judgement.
Since the objective function takes several values, the solution of the problem is not a single fixed value but a subset of the range of values, as illustrated by the domination test sketched below.
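A minimal sketch of the domination criterion quoted above, assuming all objectives are to be maximized; the example objective vectors are illustrative:

```python
def dominates(a, b):
    """True if solution a is no worse than b in all objectives
    and strictly better in at least one."""
    return all(ai >= bi for ai, bi in zip(a, b)) and \
           any(ai > bi for ai, bi in zip(a, b))

print(dominates([10.2, 5.0], [9.8, 5.0]))   # True: better in one, no worse in the other
print(dominates([10.2, 4.0], [9.8, 5.0]))   # False: a trade-off, neither dominates
```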
Adjoint Based Gradient Approach
The gradient is generally calculated by one of three approaches: the forward sensitivity equation, the backward (adjoint) approach, or finite differences. Suwartadi (2011) explains the use of adjoint-based techniques for calculating the gradient of an objective function.
The adjoint method is computationally efficient. Evaluating the gradient involves a normal forward mode and an adjoint backward mode; since the solution of the forward mode is the input to the backward mode, the memory requirements are large. Additionally, it has been observed by Bloss (1999) and others that each new constraint in the function requires one additional Lagrange multiplier and hence a new adjoint function. Suwartadi (2011) uses NPV as the objective function for the gradient, defined by the equation below:
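A standard discounted-NPV form consistent with the nomenclature below is (the oil-rate symbol $q_{o,j}$ is an added assumption):

$NPV = \sum_{n=1}^{N} \frac{\Delta t_n}{(1+d)^{t_n}} \left[ \sum_{j=1}^{N_{prod}} \left( r_o\, q_{o,j}^{n} - r_w\, q_{w,j}^{n} \right) - \sum_{l=1}^{N_{inj}} r_{inj}\, q_{l}^{n} \right]$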
q_w = water rate at producer well
N = total simulation time
N_prod = number of production wells
N_inj = number of injection wells
d = discount factor
Δt_n = time interval
r_o = oil price
r_inj = water injection cost
r_w = water separation cost
q_l = well rate at injector well
Penalty Function
In this approach, a penalty is added to the objective function, and the result is minimized to find the optimum solution. The methodology is unconstrained in nature and is solved using exploratory techniques. Exploratory techniques look for the global maximum or minimum instead of a local optimal solution, which makes them computationally expensive. The penalty can be defined as:
$Penalty = Base + multiplier \times \sum \left( violation^{exponent} \right)$
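Transcribed directly into a short Python sketch; the base, multiplier, exponent, and violation values are illustrative assumptions:

```python
def penalty(violations, base=0.0, multiplier=100.0, exponent=2.0):
    # Penalty = Base + multiplier * sum(violation ** exponent)
    return base + multiplier * sum(v ** exponent for v in violations)

# Example: two constraint violations of 0.1 and 0.3 added to the objective
print(penalty([0.1, 0.3]))  # 100 * (0.01 + 0.09) = 10.0
```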
The penalty function can be defined based on various attributes. One example is covered in Torres (2007) using the necessary production time. That study computes the penalty by defining the time-horizon limit of production via fuzzy logic and using the constraints mentioned below.
Figure 2–Production Time (H) factor and Necessary production time (Hn) represented by fuzzy numbers
Case I: The penalty is considered zero if the production time satisfies the constraints defined for the time-horizon limit:
$H_{n1} \ge H_1$ and $H_{n4} \le H_4$
Case II: The condition for this case is defined as:
$H_{n3} \ge H_1$, $H_{s4} \le H_4$ and $H_{n2} \le H_1$
For the above condition, penalty is defined as,
$P_{min} = \left( H - \sum_{i=1}^{N} H_s \right) P$
Case III: Condition for this case is defined as,
$H_{n1} \ge H_1$, $H_{n2} \le H_4$ and $H_{n3} \le H_4$
For the above condition, penalty is defined as,
$P_{min} = \left( \sum_{i=1}^{N} H_s - H \right) P_{avg}$
Based on the above condition, and using sensitivity analysis, $P_{avg}$ is defined as:
$P_{avg} = c \times \left[ \sum_{j=1}^{p} g_j(x) + \sum_{k=1}^{m} h_k(x) \right]$
And the objective function is revised as:
$F = F \pm P_{avg}$
Pseudo Function
Pseudo functions are used for two purposes: upscaling of the solution and reduction of the simulation grid from 3D to 2D. There are two types of pseudo functions: vertical equilibrium and dynamic. Vertical-equilibrium pseudos are used at low viscous-to-gravity ratios. For high flooding ratios, dynamic pseudos are developed for coarse-gridded models. Jacks (1973), Kyte and Berry (1975), and Starley (1988) are some of the conventional dynamic pseudos that have been compared by Stone (1991) using the Buckley-Leverett-Welge method. Cao (1999) has also defined dynamic pseudo functions based on phase flow rate, average total mobility, and streamtubes derived from single-phase flow.
Challenges and Advances in Optimization
From the mathematical point of view, the three components of an optimization problem can be described as:
- Objective/cost function
- Constraints
- Variables/controls
The quality or accuracy of the solution depends on how the objective function is defined, since selection of an optimal solution depends on minimization of this function (Mata-Lima, 2011). By defining possible actions as decision variables, the effects of these actions on the decision can be converted into an optimization space (Echeverria-Ciaurri & Conn, 2012).
Optimization techniques are mainly used to reach the global optimum of a problem by expressing an objective function as a maximum or minimum of the decision variables, where the algorithm used depends on the nature of the problem. Petroleum engineering optimization problems are generally complex systems involving multiple objectives and constraints, and they use dynamic programming (DP), linear programming (LP), nonlinear programming (NLP), mixed-integer linear programming (MILP), and global optimization methods; a few examples can be seen in the references (Lang & Horne, 1983), (Ding, Mckee, & Nouvelles, 2011), (Currie, Novotnak, Aasboee, & Kennedy, 1997), (Thompson, Blumer, Oil, & Co, 1991), (Bieker, Slupphaug, & Johansen, 2007).
Gradient-based methods and direct search methods are the numerical methods commonly used for constrained nonlinear optimization problems. The former use gradients (first-order derivatives) and Hessians (second-order derivatives) as information, as in sequential quadratic programming (SQP), the augmented Lagrangian method, and (nonlinear) interior-point methods. In contrast, direct search methods do not use derivative information, as in the genetic algorithm (GA), Nelder-Mead (polytope), differential evolution, and simulated annealing. Gradient-based methods converge more quickly than direct search methods; however, direct search methods are more tolerant of noise in the function and constraints (Numerical Nonlinear Local Optimization—Wolfram Language Documentation, 2015).
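A brief side-by-side sketch of the two families on a standard test function; the Rosenbrock function, bounds, and starting point are illustrative assumptions:

```python
from scipy.optimize import minimize, differential_evolution

def rosen(x):
    # Rosenbrock test function with its minimum at (1, 1)
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# Gradient-based: SQP-style local search from an initial guess
local = minimize(rosen, x0=[-1.0, 2.0], method='SLSQP')

# Direct search: derivative-free and more tolerant of noisy functions
glob = differential_evolution(rosen, bounds=[(-2, 2), (-2, 2)], seed=1)

print(local.x, glob.x)  # both approach the optimum at (1, 1)
```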
An optimization study begins by defining the objective function according to the nature of the problem. Once it is defined, the model variables are selected as the next step. This step is critical, since the performance of the entire operation depends on the variable selection, where a variable can be a single model variable or a group of them. Care must be taken to select the variables with the greatest influence on the objective function for best results (Leitao & Schiozer, 1999). Since reservoir optimization problems require a large number of control-parameter updates, the likelihood of reaching a local solution is greater than that of reaching a global solution, and the process is slowed down. If a local solution is reached, restarting the optimization from a different initial guess, or using search-based methods to find another local minimum, is the best way to find an optimal solution. A robust model built using consistent data is a must for avoiding reservoir-simulation convergence problems. Care must also be taken that the data make sense in the real world; for instance, fluid properties in the model should obey the physics of real fluids. Extrapolation of PVT properties is another major factor in convergence problems that determines optimization performance; therefore, the model should cover the operating conditions in terms of both pressure and Rs.
To maintain operational efficiency, error codes (warnings and problems) must be paid attention to during the simulation. They can indicate changes that need to be made to get better results; this might require alterations to the model, the requirements, or the optimization settings. Problems with defining grid properties can lead to larger computational times or failure of the solver to reach a solution (Schlumberger, 2012). Another causal factor in reservoir-simulation convergence problems is the relative permeability curves; defining them as smooth curves without steep gradients should normally solve this problem. Moreover, initialization of the simulation model, so that the initial solution is in equilibrium, is important for avoiding convergence problems. In the absence of data that can be changed to improve simulation efficiency, the convergence criteria must be chosen for performance improvement, as the results of simulation runs depend on the application of these criteria.
In cases where it is impossible to achieve the specifications defined in a simulation run, relaxing the constraints or design requirements that are violated most can be helpful. Once an acceptable solution to the newly relaxed constraints is found, the optimization process can be restarted by tightening some constraints again and repeating the process. The timestep is the time interval between states, and the simulator performs Newton iterations within each timestep to solve the nonlinear changes in pressure and saturations (Halliburton, 2013). In cases where the simulator constantly chops timesteps for unknown reasons, specifying a lower value for the timestep can make the run quicker. Iteration stops are determined by tolerance values input into the simulator. There are two sets of tolerances: the first works on the maximum changes in the primary unknowns over the previous Newton step; the other is based on the material-balance error for each phase in each grid block. Violation of these tolerances results in the run being terminated. A simplified sketch of this logic is given below.
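This sketch assumes illustrative tolerances, iteration cap, and chopping factor; it is not any particular simulator's implementation:

```python
MAX_NEWTON_ITERS = 12
TOL_PRIMARY = 1e-3       # set 1: max change in primary unknowns per Newton step
TOL_MASS_BALANCE = 1e-6  # set 2: material-balance error per phase per grid block
MIN_DT = 1e-6            # floor below which the run is terminated

def advance(state, dt, newton_step):
    """Advance one timestep, chopping dt in half whenever Newton fails."""
    while dt >= MIN_DT:
        trial = state
        for _ in range(MAX_NEWTON_ITERS):
            trial, max_change, mb_error = newton_step(trial, dt)
            if max_change < TOL_PRIMARY and mb_error < TOL_MASS_BALANCE:
                return trial, dt           # converged within both tolerance sets
        dt *= 0.5                          # timestep chop, then retry
    raise RuntimeError("tolerances violated at minimum timestep: run terminated")
```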
Parallel runs are a well-known method for running simulations on large models. To reduce the computational time of large loads, software that allows a network of workstations to act as a parallel virtual machine (PVM, PVME, MPI, etc.) can be used to parallelize simulator runs (Leitao & Schiozer, 1999). The reduction in computation time of these optimization processes depends on the number and speed of the available workstations (Khait, 2009), (Khashan, Ogbe, & Jiang, 2002), (Schiozer, 1999).
Methodology
Current Process
Figure 3–Current Process
Proposed Process
Figure 4–Proposed Process
Figure 5 – Procedure for speeding up convergence
Figure 6 – Conditional Statement Key Performance Indicator
Mathematical Description of the Problem, Solution and the Methodology
In mathematics, solving a constrained optimization problem means solving the following system over the feasible region:

Minimize or maximize $f(X)$

subject to

$h_i(X) = 0, \quad i = 1, 2, \ldots, m$

$g_i(X) \le b_i, \quad i = 1, 2, \ldots, p$

where $X = (x_1, x_2, \ldots, x_n)$ is a vector of $n$ optimization variables, $f(X): \mathbb{R}^n \rightarrow \mathbb{R}$ is the objective function, $h_i(X): \mathbb{R}^n \rightarrow \mathbb{R}$ are the equality constraint functions for $i = 1, 2, \ldots, m$, and $g_i(X): \mathbb{R}^n \rightarrow \mathbb{R}$ are the inequality constraint functions for $i = 1, 2, \ldots, p$.

The optimal solution $x^*$ gives the smallest/largest value of $f(X)$ among all vectors that satisfy the given constraints. Constrained optimization problems can be classified by the nature of the constraint functions (linear, nonlinear, convex) and by the smoothness of the functions. In the following, we focus on nonlinear optimization problems with equality and inequality constraints because, owing to the complexity of nature, most constraints in real-life applications appear in nonlinear form.
Solving Equality-constrained Nonlinear Optimization Problems:
The Augmented Lagrange Multiplier (ALM) method is used to solve the following equality-constrained problem:

Min $f(X)$,

subject to

$h_i(X) = 0, \quad i = 1, 2, \ldots, m, \quad (1)$

where $X \in \mathbb{R}^n$ and $m < n$.
This method transforms the constrained optimization problem into an unconstrained problem by adding a penalty function and forming the following Lagrangian, as given in Rao (1995):

$L(X, \sigma) = f(X) + \sum_{i=1}^{m} \sigma_i h_i(X) \quad (2)$

where $\sigma_i, \; i = 1, 2, \ldots, m$ are the Lagrange multipliers.
Based on this Lagrangian, a new objective function $A(X, \sigma, r_k)$, named the augmented Lagrangian function, is defined; its most popular form is given by Gill, Murray, and Wright (1981):

$A(X, \sigma, r_k) = f(X) + \sum_{i=1}^{m} \sigma_i h_i(X) + r_k \sum_{i=1}^{m} h_i^2(X) \quad (3)$
For the optimum values of the Lagrange multipliers, say $\sigma_i^*$, the minimum of $A(X, \sigma, r_k)$ can be obtained for any value of $r_k$. However, since the values of $\sigma_i^*$ are not known at the beginning, an iterative scheme should be used to find the solution of the initially stated problem.
Initially, at the first iteration $k = 1$, all values of $\sigma_i^{(k)}$ are set to zero and an arbitrary constant value is assigned to $r_k$; in addition, an initial starting point $x^1$ is chosen. At each iteration, the values of $\sigma_i^{(k)}$ and $r_k$ are updated to approximate $X^{*(k)}$.
For the optimum values $\sigma_i^*$, the Lagrangian given in (2) should satisfy the following partial-derivative condition:

$\frac{\partial L}{\partial x_j} = \frac{\partial f}{\partial x_j} + \sum_{i=1}^{m} \sigma_i^* \frac{\partial h_i}{\partial x_j} = 0, \quad j = 1, 2, 3, \ldots, n \quad (4)$
Similarly, the minimum of the augmented Lagrangian function $A(X, \sigma, r_k)$ needs to satisfy the following condition:

$\frac{\partial A}{\partial x_j} = \frac{\partial f}{\partial x_j} + \sum_{i=1}^{m} \sigma_i \frac{\partial h_i}{\partial x_j} + \sum_{i=1}^{m} 2 r_k h_i \frac{\partial h_i}{\partial x_j} = 0, \quad j = 1, 2, 3, \ldots, n \quad (5)$
Comparing the right-hand sides of Eqs. (4) and (5), the following iterative scheme can be derived to update the values of $\sigma_i$:

$\sigma_i^{(k+1)} = \sigma_i^{(k)} + 2 r_k h_i(X^{(k)}), \quad i = 1, 2, \ldots, m \quad (6)$
To find the optimal solution, both $X$ and $\sigma$ have to converge to $X^*$ and $\sigma^*$.
Some important properties of the Augmented Lagrange Multiplier method, as mentioned by Abdul Hameed, Kaspar, and Anuradha (2018), are useful to know:
- The method guarantees convergence to a local minimum of the augmented Lagrangian, and this minimum converges to the minimum of $f(X)$ under the constraint functions $h_i(X)$ only when a large penalty parameter $r_k$ is chosen.
- The penalty parameter $r_k$ also needs to be chosen large enough that the Hessian matrix of $A(X, \sigma, r_k)$ is positive definite. When the values of $r_k$ are chosen too small, the method may converge to the minimum of the augmented Lagrangian but fail to converge to the minimum of the objective function itself. Overall, the Augmented Lagrangian method can be very sensitive to this choice.
Solving Equality & Inequality Constrained Nonlinear Optimization Problems:
Mixed-constrained optimization problems (including both equality and inequality constraint functions) can be defined in the following general form:

Min $f(X)$,

subject to

$h_i(X) = 0, \quad i = 1, 2, \ldots, m$, and

$g_i(X) \le 0, \quad i = 1, 2, \ldots, s, \quad (7)$

where $X \in \mathbb{R}^n$ and $m, s < n$.
Karush-Kuhn-Tucker (KKT) conditions are used to extend the Lagrange multiplier method to handle systems with inequality constraints in addition to the equality constraints. However, since the KKT conditions are suited to analytical optimization, in this section we show the ALM method with inserted slack variables.
Since the problem involves inequalities, one should consider two regimes, according to whether the inequality constraints affect a critical point or not. Assume that $X^*$ is a local minimum of $f(X)$ that also satisfies $g_i(X^*) < 0, \; i = 1, 2, \ldots, s$; then, since the constraints are satisfied, $X^*$ will be the solution of the constrained problem. On the other hand, there could be a local minimum of $f(X)$ at the boundary of the feasible set, such that $X'$ is a local minimum of $f(X)$ and satisfies $g_i(X') = 0, \; i = 1, 2, \ldots, s$. Therefore, to find these critical points on the boundary of the feasible set, we convert the $g_i(X)$ into equality constraints by using the slack-variable vector $Y = (y_1, y_2, \ldots, y_s)^T$:

$g_i(X) + y_i^2 = 0, \quad i = 1, 2, \ldots, s. \quad (8)$
Then the new augmented Lagrangian function is defined as follows:

$A(X, \sigma, r_k) = f(X) + \sum_{i=1}^{s} \sigma_i \alpha_i + r_k \sum_{i=1}^{s} \alpha_i^2 + \sum_{i=1}^{m} \sigma_{s+i} h_i(X) + r_k \sum_{i=1}^{m} h_i^2(X), \quad (9)$

where $\alpha_i$ is given by the following equation:

$\alpha_i = \max \left\{ g_i(X), \; -\frac{\sigma_i}{2 r_k} \right\} \quad (10)$

The solution to the problem defined in (7) can be found by minimizing the augmented Lagrangian function in (9). The minimum can be found by using the following iterative formulas to update $\sigma_i^{(k)}$ and $\sigma_{s+i}^{(k)}$:

$\sigma_i^{(k+1)} = \sigma_i^{(k)} + 2 r_k \max \left\{ g_i(X), \; -\frac{\sigma_i^{(k)}}{2 r_k} \right\}, \quad i = 1, 2, \ldots, s$

$\sigma_{s+i}^{(k+1)} = \sigma_{s+i}^{(k)} + 2 r_k h_i(X), \quad i = 1, 2, \ldots, m.$
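A matching sketch of the mixed-constraint scheme of Eqs. (9) and (10), here with a single inequality constraint and no equality constraints; the toy functions and penalty parameter are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(X) = (x0-2)^2 + (x1-2)^2 subject to g(X) = x0 + x1 - 2 <= 0.
def f(x):
    return (x[0] - 2.0)**2 + (x[1] - 2.0)**2

def g(x):
    return np.array([x[0] + x[1] - 2.0])

def alpha(x, sigma, rk):
    # Eq. (10): alpha_i = max{ g_i(X), -sigma_i / (2 r_k) }
    return np.maximum(g(x), -sigma / (2.0 * rk))

def A(x, sigma, rk):
    # Eq. (9) with m = 0 (no equality constraints)
    a = alpha(x, sigma, rk)
    return f(x) + sigma @ a + rk * (a @ a)

x, sigma, rk = np.zeros(2), np.zeros(1), 10.0
for k in range(30):
    x = minimize(A, x, args=(sigma, rk)).x
    sigma = sigma + 2.0 * rk * alpha(x, sigma, rk)  # multiplier update

print(x, sigma)  # expected: x -> (1, 1) on the constraint boundary, sigma -> 2
```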
Convergence of Constrained Optimization Problems:
The most important part of the above-mentioned methods is knowing when to stop the iterative process so as to obtain an optimal solution with a sufficient level of confidence. For convex programming problems, one can test the KKT conditions to check the optimality of the solution:
For an inequality-constrained optimization problem, such as minimizing $f(x)$ subject to $g_j(X) \le 0, \; j = 1, 2, \ldots, s$, define the set $B$ of active constraints:

$x \in B: \; g_1(x) = 0, \; g_2(x) = 0, \; \ldots, \; g_s(x) = 0$

The following equation can then be written to check the KKT conditions:

$\sum_{j \in B} \sigma_j \frac{\partial g_j}{\partial x_i} = -\frac{\partial f}{\partial x_i}, \quad i = 1, 2, \ldots, n. \quad (11)$
Equation (11) can be written explicitly in the following matrix form:

$G \sigma = F \quad (12)$

where $G$ is an $n \times s$ matrix, and $\sigma$ and $F$ are $s \times 1$ and $n \times 1$ vectors respectively, shown explicitly as follows:

$G = \begin{bmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_2}{\partial x_1} & \cdots & \frac{\partial g_s}{\partial x_1} \\ \vdots & & \ddots & \vdots \\ \frac{\partial g_1}{\partial x_n} & \frac{\partial g_2}{\partial x_n} & \cdots & \frac{\partial g_s}{\partial x_n} \end{bmatrix}, \quad \sigma = \begin{bmatrix} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_s \end{bmatrix}, \quad F = \begin{bmatrix} -\frac{\partial f}{\partial x_1} \\ -\frac{\partial f}{\partial x_2} \\ \vdots \\ -\frac{\partial f}{\partial x_n} \end{bmatrix}$
Using equation (12), the following expression is derived for $\sigma$:

$\sigma = (G^T G)^{-1} G^T F \quad (13)$

If all of $\sigma_1, \sigma_2, \ldots, \sigma_s$ given by equation (13) are positive, the KKT conditions are satisfied.
σ2, …….σsare positive given by equation (13), the KKT conditions are satisfied.