  • Original Article
  • Open access

Improved Differential Evolution with Shrinking Space Technique for Constrained Optimization

Abstract

Most current evolutionary algorithms for constrained optimization suffer from low computational efficiency. To improve efficiency, an improved differential evolution with a shrinking space technique and an adaptive trade-off model, named ATMDE, is proposed to solve constrained optimization problems. The proposed ATMDE algorithm employs an improved differential evolution (DE) as the search optimizer to generate new offspring individuals for the evolutionary population. To handle the constraints, the adaptive trade-off model, one of the most important constraint-handling techniques, is employed to select the better individuals to retain for the next population; it can effectively handle multiple constraints. The shrinking space technique then shrinks the search region according to feedback information, which improves computational efficiency without losing accuracy. The improved DE introduces three different mutation strategies to generate different offspring for the evolutionary population. Moreover, a new mutation strategy called "DE/rand/best/1" is constructed to generate new individuals according to the feasibility proportion of the current population. Finally, the effectiveness of the proposed method is verified on a suite of benchmark functions and practical engineering problems. This research presents a constrained evolutionary algorithm with high efficiency and accuracy for constrained optimization problems.

1 Introduction

Constrained optimization problems (COPs) widely exist in various scientific and engineering fields [1–3], such as mechanical design, path planning, etc. It is often difficult to obtain global optimal solutions with traditional optimization techniques for COPs involving nonlinear inequality or equality constraints and multi-modal, non-differentiable objective functions. Evolutionary algorithms (EAs) combined with constraint-handling techniques, which have attracted growing attention because of their flexibility, effectiveness and adaptability, provide an effective and powerful avenue to cope with these COPs [4–6]. A large number of effective constrained optimization evolutionary algorithms (COEAs) have been proposed [7–9]. Recently, representative constraint-handling techniques combined with EAs to solve COPs were surveyed by COELLO [10]. The most general existing constraint-handling techniques fall mainly into three groups. First, methods based on penalty functions transform a COP into an unconstrained problem by adding a penalty term to the original objective function [11, 12]. Second, approaches based on a feasibility criterion prefer feasible solutions over infeasible solutions when selecting for the next generation [13, 14]. Third, methods based on multi-objective optimization transform a COP into an unconstrained multi-objective optimization problem and use multi-objective techniques to solve the converted problem [15, 16].

The performance of COEAs depends mainly on the constraint-handling technique and the EA used as the search optimizer. Differential evolution (DE), originally proposed by STORN and PRICE [17], is one of the simplest and most powerful population-based evolutionary algorithms for global optimization. During the past two decades, different DE optimizers with constraint-handling techniques have been successfully developed to deal with different kinds of COPs. The first attempt was the constraint adaptation with DE (CADE) algorithm, which introduced multi-member individuals to generate more than one offspring by DE operators [18]. A cultural DE-based algorithm with the feasibility rule was proposed by LANDA and COELLO [19], which utilized different knowledge sources to influence the mutation operator in order to accelerate convergence. A multi-member diversity-based DE (MDDE) algorithm, in which each parent generates more than one offspring to enhance population diversity, was presented by MEZURA-MONTES, et al [20] to solve COPs. A dynamic stochastic selection technique was put forward by ZHANG, et al [21] under the framework of multi-member DE. TESSEMA and YEN [11] designed an adaptive penalty formulation in which the feasibility proportion of the current population is used to tune the penalty factor. To combine the advantages of different constraint-handling techniques, MALLIPEDDI, et al [22] proposed an ensemble of constraint-handling techniques (ECHT) with DE and evolutionary programming optimizers for coping with COPs. ELSAYED, et al [23] introduced an algorithmic framework that uses multiple search operators in each generation with the feasibility rule for COPs. Each combination of search operators has its own sub-population, and the size of each sub-population varies adaptively during evolution depending on the reproductive success of the search operators.
Subsequently, GONG, et al [24] developed a ranking-based mutation operator with an improved dynamic diversity mechanism for COPs. A modified differential evolution algorithm [25] was proposed to deal with the dimensional synthesis of the redundant parallel robot problem.

Recently, the adaptive trade-off model (ATM) [26] has been proposed to maintain a reasonable trade-off between feasible and infeasible individuals when selecting the better individuals to retain for the next generation. The principal merit of ATM is that promising infeasible individuals can be inherited into the next evolutionary process. The ATM with an evolution strategy (ATMES) as the search optimizer has been utilized to solve COPs. In order to reduce the computational effort, the shrinking space technique introduced by AGUIRRE, et al [27] shrinks the search region according to feedback information and directs the search effort toward the promising feasible region. Subsequently, WANG, et al [28] proposed a highly efficient method named AATM that benefits from the virtues of both the shrinking space technique and ATM. The AATM algorithm converges promptly to optimal results without loss of quality or precision.

Although AATM enhances the performance of ATMES by taking advantage of the shrinking space technique to address complicated COPs with multiple constraints, there is still plenty of room to develop new approaches that accelerate convergence and enhance solution quality within a limited time, especially for complicated engineering optimization problems. When EAs are used to solve COPs, the search algorithm plays a role as crucial to the performance of the hybrid approach as the constraint-handling technique. Hence, this study employs an advanced search algorithm (i.e., an improved DE) to further improve the performance of AATM. The improved DE employs three mutation strategies with different characteristics to generate different offspring for the evolutionary population. Combining the advantages of an improved differential evolution with the adaptive trade-off model and the shrinking space technique, an algorithm called ATMDE is proposed to deal with COPs. The remainder of this paper is organized as follows. In Section 2, the definition of a COP and some relevant concepts of multi-objective optimization are given. In Section 3, the basics of DE are briefly introduced. In Section 4, the proposed ATMDE algorithm is presented in detail. In Section 5, the performance of ATMDE is tested on 18 well-known benchmark functions from the 2006 IEEE Congress on Evolutionary Computation (IEEE CEC2006) and several engineering optimization problems. Section 6 concludes this paper.

2 Statement of the Problem

A general constrained optimization problem is formulated as

$$\begin{aligned} { \hbox{min} }\;f(\varvec{x}), \hfill \\ {\text{s}} . {\text{t}} .\left\{ {\begin{array}{*{20}l} {g_{k} (\varvec{x}) \le 0,\quad k = 1,2, \cdots ,q,} \hfill \\ {h_{k} (\varvec{x}) = 0,\quad k = q + 1,q + 2, \cdots ,m,} \hfill \\ \end{array} } \right. \hfill \\ \end{aligned}$$
(1)

where f(x) is the objective function; g k (x) and h k (x) are the inequality and equality constraints, respectively; x = [x 1, x 2, ···, x n ] is an n-dimensional vector of design variables whose allowable lower and upper bounds are x min,j and x max,j (j = 1, 2, ···, n); m is the total number of constraints; and q and m − q are the numbers of inequality and equality constraints, respectively. In evolutionary optimization, equality constraints are transformed into inequality constraints as follows:

$$\left| {h_{k} (\varvec{x})} \right| - \delta \le 0,$$
(2)

where δ is a positive tolerance parameter and is recommended to be set to 0.0001 [20, 21].

In addition, the degree of violation of solution x with respect to the k-th constraint, G k (x), is defined as

$$G_{k} (\varvec{x}) = \left\{ {\begin{array}{*{20}l} {\hbox{max} \{ 0,g_{k} (\varvec{x})\} ,\quad k = 1,2, \cdots ,q,} \hfill \\ {\hbox{max} \{ 0,\left| {h_{k} (\varvec{x})} \right| - \delta \} ,\quad k = q + 1,q + 2, \cdots ,m.} \hfill \\ \end{array} } \right.$$
(3)

Then, the degree of all the constraint violations of solution x can be represented as

$$G(\varvec{x}) = \sum\limits_{k = 1}^{m} G_{k} (\varvec{x}).$$
(4)
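As a concrete illustration, Eqs. (2)–(4) can be sketched in a few lines of NumPy; the function name `violation_degree` and its array-based interface are illustrative, not from the paper:

```python
import numpy as np

def violation_degree(g_vals, h_vals, delta=1e-4):
    """Degree of constraint violation per Eqs. (2)-(4).

    g_vals: values g_k(x) of the q inequality constraints.
    h_vals: values h_k(x) of the m - q equality constraints.
    delta:  tolerance used to relax equality constraints (Eq. (2)).
    """
    # Eq. (3), inequality part: only positive g_k(x) counts as a violation.
    g = np.maximum(0.0, np.asarray(g_vals, dtype=float))
    # Eqs. (2)-(3), equality part: |h_k(x)| - delta <= 0 is treated as satisfied.
    h = np.maximum(0.0, np.abs(np.asarray(h_vals, dtype=float)) - delta)
    # Eq. (4): sum over all m constraints.
    return float(np.sum(g) + np.sum(h))
```

A solution is feasible exactly when this sum is zero, which is how the feasibility proportion φ used later in the paper can be computed.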

Since the method that follows utilizes concepts from multi-objective optimization to address the constraints of COPs, some related multi-objective optimization concepts are introduced first.

Definition 1

Pareto dominance: A vector u = (u 1, u 2, ···, u k ) is said to Pareto dominate another vector v = (v 1, v 2, ···, v k ), denoted as u ≺ v, if and only if

$$\forall i \in \{ 1,2, \cdots ,k\} ,\;u_{i} \le v_{i} \quad {\text{and}}\quad \exists j \in \{ 1,2, \cdots ,k\} ,\;u_{j} < v_{j} .$$
(5)
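Definition 1 translates directly into code; a minimal sketch (the function name is assumed, not from the paper):

```python
def dominates(u, v):
    """Return True if u Pareto dominates v per Eq. (5):
    u is no worse in every component and strictly better in at least one."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))
```

In the constraint-handling context of this paper, each vector is the pair (f(x), G(x)), so dominance compares objective value and constraint violation simultaneously.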

Definition 2

Pareto optimality: u is said to be Pareto optimal if and only if there exists no vector v in the feasible region S such that v ≺ u, where v = f(v) = (f(v), G(v)) = (v 1, v 2) and u = f(u) = (f(u), G(u)) = (u 1, u 2).

Definition 3

Pareto optimal set: The Pareto optimal set denoted as P * is defined as

$$P^{*} = \{ \varvec{u} \in S|\neg \exists \varvec{v} \in S,\varvec{v} \prec \varvec{u}\} .$$
(6)

It should be noted that individuals in the Pareto optimal set are called non-dominated individuals.

Definition 4

Pareto front: The Pareto front PF * is defined as

$$PF^{*} = \{ \varvec{f}(\varvec{u})|\varvec{u} \in P^{*} \} .$$
(7)

3 Basics of Differential Evolution

DE has been extensively applied to solve optimization problems because of its simplicity and effectiveness [17]. It does not require binary encoding to represent solutions like the genetic algorithm, nor does it employ a probability density function to self-adapt its individuals like the evolution strategy. It generates new candidate solutions by combining the parent individual with several other individuals of the current population, and a candidate replaces the parent only if it has a better fitness value. In the following, the specific operations, including initialization, mutation, crossover and selection, are introduced. Firstly, an initial population of NP individuals x i (i = 1, 2, ···, NP) is sampled from the search domain by

$$x_{i,j} = x_{{\text{min}},j} + {\text{rand}}(0,1) \times (x_{{\text{max}} ,j} - x_{{\text{min}} ,j} ),$$
(8)

where rand(0,1) generates a random real number between 0 and 1.

After initialization, a mutation strategy is adopted to generate a mutant vector v i  = (v i,1, v i,2, ···, v i,n ) from its corresponding target vector x i  = (x i,1, x i,2, ···, x i,n ). A general nomenclature "DE/x/y" denotes the different DE mutation variants, where "DE" means differential evolution, "x" indicates which individual is selected as the base vector to be mutated, and "y" is the number of difference vectors chosen to perturb x. The following mutation strategies are the most frequently used.

DE/rand/1:

$$\varvec{v}_{i} = \varvec{x}_{{r_{1} }} + F \times (\varvec{x}_{{r_{2} }} - \varvec{x}_{{r_{3} }} ).$$
(9)

DE/best/1:

$$\varvec{v}_{i} = \varvec{x}_{\text{best}} + F \times (\varvec{x}_{{r_{1} }} - \varvec{x}_{{r_{2} }} ).$$
(10)

DE/rand/2:

$$\varvec{v}_{i} = \varvec{x}_{{r_{1} }} + F \times (\varvec{x}_{{r_{2} }} - \varvec{x}_{{r_{3} }} ) + F \times (\varvec{x}_{{r_{4} }} - \varvec{x}_{{r_{5} }} ).$$
(11)

DE/current-to-rand/1:

$$\varvec{v}_{i} = \varvec{x}_{i} + F \times (\varvec{x}_{{r_{1} }} - \varvec{x}_{i} ) + F \times (\varvec{x}_{{r_{2} }} - \varvec{x}_{{r_{3} }} ).$$
(12)

DE/current-to-best/1:

$$\varvec{v}_{i} = \varvec{x}_{i} + F \times (\varvec{x}_{\text{best}} - \varvec{x}_{i} ) + F \times (\varvec{x}_{{r_{1} }} - \varvec{x}_{{r_{2} }} ).$$
(13)

where the indices r 1, r 2, r 3, r 4 and r 5 are mutually exclusive integers randomly selected from the interval [1, NP] and all different from the index i; F is the scale factor chosen between 0 and 1; and x best denotes the best individual in the current population.
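Assuming the population is stored as a NumPy array of shape (NP, n), the mutation strategies of Eqs. (9)–(12) can be sketched as follows; the function names, the helper `pick_indices` and the rng seed are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def pick_indices(NP, i, count):
    """Draw `count` mutually exclusive indices from [0, NP), all different from i."""
    pool = [k for k in range(NP) if k != i]
    return rng.choice(pool, size=count, replace=False)

def de_rand_1(pop, i, F=0.8):
    r1, r2, r3 = pick_indices(len(pop), i, 3)
    return pop[r1] + F * (pop[r2] - pop[r3])                            # Eq. (9)

def de_best_1(pop, i, best, F=0.8):
    r1, r2 = pick_indices(len(pop), i, 2)
    return pop[best] + F * (pop[r1] - pop[r2])                          # Eq. (10)

def de_rand_2(pop, i, F=0.8):
    r1, r2, r3, r4, r5 = pick_indices(len(pop), i, 5)
    return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])  # Eq. (11)

def de_current_to_rand_1(pop, i, F=0.8):
    r1, r2, r3 = pick_indices(len(pop), i, 3)
    return pop[i] + F * (pop[r1] - pop[i]) + F * (pop[r2] - pop[r3])    # Eq. (12)
```

Eq. (13), DE/current-to-best/1, follows the same pattern with x best replacing the random base of Eq. (12).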

Subsequently, a trial vector u i  = (u i,1, u i,2, ···, u i,n ) is generated by binomial or exponential crossover. The binomial crossover is used in this paper:

$$u_{i,j} = \left\{ {\begin{array}{*{20}l} {v_{i,j} ,\quad {\text{if}}\; ( {\text{rand}}_{j} (0,1) \le CR)\;or\;j = j_{\text{rand}} ,} \hfill \\ {x_{i,j} ,\quad {\text{otherwise}} .} \hfill \\ \end{array} } \right.$$
(14)

where CR is the crossover probability and j rand is a random integer chosen from the range [1, n]. The binomial crossover operator inherits the j-th variable of the mutant vector v i into the corresponding element of the trial vector u i if the condition is met. Taking the "DE/rand/1/bin" strategy as an example, the mutation and crossover operations are illustrated in Fig. 1. The black square represents the mutant vector v i generated by the mutation strategy. The two triangles \(\varvec{u}_{i}^{1}\) and \(\varvec{u}_{i}^{2}\) represent the two possible locations of the trial vector after the binomial crossover operation.
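Eq. (14) can be sketched as follows (a NumPy sketch with 0-based indices; the function name and rng seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)  # illustrative seed

def binomial_crossover(x, v, CR=0.9):
    """Binomial crossover of Eq. (14): take v[j] where rand_j <= CR,
    and force at least one mutant component via j_rand."""
    n = len(x)
    j_rand = rng.integers(n)          # index whose mutant value is always kept
    mask = rng.random(n) <= CR
    mask[j_rand] = True               # guarantees the trial differs from the target
    return np.where(mask, v, x)
```

The forced `j_rand` component is what makes the trial vector differ from its target even when CR is very small.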

Fig. 1

Schematic diagram of “DE/rand/1/bin” strategy (2-D space)

Then the target vector x i,g is compared with its trial vector u i,g according to their fitness values, and the better one survives into the next generation as x i,g+1. The selection operation is expressed as

$$\varvec{x}_{i,g + 1} = \left\{ {\begin{array}{*{20}l} {\varvec{u}_{i,g} ,\quad {\text{if}}\;f (\varvec{u}_{i,g} ) \le f (\varvec{x}_{i,g} ),} \hfill \\ {\varvec{x}_{i,g} ,\quad {\text{otherwise}} .} \hfill \\ \end{array} } \right.$$
(15)

The above steps repeat generation by generation until the termination criterion is met.
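Putting initialization (Eq. (8)), mutation (Eq. (9)), crossover (Eq. (14)) and selection (Eq. (15)) together, the basic unconstrained "DE/rand/1/bin" loop can be sketched as follows; function name, bound handling via clipping, and parameter defaults are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed

def de_minimize(f, x_min, x_max, NP=30, F=0.8, CR=0.9, generations=200):
    """Minimal DE/rand/1/bin loop for an unconstrained objective f."""
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    n = len(x_min)
    pop = x_min + rng.random((NP, n)) * (x_max - x_min)    # Eq. (8)
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(NP):
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                    3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])          # Eq. (9)
            v = np.clip(v, x_min, x_max)                   # keep mutant in bounds
            j_rand = rng.integers(n)
            mask = rng.random(n) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])                  # Eq. (14)
            fu = f(u)
            if fu <= fit[i]:                               # Eq. (15)
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: 3-D sphere function, optimum 0 at the origin.
x_star, f_star = de_minimize(lambda x: float(np.sum(x * x)),
                             [-5.0] * 3, [5.0] * 3)
```

Constrained selection, as used by ATMDE in Section 4, replaces the greedy fitness comparison of Eq. (15) with the ATM rules.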

4 Proposed Algorithm: ATMDE

The performance of COEAs mainly depends on the search ability of evolutionary algorithm and the effectiveness of constraint-handling technique. Hence, the proposed algorithm ATMDE utilizes an improved DE as search optimizer to reproduce offspring and introduces the adaptive trade-off model as the constraint-handling technique to select better individuals to retain into the next population. Furthermore, in order to reduce the redundant search region, the shrinking space technique is employed to enhance the convergence performance. This section will introduce the three core parts of ATMDE algorithm in detail, respectively.

4.1 Improved DE

To balance the convergence rate and solution accuracy, an improved DE is used as the search engine of the proposed ATMDE algorithm. The set of offspring individuals Q g is generated by three different mutation strategies (i.e., DE/rand/best/1, DE/current-to-rand/1, and DE/rand/2), which combine the advantages of different mutation strategies in order to increase the probability of generating better offspring for the evolutionary population. The strategies DE/current-to-rand/1 and DE/rand/2 have strong explorative ability to generate new promising individuals, while the strategy DE/rand/best/1 has good exploitative ability to reproduce better individuals around the best individual. A good trade-off between global and local search performance is achieved by combining the three strategies. The framework of generating offspring is shown in Fig. 2. Each individual in the parent population generates three different offspring with the three mutation strategies and binomial crossover, and the better individuals to retain for the next generation are then chosen from the offspring and parent populations by the ATM strategy.

Fig. 2

Flowchart of generating offspring

The construction of the "DE/rand/best/1" strategy is explained as follows. At the beginning, the "DE/rand/1" strategy is used to maintain population diversity and prevent the population from being stuck in a local optimum; it enhances global search ability because new individuals learn information from individuals randomly chosen from the whole population. As the feasibility proportion of the current population increases, it becomes necessary to accelerate convergence, so the "DE/best/1" strategy is employed; it utilizes information from the best individual in the current population to generate new individuals, which speeds up convergence. Hence, the proposed "DE/rand/best/1" strategy, shown in Algorithm 1, balances diversity and convergence speed by combining "DE/rand/1" and "DE/best/1" through the feasibility proportion of the current population. Specifically, if a value randomly generated from [0, 1] is greater than the feasibility proportion φ of the current population, the "DE/rand/1" strategy is adopted; otherwise, the "DE/best/1" strategy is employed.

Algorithm 1 The “DE/rand/best/1” strategy

if rand(0, 1) > φ, where φ denotes the feasibility proportion of current population

$${\varvec{v}}_{i} = {\varvec{x}}_{{r_{1} }} + F \times ({\varvec{x}}_{{r_{2} }} - {\varvec{x}}_{{r_{3} }} )\quad \#{\text{DE/rand/1}}\#$$

else

$${\varvec{v}}_{i} = {\varvec{x}}_{\text{best}} + F \times ({\varvec{x}}_{{r_{1} }} - {\varvec{x}}_{{r_{2} }} )\quad \#{\text{DE/best/1}}\#$$

end
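Algorithm 1 can be sketched in a few lines; the population layout, function name and rng seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)  # illustrative seed

def de_rand_best_1(pop, i, best, phi, F=0.8):
    """Proposed "DE/rand/best/1" strategy (Algorithm 1): switch between
    DE/rand/1 (exploration) and DE/best/1 (exploitation) according to
    the feasibility proportion phi of the current population."""
    pool = [k for k in range(len(pop)) if k != i]
    if rng.random() > phi:
        r1, r2, r3 = rng.choice(pool, 3, replace=False)
        return pop[r1] + F * (pop[r2] - pop[r3])      # DE/rand/1
    r1, r2 = rng.choice(pool, 2, replace=False)
    return pop[best] + F * (pop[r1] - pop[r2])        # DE/best/1
```

When the population is entirely infeasible (φ = 0) the strategy behaves as pure DE/rand/1; as φ approaches 1 it increasingly exploits the best individual.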

4.2 Adaptive Trade-Off Model

Generally, a constraint-handling technique encounters three different situations during the evolutionary process: (1) the infeasible situation, in which the population includes only infeasible solutions; (2) the semi-feasible situation, in which the population includes feasible and infeasible individuals simultaneously; and (3) the feasible situation, in which the population includes only feasible individuals. The ATM strategy constructs an effective trade-off scheme to address the constraints in each situation according to its characteristics.

4.2.1 Infeasible Situation

In the infeasible situation, a hierarchical non-dominated selection strategy is introduced to choose individuals from the Pareto front into the next population. It is executed as follows: the first half of the non-dominated individuals with smaller constraint violations is selected into the offspring population and immediately eliminated from the parent population. This process repeats until the number of selected individuals reaches the size of the offspring population.

4.2.2 Semi-feasible Situation

In this situation, in order to balance the influence of objective value and constraint violation, an adaptive fitness transformation is employed to calculate the fitness function f fit(x i ) of individual x i . Firstly, the population is divided into the feasible group (Z 1) and the infeasible group (Z 2). The objective function value f(x i ) of solution x i is converted into

$$ f^{'} (\varvec{x}_{i} ) = \left\{ {\begin{array}{*{20}l} {f(\varvec{x}_{i} ),\quad i \in Z_{1} ,} \hfill \\ {\hbox{max} \{ \varphi f(\varvec{x}_{\text{best}} ) + (1 - \varphi )f(\varvec{x}_{\text{worst}} ),f(\varvec{x}_{i} )\} ,\quad i \in Z_{2} ,} \hfill \\ \end{array} } \right. $$
(16)

where \(f^{'} ({\varvec{x}}_{i} )\) is the converted objective function value of solution x i , φ is the feasibility proportion of the current population, and x best and x worst are the best and worst solutions in group Z 1, respectively. To assign equal importance to the objective function and the constraint violation, the converted objective is normalized as

$$ f_{\text{nor}} ({{x}}_{i} ) = \frac{{f^{'} ({{x}}_{i} ) - \mathop {\hbox{min} }\limits_{{j \in Z_{1} \cup Z_{2} }} f^{'} ({{x}}_{j} )}}{{\mathop {\hbox{max} }\limits_{{j \in Z_{1} \cup Z_{2} }} f^{'} ({{x}}_{j} ) - \mathop {\hbox{min} }\limits_{{j \in Z_{1} \cup Z_{2} }} f^{'} ({{x}}_{j} )}} . $$
(17)

The constraint violation value calculated by Eq. (4) is also normalized as

$$ G_{\text{nor}} (\varvec{x}_{i} ) = \left\{ {\begin{array}{*{20}l} {0,\quad i \in Z_{1} ,} \hfill \\ {\frac{{G(\varvec{x}_{i} ) - \mathop {\hbox{min} }\limits_{{j \in Z_{2} }} G(\varvec{x}_{j} )}}{{\mathop {\hbox{max} }\limits_{{j \in Z_{2} }} G(\varvec{x}_{j} ) - \mathop {\hbox{min} }\limits_{{j \in Z_{2} }} G(\varvec{x}_{j} )}},\quad i \in Z_{2} .} \hfill \\ \end{array} } \right. $$
(18)

Eventually, the final fitness function of solution x i is calculated by

$$ f_{\text{fit}} ({{x}}_{i} ) = f_{\text{nor}} ({{x}}_{i} ) + G_{\text{nor}} ({{x}}_{i} ),\;i \in Z_{1} \cup Z_{2} . $$
(19)

The individuals are ranked based on the values of f fit(·) in ascending order, and the individuals with smaller values are chosen to add into the offspring population until reaching its allowable size.
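The chain of Eqs. (16)–(19) can be sketched as follows; the function name and the small guards against division by zero (for degenerate populations) are added assumptions, and at least one feasible individual is assumed, as the semi-feasible situation guarantees:

```python
import numpy as np

def semi_feasible_fitness(f_vals, G_vals, phi):
    """Adaptive fitness transformation of Eqs. (16)-(19).

    f_vals: objective values of each individual.
    G_vals: constraint violations (Eq. (4)); zero means feasible.
    phi:    feasibility proportion of the current population.
    """
    f_vals = np.asarray(f_vals, float)
    G_vals = np.asarray(G_vals, float)
    feas = G_vals == 0.0
    f_best, f_worst = f_vals[feas].min(), f_vals[feas].max()
    # Eq. (16): infeasible objectives are raised to at least a weighted
    # combination of the best and worst feasible objectives.
    f_conv = np.where(feas, f_vals,
                      np.maximum(phi * f_best + (1 - phi) * f_worst, f_vals))
    # Eq. (17): normalize objectives over the whole population.
    f_nor = (f_conv - f_conv.min()) / max(f_conv.max() - f_conv.min(), 1e-12)
    # Eq. (18): normalize violations over the infeasible group only.
    G_nor = np.zeros_like(G_vals)
    if (~feas).any():
        G_inf = G_vals[~feas]
        G_nor[~feas] = (G_inf - G_inf.min()) / max(G_inf.max() - G_inf.min(), 1e-12)
    return f_nor + G_nor                                   # Eq. (19)
```

Ranking the returned values in ascending order then gives the selection order described above.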

4.2.3 Feasible Situation

In the feasible situation, the COP is equivalent to an unconstrained optimization problem because the constraint violation of every individual is zero. Hence, only the objective function needs to be considered, and Eq. (19) can still be used as the selection criterion because G nor(·) is zero.

4.3 Shrinking Space Technique

The shrinking space technique is one of the most pivotal ingredients of IS-PAES [27] and AATM [28]. It aims to reduce the search region so as to focus the computational effort on the promising feasible region. The main procedure of the shrinking space technique is carried out as Algorithm 2, where T denotes that the technique is performed every T generations, α i is a threshold, β is a reduction factor, and \(\bar{x}_{pob,i}\) and \(\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{pob,i}\) denote the upper and lower bounds of the i-th variable in the selected offspring population, respectively. The search space is then shrunk around the promising individuals to determine new boundaries for the design variables.

Algorithm 2 The shrinking space technique (presented as a figure in the original article)
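Since Algorithm 2 appears only as a figure in the article, the following is a hedged sketch of a plausible shrinking step consistent with the text: contract the bounds toward the hypercube spanned by the selected offspring population, inflated by the factor β, never expanding beyond the current bounds. The exact update rule, the role of α i and all names here are assumptions:

```python
import numpy as np

def shrink_bounds(pop, x_min, x_max, beta=0.1, alpha=None):
    """Assumed sketch of one shrinking space step.

    pop:   selected offspring population, shape (NP, n).
    beta:  inflation factor around the population's bounding box
           (the paper sets beta = 0.02 ** (1 / n) in Eq. (20)).
    alpha: optional per-variable threshold widths below which
           shrinking is skipped (assumed interpretation of alpha_i).
    """
    x_pob_min = pop.min(axis=0)          # lower bounds of selected population
    x_pob_max = pop.max(axis=0)          # upper bounds of selected population
    width = x_pob_max - x_pob_min
    new_min = np.maximum(x_min, x_pob_min - beta * width)
    new_max = np.minimum(x_max, x_pob_max + beta * width)
    if alpha is not None:
        # Do not shrink a variable whose new range would fall below alpha_i.
        too_narrow = (new_max - new_min) < alpha
        new_min = np.where(too_narrow, x_min, new_min)
        new_max = np.where(too_narrow, x_max, new_max)
    return new_min, new_max
```

Applying such an update every T generations progressively concentrates sampling (Eq. (8)) on the region occupied by surviving individuals.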

4.4 Framework of ATMDE

The ATMDE algorithm, which combines an improved differential evolution, the adaptive trade-off model and the shrinking space technique, is constructed to deal with COPs; its main procedure is shown in Fig. 3. Firstly, it randomly generates NP individuals from the search domain [x min, x max] and calculates the constraint violation G(x), the function value f(x) and the feasibility proportion φ of the current population. Secondly, it generates 3NP offspring individuals from the parent population P g by the improved DE operator. Thirdly, the better NP individuals are selected from the combined population Q g for the next generation by the ATM strategy, and the shrinking space technique is then employed to reduce the search region and focus the search effort on the promising feasible region when the given condition is satisfied. Finally, the procedure repeats until the stopping criterion is met (the maximum generation or the maximum number of function fitness evaluations, max_FFEs).

Fig. 3

Framework of ATMDE

5 Benchmark Test Functions

In this part, the performance of the proposed algorithm is verified on 18 benchmark functions from the IEEE CEC2006 [29]. The details of these test functions are listed in Table 1. In this table, ρ = F/S is the estimated ratio between the feasible region and the whole search space, where F is the number of feasible solutions among S = 1,000,000 solutions randomly generated from the search domain [x min, x max], and N denotes the number of constraints.

Table 1 Details about 18 benchmark functions

5.1 Parameter Settings

For the numerical simulations, the following parameter settings are utilized unless otherwise mentioned: population size NP = 50, tolerance of equality δ = 1 × 10−4, crossover rate CR = 0.9, scaling factor F = 0.8, and maximum generation g = 600. Meanwhile, the parameters of the shrinking space technique are set as

$$\begin{aligned} T &= 20, \hfill \\ \beta &= \sqrt[n]{0.02}, \hfill \\ \alpha_{i} &= \left( x_{{\text{max}},i}^{0} - x_{{\text{min}},i}^{0} \right) \Big/ \left( 20 \times 3^{\lg \left( x_{{\text{max}},i}^{0} - x_{{\text{min}},i}^{0} \right)} \right). \hfill \\ \end{aligned}$$
(20)

To compare the robustness of different algorithms, the benchmark functions are optimized over 30 independent runs. Statistical measures of the optimal solutions, such as the mean and standard deviation, are then used for comparison.

5.2 General Performance of ATMDE

The numerical results obtained by the ATMDE algorithm on the eighteen benchmark functions are summarized in Table 2. The table includes the known "optimal" solution for each benchmark function and the "best", "median", "mean", "worst" and "standard deviation" values obtained by ATMDE. Table 2 shows that ATMDE has a strong ability to converge to the global optima for all test functions except g02 and g19. However, the results for g02 and g19 are extremely close to the known "optimal" values with small standard deviations. The remaining sixteen test functions (g01, g03, g04, g05, g06, g07, g08, g09, g10, g11, g12, g14, g15, g16, g18, g24) consistently achieve the "optimal" values in terms of the "best", "median", "mean", "worst" and "standard deviation" criteria. In particular, functions g01 and g12 converge to the "optimal" values with zero standard deviation, which means that all 30 independent runs obtained the corresponding optimal values.

Table 2 Results obtained by ATMDE for 18 benchmark test function over 30 independent runs

5.3 ATMDE Compared with AATM

The principal aim of this part is to verify that the improved DE used as the search optimizer is effective and can further enhance the performance of AATM. To make a fair comparison, the AATM results on the benchmark functions are taken from the original literature [28]. The comparison between AATM and ATMDE is listed in Table 3, where w/t/l denotes that the proposed ATMDE wins on w functions, ties on t functions, and loses on l functions compared with AATM. The results of ATMDE are significantly better than those of AATM on 8, 11, 11 and 15 functions in terms of the "best", "mean", "worst" and "standard deviation" criteria, respectively, and tie on 10, 7, 7 and 1 functions for the same criteria. For the standard deviation, ATMDE only loses on functions g08 and g24, but the differences are extremely small and both algorithms achieve the "optimal" results with exceedingly small standard deviations. Based on the above comparison, ATMDE clearly achieves better performance than AATM in terms of result quality on these benchmark functions.

Table 3 Comparison results of ATMDE and AATM on 18 benchmark test functions

Furthermore, the computational costs of ATMDE and AATM are both relatively low compared with the IS-PAES algorithm [27], while ATMDE outperforms AATM in terms of result quality. It should be noted that the comparison between AATM and IS-PAES is given in Ref. [28], where AATM achieves better performance than IS-PAES with fewer fitness function evaluations (FFEs). Hence, ATMDE is an effective and efficient algorithm for solving COPs with limited FFEs.

5.4 Effectiveness of the “DE/rand/best/1” Strategy

In order to verify the effectiveness of the proposed "DE/rand/best/1" strategy, the 18 test functions are used in another numerical simulation (denoted ATMDE1) in which only "DE/rand/1", without the "DE/best/1" strategy, is used to generate the first offspring y 1 throughout the search process. For each function, 30 independent runs are again conducted without changing any parameter settings. The comparison of ATMDE and ATMDE1 is summarized in Table 4. Eleven functions (g04, g05, g06, g08, g09, g11, g12, g14, g15, g18, g24) consistently converge to the global optima under both ATMDE and ATMDE1. However, for the other seven functions (g01, g02, g03, g07, g10, g16, g19), ATMDE achieves the global optima while ATMDE1 cannot consistently obtain them, especially for g02, g03 and g10. More specifically, the results achieved by ATMDE are better than those of ATMDE1 on 6, 6, 7, 7 and 14 functions in terms of the "best", "median", "mean", "worst" and "standard deviation" criteria, respectively, and tie on 12, 12, 11, 11 and 3 functions for the same criteria. For the standard deviation criterion, ATMDE1 obtains a smaller value than ATMDE on function g11, but both obtain the optimal result with an exceedingly small difference. Based on the above analysis, the proposed "DE/rand/best/1" strategy is an important part of the ATMDE algorithm.

Table 4 Results obtained by ATMDE and ATMDE1 on 18 benchmark test functions

5.5 Four Mechanical Benchmark Engineering Designs

The four mechanical benchmark engineering problems have been used by many researchers to demonstrate the performance of different algorithms. The characteristics of their objective functions and constraints are shown in Table 5. The four designs [28] are the minimum cost of a welded-beam design, the minimum weight of a spring design, the minimum weight of a speed-reducer design and the minimum volume of a three-bar truss design. Table 6 summarizes the comparative results of these problems solved by ATMDE and AATM in terms of the "best", "mean", "worst" and "standard deviation" metrics. It shows that ATMDE has better statistical quality and robustness than AATM under the same number of function fitness evaluations. Moreover, the four standard deviations obtained by ATMDE are relatively small, which is a crucial feature for applying the approach to real-world problems. Table 7 lists the best design variables obtained by ATMDE and AATM for the four engineering design problems together with their corresponding optimal results, which show that the ATMDE algorithm can obtain better solutions than AATM.

Table 5 Main features for each engineering design problem
Table 6 Results about four benchmark engineering design problems
Table 7 Best design variables for four benchmark engineering design problems

6 Engineering Applications

6.1 Vehicle Crashworthiness Problem

In the automotive industry, structural optimization design for vehicle crashworthiness has become an important research field. In this paper, the characteristics of low- and high-speed crashworthiness are considered simultaneously [30]. For a frontal impact, the most crucial energy-absorbing components, including the rail, collision beam and stiffener, directly affect vehicle crashworthiness and safety. Therefore, the total mass M(x) of the selected parts, including the collision beam, stiffener, front rail and front rail cover as shown in Fig. 4, is taken as the optimization objective, subject to acceleration, energy-absorption and maximum-intrusion constraints. The crashworthiness problem can then be formulated as

$$
\begin{aligned}
& \min\; f(\boldsymbol{x}) = M(\boldsymbol{x}), \\
& \text{s.t.}\;
\begin{cases}
g_{1}(\boldsymbol{x}) = \bar{a}(\boldsymbol{x}) - 35 \le 0, \\
g_{2}(\boldsymbol{x}) = E(\boldsymbol{x}) - 300 \le 0, \\
g_{3}(\boldsymbol{x}) = I_{\text{up}}(\boldsymbol{x}) - 350 \le 0, \\
g_{4}(\boldsymbol{x}) = I_{\text{down}}(\boldsymbol{x}) - 200 \le 0,
\end{cases}
\end{aligned}
$$
(21)

where \(\boldsymbol{x} = [x_{1}, x_{2}, x_{3}, x_{4}, x_{5}]\), 2 mm ≤ \(x_{1}\) ≤ 3 mm, 1 mm ≤ \(x_{2}, x_{3}\) ≤ 2.5 mm, 1.5 mm ≤ \(x_{4}, x_{5}\) ≤ 3 mm; \(\bar{a}(\boldsymbol{x})\) is the mean value of the integral acceleration, \(E(\boldsymbol{x})\) is the energy absorbed by the inner and outer front rail, and \(I_{\text{up}}(\boldsymbol{x})\) and \(I_{\text{down}}(\boldsymbol{x})\) are the intrusions at the two measurement points on the engine shown in Fig. 5.
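As a rough illustration of how the constraints in Eq. (21) can be handled numerically, the sketch below evaluates the four constraint functions and aggregates their positive parts into a scalar violation, as is common in feasibility-based selection. The four response functions are hypothetical linear placeholders; in the actual study these quantities come from crash simulations or response surfaces fitted to them.

```python
import numpy as np

# Hypothetical linear placeholders for the crash responses of Eq. (21); the
# real quantities come from FEM crash simulations or fitted response surfaces.
def mean_acceleration(x):   # a_bar(x), in g
    return 30.0 + 2.0 * x[0]

def absorbed_energy(x):     # E(x), in J
    return 250.0 + 10.0 * x[1]

def intrusion_up(x):        # I_up(x), in mm
    return 400.0 - 25.0 * x[3]

def intrusion_down(x):      # I_down(x), in mm
    return 230.0 - 15.0 * x[4]

def constraint_values(x):
    """g_i(x) of Eq. (21); g_i(x) <= 0 means constraint i is satisfied."""
    return np.array([
        mean_acceleration(x) - 35.0,
        absorbed_energy(x) - 300.0,
        intrusion_up(x) - 350.0,
        intrusion_down(x) - 200.0,
    ])

def total_violation(x):
    """Sum of the positive parts of g(x); equals 0 for feasible designs."""
    return float(np.sum(np.maximum(constraint_values(x), 0.0)))
```

A design with `total_violation(x) == 0` satisfies all four constraints; the scalar violation is the quantity that feasibility-based comparison rules use to rank infeasible candidates.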

Fig. 4
figure 4

Selected design variables

Fig. 5
figure 5

Intrusion measured by the selected points of engine

The finite element model of the vehicle, comprising 755 parts and 977 742 elements, is established to evaluate the above objective and constraints. To improve efficiency, response surfaces are constructed from samples generated by the Latin hypercube sampling method. The minimum mass M(x) obtained by the ATMDE algorithm is 10.53 kg, with the five design variables equal to 2.00 mm, 2.50 mm, 2.50 mm, 2.76 mm and 1.68 mm, respectively. At this optimum, the constraint values are 0 g, −4208.33 J, −60.74 mm and −0.0002 mm, respectively. Specifically, the mean value of the integral acceleration \(\bar{a}(\boldsymbol{x})\) is 35 g, which satisfies the acceleration constraint and helps protect passengers when a collision occurs. Meanwhile, the inner and outer front rail absorb 3908.33 J. In addition, the intrusions at the upper and lower points of the engine are 289.26 mm and 200 mm, which effectively limits intrusion and thus reduces the risk of occupant injury.
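The surrogate-building step described above can be sketched as follows: Latin hypercube samples are drawn over the design bounds of Eq. (21), and a quadratic response surface is fitted by least squares. The `expensive_response` function is an assumed toy stand-in for one FEM output, not the paper's crash model.

```python
import numpy as np

rng = np.random.default_rng(0)
lb = np.array([2.0, 1.0, 1.0, 1.5, 1.5])   # lower bounds from Eq. (21), in mm
ub = np.array([3.0, 2.5, 2.5, 3.0, 3.0])   # upper bounds, in mm

def latin_hypercube(n, d):
    """One random point in each of n equal strata per dimension, shuffled."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

X = lb + latin_hypercube(60, 5) * (ub - lb)   # 60 training designs

def expensive_response(x):
    # assumed toy stand-in for one expensive FEM output, e.g. the mass M(x)
    return x @ np.arange(1.0, 6.0) + 0.1 * np.sum(x**2)

y = np.array([expensive_response(x) for x in X])

def quad_features(x):
    # polynomial basis [1, x_i, x_i^2] (no cross terms, for brevity)
    return np.concatenate(([1.0], x, x**2))

A = np.array([quad_features(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit

def surrogate(x):
    """Cheap response-surface approximation of the expensive model."""
    return quad_features(x) @ coef
```

Once fitted, `surrogate` replaces the expensive model inside the optimization loop; since the toy response lies exactly in the quadratic basis here, the fit is essentially exact, whereas a real FEM response would carry approximation error that must be checked.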

6.2 Structural Optimization Design of Tablet Computer

The tablet computer is currently a typical popular consumer electronic device, characterized by high integration density and large power dissipation. Its structural design must therefore consider various requirements, such as appearance, portability and operating safety, and guarantee that the device works well under different conditions. This subsection considers the structural optimization design of a 7-inch tablet computer, illustrated in Fig. 6, which mainly includes the touch screen, the display, the battery, the mainboard, the inner bracket, the front shell and the back shell [31]. The optimization objective is to minimize the tablet's thickness D(x). The design must satisfy four practical working conditions: the high-temperature constraint \(g_{1}(\boldsymbol{x})\), the room-temperature constraint \(g_{2}(\boldsymbol{x})\), the alternating-temperature constraint \(g_{3}(\boldsymbol{x})\) and the free-fall constraint \(g_{4}(\boldsymbol{x})\). Hence, the optimization problem can be formulated as

$$
\begin{aligned}
& \min\; f(\boldsymbol{x}) = D(\boldsymbol{x}), \\
& \text{s.t.}\;
\begin{cases}
g_{1}(\boldsymbol{x}) = T^{\text{CH}}(\boldsymbol{x}) - 65 \le 0, \\
g_{2}(\boldsymbol{x}) = T^{\text{SH}}(\boldsymbol{x}) - 40 \le 0, \\
g_{3}(\boldsymbol{x}) = \varGamma^{\text{BA}}(\boldsymbol{x}) - 24 \le 0, \\
g_{4}(\boldsymbol{x}) = \varGamma^{\text{TS}}(\boldsymbol{x}) - 100 \le 0,
\end{cases}
\end{aligned}
$$
(22)

where \(T^{\text{CH}}(\boldsymbol{x})\) denotes the temperature of the chip on the mainboard under the high ambient temperature (45 °C); \(T^{\text{SH}}(\boldsymbol{x})\) is the shell-surface temperature after one hour of continuous full-load operation at room temperature (25 °C); \(\varGamma^{\text{BA}}(\boldsymbol{x})\) is the thermal stress of the battery; and \(\varGamma^{\text{TS}}(\boldsymbol{x})\) is the maximal stress of the touch screen under the impact of a 0.5 m free fall. The design variables are the thicknesses of the front shell \(x_{1}\), the display \(x_{2}\), the inner bracket \(x_{3}\) and the back shell \(x_{4}\), restricted to 4 mm ≤ \(x_{1}\) ≤ 6 mm and 0.5 mm ≤ \(x_{2}, x_{3}, x_{4}\) ≤ 2 mm.
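To make the constraint-handling idea concrete, the following minimal sketch runs a basic DE/rand/1 with binomial crossover and Deb's feasibility rules on a toy two-variable problem of the same shape as Eq. (22) (minimize a "thickness" subject to g(x) ≤ 0). This is a generic illustration, not the authors' ATMDE: it omits the adaptive trade-off model, the "DE/rand/best/1" strategy and the shrinking space technique, and the objective and constraint are assumed toy functions.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):            # toy objective: total "thickness"
    return float(np.sum(x))

def violation(x):    # toy constraint x1 + 2*x2 >= 3, i.e. g = 3 - x1 - 2*x2 <= 0
    return max(0.0, 3.0 - x[0] - 2.0 * x[1])

def better(xa, xb):
    """Deb's feasibility rules: feasible beats infeasible; otherwise compare
    by objective (both feasible) or by violation (both infeasible)."""
    va, vb = violation(xa), violation(xb)
    if va == 0.0 and vb == 0.0:
        return f(xa) <= f(xb)
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va <= vb

lb, ub = np.array([0.0, 0.0]), np.array([5.0, 5.0])
NP, F, CR = 20, 0.7, 0.9
pop = lb + rng.random((NP, 2)) * (ub - lb)

for _ in range(200):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lb, ub)  # DE/rand/1 mutation
        mask = rng.random(2) < CR
        mask[rng.integers(2)] = True                 # guarantee one gene crosses over
        u = np.where(mask, v, pop[i])                # binomial crossover
        if better(u, pop[i]):                        # feasibility-rule selection
            pop[i] = u

best = min(pop, key=lambda x: (violation(x), f(x)))
```

On this toy problem the constrained optimum is x = (0, 1.5) with f = 1.5, and the population converges to a feasible design close to it.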

Fig. 6
figure 6

7-inch tablet computer [31]

Four finite element models are constructed for the above four performance constraints. To improve efficiency, four corresponding response surfaces are established from the given samples, and their accuracy is verified. The ATMDE algorithm is then used to solve the tablet optimization problem. The optimized total thickness is 6.42 mm, a 31.7% reduction compared with the 9.40 mm of the original design (whose variable values are 6.00 mm, 1.20 mm, 1.20 mm and 1.00 mm); the optimized design variables are 4.00 mm, 0.51 mm, 1.41 mm and 0.50 mm, respectively. At this optimum, the chip temperature is 62.05 °C and the shell-surface temperature is 37.66 °C, which ensures comfortable daily use. The thermal stress of the battery is about 24 MPa, which ensures operating safety, and the maximal stress of the touch screen is about 100 MPa, which prevents the device from breaking in a 0.5 m free fall. The optimized structural design is meaningful because it improves the appearance and portability of the tablet while satisfying all performance constraints.
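The "accuracy of the response surfaces is verified" step can be sketched as a coefficient-of-determination (R²) check on held-out validation samples. Both functions below are hypothetical placeholders for a real FEM response and its fitted surrogate; the bounds are those of Eq. (22) for the first two variables.

```python
import numpy as np

def true_response(x):          # stands in for one FEM output, e.g. T^CH(x)
    return 50.0 + 3.0 * x[0] - 2.0 * x[1]

def surrogate(x):              # stands in for the fitted response surface
    return 50.05 + 2.95 * x[0] - 2.02 * x[1]

rng = np.random.default_rng(2)
# 30 held-out validation designs within the bounds of Eq. (22): x1 in [4, 6] mm,
# x2 in [0.5, 2] mm
X_val = rng.uniform([4.0, 0.5], [6.0, 2.0], size=(30, 2))
y_true = np.array([true_response(x) for x in X_val])
y_pred = np.array([surrogate(x) for x in X_val])

# Coefficient of determination: 1 - SS_res / SS_tot, close to 1 for a good fit
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

If the R² on validation samples falls below a chosen threshold (e.g. 0.9), the surrogate would be refined with additional samples before being trusted inside the optimizer.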

7 Conclusions

(1) An improved differential evolution with the shrinking space technique and the adaptive trade-off model, named ATMDE, is proposed to solve constrained optimization problems with high accuracy and robustness.

(2) A new mutation strategy, “DE/rand/best/1”, is constructed to generate offspring according to the feasibility proportion of the current population; the test-function results illustrate that it enhances the performance of ATMDE.

(3) In comparison with the AATM algorithm, ATMDE achieves better performance, as verified by simulation results on eighteen benchmark test functions from IEEE CEC2006.

(4) ATMDE is applied to the structural optimization design of a tablet computer; the optimized thickness is 31.7% less than that of the original design.

References

  1. HULL P V, TINKER M L, DOZIER G. Evolutionary optimization of a geometrically refined truss[J]. Structural and Multidisciplinary Optimization, 2006, 31(4): 311–319.

  2. ZHAO Y, CHEN G, WANG H, et al. Optimum selection of mechanism type for heavy manipulators based on particle swarm optimization method[J]. Chinese Journal of Mechanical Engineering, 2013, 26(4): 763–770.

  3. TANG Q, LI Z, ZHANG L, et al. Effective hybrid teaching- learning-based optimization algorithm for balancing two-sided assembly lines with multiple constraints[J]. Chinese Journal of Mechanical Engineering, 2015, 28(5): 1067–1079.

  4. WANG Y, LIU H, CAI Z, et al. An orthogonal design based constrained evolutionary optimization algorithm[J]. Engineering Optimization, 2007, 39(6): 715–736.

  5. WANG L, LI L. An effective differential evolution with level comparison for constrained engineering design[J]. Structural and Multidisciplinary Optimization, 2010, 41(6): 947–963.

  6. CHEN D N, ZHANG R X, YAO C Y, et al. Dynamic topology multi force particle swarm optimization algorithm[J]. Chinese Journal of Mechanical Engineering, 2016, 29(1): 124–135.

  7. RAO R, SAVSANI V, BALIC J. Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems[J]. Engineering Optimization, 2012, 44(12): 1447–1462.

  8. QUARANTA G, FIORE A, MARANO G C. Optimum design of prestressed concrete beams using constrained differential evolution algorithm[J]. Structural and Multidisciplinary Optimization, 2014, 49(3): 441–453.

  9. ZHAO Y, WANG H, WANG W, et al. New hybrid parallel algorithm for variable-sized batch splitting scheduling with alternative machines in job shops[J]. Chinese Journal of Mechanical Engineering, 2010, 23(4): 484–495.

  10. COELLO C A C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art[J]. Computer Methods in Applied Mechanics and Engineering, 2002, 191(11): 1245–1287.

  11. TESSEMA B, YEN G G. An adaptive penalty formulation for constrained evolutionary optimization[J]. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2009, 39(3): 565–578.

  12. DEB K, DATTA R. A bi-objective constrained optimization algorithm using a hybrid evolutionary and penalty function approach[J]. Engineering Optimization, 2013, 45(5): 503–527.

  13. DEB K. An efficient constraint handling method for genetic algorithms[J]. Computer Methods in Applied Mechanics and Engineering, 2000, 186(2): 311–338.

  14. BALAMURUGAN R, RAMAKRISHNAN C, SWAMINATHAN N. A two phase approach based on skeleton convergence and geometric variables for topology optimization using genetic algorithm[J]. Structural and Multidisciplinary Optimization, 2011, 43(3): 381–404.

  15. VENKATRAMAN S, YEN G G. A generic framework for constrained optimization using genetic algorithms[J]. IEEE Transactions on Evolutionary Computation, 2005, 9(4): 424–435.

  16. JIAO L, LI L, SHANG R, et al. A novel selection evolutionary strategy for constrained optimization[J]. Information Sciences, 2013, 239: 122–141.

  17. STORN R, PRICE K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces[J]. Journal of Global Optimization, 1997, 11(4): 341–359.

  18. STORN R. System design by constraint adaptation and differential evolution[J]. IEEE Transactions on Evolutionary Computation, 1999, 3(1): 22–34.

  19. LANDA B R, COELLO C A C. Cultured differential evolution for constrained optimization[J]. Computer Methods in Applied Mechanics and Engineering, 2006, 195(33): 4303–4322.

  20. MEZURA-MONTES E, COELLO C A C, VEL Z J, et al. Multiple trial vectors in differential evolution for engineering design[J]. Engineering Optimization, 2007, 39(5): 567–589.

  21. ZHANG M, LUO W, WANG X. Differential evolution with dynamic stochastic selection for constrained optimization[J]. Information Sciences, 2008, 178(15): 3043–3074.

  22. MALLIPEDDI R, SUGANTHAN P N. Ensemble of constraint handling techniques[J]. IEEE Transactions on Evolutionary Computation, 2010,14(4): 561–579.

  23. ELSAYED S M, SARKER R A, ESSAM D L. Multi-operator based evolutionary algorithms for solving constrained optimization problems[J]. Computers & Operations Research, 2011, 38(12): 1877–1896.

  24. GONG W, CAI Z, LIANG D. Engineering optimization by means of an improved constrained differential evolution[J]. Computer Methods in Applied Mechanics and Engineering, 2014, 268: 884–904.

  25. WANG C, FANG Y, GUO S. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm[J]. Chinese Journal of Mechanical Engineering, 2015, 28(4): 702–715.

  26. WANG Y, CAI Z, ZHOU Y, et al. An adaptive tradeoff model for constrained evolutionary optimization[J]. IEEE Transactions on Evolutionary Computation, 2008, 12(1): 80–92.

  27. AGUIRRE A H, RIONDA S B, COELLO C A C, et al. Handling constraints using multiobjective optimization concepts[J]. International Journal for Numerical Methods in Engineering, 2004, 59(15): 1989–2017.

  28. WANG Y, CAI Z, ZHOU Y. Accelerating adaptive trade-off model using shrinking space technique for constrained evolutionary optimization[J]. International Journal for Numerical Methods in Engineering, 2009, 77(11): 1501–1534.

  29. LIANG J J, RUNARSSON T, MEZURA-MONTES E, et al. Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization[R]. Technical report, Nanyang Technological University, Singapore. 2006.

  30. JIANG C, DENG S L. Multi-objective optimization and design considering automotive high-speed and low-speed crashworthiness[J]. Chinese Journal of Computational Mechanics, 2014, 31(4): 474–479. (in Chinese)

  31. HUANG Z, JIANG C, ZHOU Y, et al. Reliability-based design optimization for problems with interval distribution parameters[J]. Structural and Multidisciplinary Optimization, 2016, 1–16, doi:10.1007/s00158-016-1505-3.

Author information

Correspondence to Yadong XU.

Additional information

Supported by National Science Foundation for Excellent Young Scholars, China (Grant No. 51222502), Funds for Distinguished Young Scientists of Hunan Province, China (Grant No. 14JJ1016), and Major Program of National Natural Science Foundation of China (Grant No. 51490662).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

FU, C., XU, Y., JIANG, C. et al. Improved Differential Evolution with Shrinking Space Technique for Constrained Optimization. Chin. J. Mech. Eng. 30, 553–565 (2017). https://doi.org/10.1007/s10033-017-0130-4
