The aim of this work is to present a novel approach based on artificial neural networks for finding numerical solutions of first-order fuzzy differential equations under the generalized H-derivation. The differentiability concept used in this paper is generalized differentiability, since a fuzzy differential equation under this differentiability can have two solutions. The fuzzy trial solution of the fuzzy initial value problem is written as the sum of two parts. The first part satisfies the fuzzy initial condition and contains no adjustable parameters. The second part involves a feed-forward neural network containing adjustable parameters. Under some conditions, the proposed method provides numerical solutions with high accuracy.
Nowadays, fuzzy differential equations (FDEs) are a popular topic studied by many researchers, since they are widely used for modeling problems in science and engineering. Most practical problems require the solution of an FDE satisfying fuzzy initial or fuzzy boundary conditions, so a fuzzy initial or fuzzy boundary value problem must be solved. However, many fuzzy initial or boundary value problems cannot be solved exactly, and sometimes it is even impossible to find their analytical solutions. Thus, considering their approximate solutions is becoming increasingly important [1].
The theory of FDEs was first formulated by Kaleva and Seikkala. Kaleva formulated FDEs in terms of the Hukuhara derivative (H-derivative). Buckley and Feuring have given a very general formulation of a first-order fuzzy initial value problem: they first find the crisp solution, fuzzify it, and then check whether it satisfies the fuzzy differential equation [2].
In recent years, artificial neural networks (ANNs) have been used for the approximation of ordinary differential equations (ODEs) and partial differential equations (PDEs). We briefly review some articles in the literature concerning differential equations. In 1990, Lee and Kang [3] used parallel processor computers to solve a first-order differential equation with Hopfield neural network models. In 1994, Meade and Fernandes [4, 5] solved linear and nonlinear ODEs using feed-forward neural network (FFNN) architectures and B-splines of degree one. In 1997, Lagaris, Likas, et al. [6, 7] used ANNs for solving ODEs and PDEs with initial/boundary value problems. In 1999, Liu and Jammes [8] developed some properties of the trial solution for solving ODEs using ANNs. In 2003, Alli, Ucar, et al. [9] solved vibration control problems using ANNs. In 2004, Tawfiq [10] presented and developed supervised and unsupervised algorithms for solving ODEs and PDEs. In 2006, Malek and Shekari [11] presented a numerical method based on ANNs and optimization techniques in which the solution of a higher-order ODE is approximated in closed analytical form by specific functions. In 2008, Pattanaik and Mishra [12] applied and developed some properties of ANNs for the solution of PDEs in RF engineering. In 2010, Baymani, Kerayechian, et al. [13] proposed an ANN approach for solving Stokes problems. In 2011, Oraibi [14] designed FFNNs for solving ordinary initial value problems. In 2012, Ali [15] designed fast FFNNs to solve two-point boundary value problems. In 2013, Hussein [16] designed fast FFNNs to solve singular boundary value problems. In 2014, Tawfiq and Al-Abrahemee [17] designed ANNs to solve singular perturbation problems; many other researchers have contributed as well.
The numerical solution of FDEs using ANNs is a very recent subject, going back only to 2010. In 2010, Effati and Pakdaman [18] used ANNs for solving FDEs; they were the first to use ANNs to approximate fuzzy initial value problems. In 2012, Mosleh and Otadi [19] used ANNs for solving fuzzy Fredholm integro-differential equations. In 2013, Ezadi, Parandin, et al. [20] used ANNs based on semi-Taylor series to solve first-order FDEs. In 2016, Suhhiem [21] developed and used fuzzy ANNs for solving fuzzy and non-fuzzy differential equations.
In 2008, the concept of generalized Hukuhara differentiability was studied by Chalco-Cano and Roman-Flores [22, 23] to solve FDEs. In this work, for solving FDEs under the generalized H-derivation, we present a modified method which relies on the function approximation capabilities of FFNNs and results in the construction of a solution written in a differentiable, closed analytic form. This form employs an FFNN as the basic approximation element, whose parameters (weights and biases) are adjusted to minimize an appropriate error function. To train the ANN which we design, we employ optimization techniques, which in turn require the computation of the gradient of the error with respect to the network parameters. In the proposed approach, the model function is expressed as the sum of two terms: the first term satisfies the fuzzy initial/fuzzy boundary conditions and contains no adjustable parameters; the second term is found using an FFNN, which is trained so as to satisfy the FDE. It is necessary to note that the solution of the FDE by an ANN is based on converting the FDE into a system of ODEs.
In this section, the basic notations used in fuzzy calculus are introduced.
Definition (1) [19]: The r-level (or r-cut) set of a fuzzy set $\tilde{A}$, labeled by $[\tilde{A}]_r$, is the crisp set of all $x$ in $X$ (the universal set) such that $\mu_{\tilde{A}}(x) \ge r$; i.e.

$$[\tilde{A}]_r = \{x \in X : \mu_{\tilde{A}}(x) \ge r\}, \quad r \in (0, 1]. \qquad (1)$$
Definition (2) [20]: Extension Principle

Let $X_1 \times X_2 \times \cdots \times X_m$ be the Cartesian product of the universes $X_1, X_2, \ldots, X_m$ and let $\tilde{A}_1, \tilde{A}_2, \ldots, \tilde{A}_m$ be $m$ fuzzy subsets in $X_1, X_2, \ldots, X_m$, respectively, with Cartesian product $\tilde{A} = \tilde{A}_1 \times \tilde{A}_2 \times \cdots \times \tilde{A}_m$, and let $f$ be a function from $X_1 \times X_2 \times \cdots \times X_m$ to a universe $Y$, $y = f(x_1, x_2, \ldots, x_m)$. Then the extension principle allows us to define a fuzzy subset $\tilde{B} = f(\tilde{A})$ in $Y$ by

$$\tilde{B} = \{(y, \mu_{\tilde{B}}(y)) : y = f(x_1, \ldots, x_m),\ (x_1, \ldots, x_m) \in X_1 \times \cdots \times X_m\},$$

where

$$\mu_{\tilde{B}}(y) = \begin{cases} \displaystyle\sup_{(x_1, \ldots, x_m) \in f^{-1}(y)} \min\{\mu_{\tilde{A}_1}(x_1), \ldots, \mu_{\tilde{A}_m}(x_m)\}, & f^{-1}(y) \neq \emptyset, \\ 0, & \text{otherwise}, \end{cases} \qquad (2)$$

and $f^{-1}(y)$ is the inverse image of $y$.

For $m = 1$, the extension principle reduces to

$$\tilde{B} = f(\tilde{A}) = \{(y, \mu_{\tilde{B}}(y)) : y = f(x),\ x \in X\},$$

where

$$\mu_{\tilde{B}}(y) = \begin{cases} \displaystyle\sup_{x \in f^{-1}(y)} \mu_{\tilde{A}}(x), & f^{-1}(y) \neq \emptyset, \\ 0, & \text{otherwise}. \end{cases} \qquad (3)$$
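To make the sup-min computation in (2)-(3) concrete, the following is a minimal sketch (not from the paper) of the one-dimensional case $m = 1$ on a finite grid; the grid sizes, the tolerance `tol`, the triangular membership, and the choice $f(x) = x^2$ are all illustrative assumptions.

```python
import numpy as np

def triangular_membership(x, a1, a2, a3):
    # Membership of the triangular fuzzy number (a1, a2, a3) at x.
    return np.maximum(np.minimum((x - a1) / (a2 - a1), (a3 - x) / (a3 - a2)), 0.0)

def extend(f, xs, mu_A, ys, tol=1e-2):
    # mu_B(y) = sup{ mu_A(x) : f(x) = y }, with the preimage of y taken
    # approximately as the grid points x where |f(x) - y| < tol.
    fx = f(xs)
    mu_B = np.zeros_like(ys)
    for i, y in enumerate(ys):
        hits = np.abs(fx - y) < tol
        mu_B[i] = mu_A[hits].max() if hits.any() else 0.0  # 0 when the preimage is empty
    return mu_B

xs = np.linspace(-3.0, 3.0, 2001)
mu_A = triangular_membership(xs, -1.0, 0.0, 1.0)
ys = np.linspace(0.0, 1.5, 151)
mu_B = extend(lambda x: x**2, xs, mu_A, ys)  # image of a triangular number under x^2
```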
Definition (3) [1]: Fuzzy Number

A fuzzy number is completely determined by an ordered pair of functions $(\underline{u}(r), \overline{u}(r))$, $0 \le r \le 1$, which satisfy the following requirements:

1) $\underline{u}(r)$ is a bounded, left-continuous, non-decreasing function on $[0, 1]$.

2) $\overline{u}(r)$ is a bounded, left-continuous, non-increasing function on $[0, 1]$.

3) $\underline{u}(r) \le \overline{u}(r)$, $0 \le r \le 1$.

The crisp number $a$ is simply represented by

$$\underline{u}(r) = \overline{u}(r) = a, \quad 0 \le r \le 1.$$

The set of all the fuzzy numbers is denoted by $E^1$.
Remark (1) [19]: For arbitrary $u = (\underline{u}(r), \overline{u}(r))$, $v = (\underline{v}(r), \overline{v}(r))$ and $k \in \mathbb{R}$, the addition and the multiplication by $k$ can be defined as:

1) $$\underline{(u + v)}(r) = \underline{u}(r) + \underline{v}(r), \qquad (4)$$

2) $$\overline{(u + v)}(r) = \overline{u}(r) + \overline{v}(r), \qquad (5)$$

3) $$\underline{(ku)}(r) = k\,\underline{u}(r), \quad \overline{(ku)}(r) = k\,\overline{u}(r), \quad k \ge 0, \qquad (6)$$

4) $$\underline{(ku)}(r) = k\,\overline{u}(r), \quad \overline{(ku)}(r) = k\,\underline{u}(r), \quad k < 0, \qquad (7)$$

for all $r \in [0, 1]$.
Remark (2) [2]: The distance between two arbitrary fuzzy numbers $u = (\underline{u}(r), \overline{u}(r))$ and $v = (\underline{v}(r), \overline{v}(r))$ is given by

$$D(u, v) = \sup_{r \in [0, 1]} \max\{|\underline{u}(r) - \underline{v}(r)|,\ |\overline{u}(r) - \overline{v}(r)|\}. \qquad (8)$$

Remark (3) [2]: $(E^1, D)$ is a complete metric space.
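As a quick illustration, the metric $D$ of eq. (8) can be approximated by taking the supremum over a finite r-grid; this sketch (an assumption, not the authors' code) represents a fuzzy number in parametric form by a pair of callables $r \mapsto \underline{u}(r)$ and $r \mapsto \overline{u}(r)$.

```python
import numpy as np

def fuzzy_distance(u_lower, u_upper, v_lower, v_upper, n=101):
    # sup over r in [0,1] of max(|u_lower - v_lower|, |u_upper - v_upper|),
    # approximated on an n-point grid.
    r = np.linspace(0.0, 1.0, n)
    d_lower = np.abs(u_lower(r) - v_lower(r))
    d_upper = np.abs(u_upper(r) - v_upper(r))
    return float(np.maximum(d_lower, d_upper).max())

# Distance between the triangular numbers (0, 1, 2) and (0.5, 1.5, 2.5),
# written in parametric form; the result is 0.5.
d = fuzzy_distance(lambda r: 0.0 + r, lambda r: 2.0 - r,
                   lambda r: 0.5 + r, lambda r: 2.5 - r)
```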
Remark (4) [1]: The operations on fuzzy numbers (in parametric form) can be generalized from those on crisp intervals. Let us have a look at the operations of intervals. Let $A = [a_1, a_3]$ and $B = [b_1, b_3]$, with $a_1 \le a_3$ and $b_1 \le b_3$.

Assuming $A$ and $B$ are numbers expressed as intervals, the main operations on intervals are:

1) Addition: $A + B = [a_1 + b_1,\ a_3 + b_3]$.

2) Subtraction: $A - B = [a_1 - b_3,\ a_3 - b_1]$.

3) Multiplication:

$$A \cdot B = [\min(a_1 b_1, a_1 b_3, a_3 b_1, a_3 b_3),\ \max(a_1 b_1, a_1 b_3, a_3 b_1, a_3 b_3)].$$

4) Division: $A / B = [\min(a_1/b_1, a_1/b_3, a_3/b_1, a_3/b_3),\ \max(a_1/b_1, a_1/b_3, a_3/b_1, a_3/b_3)]$, excluding the case $b_1 = 0$ or $b_3 = 0$.

5) Inverse: $A^{-1} = [\min(1/a_1, 1/a_3),\ \max(1/a_1, 1/a_3)]$, excluding the case $a_1 = 0$ or $a_3 = 0$.

In the case of $0 \le a_1 \le a_3$ and $0 \le b_1 \le b_3$, the multiplication operation can be simplified as

$$A \cdot B = [a_1 b_1,\ a_3 b_3].$$

When the previous sets $A$ and $B$ are defined in the positive real numbers $\mathbb{R}^+$, the operations of multiplication, division and inverse are written as:

3') Multiplication: $A \cdot B = [a_1 b_1,\ a_3 b_3]$.

4') Division: $A / B = [a_1 / b_3,\ a_3 / b_1]$.

5') Inverse: $A^{-1} = [1 / a_3,\ 1 / a_1]$.
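The interval operations of Remark (4) translate directly into code; the following is a minimal sketch (an assumption, not part of the paper) with intervals represented as `(lo, hi)` tuples.

```python
def iadd(a, b):
    # Addition: [a1 + b1, a3 + b3].
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    # Subtraction: [a1 - b3, a3 - b1].
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    # Multiplication: min/max over the four endpoint products.
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def idiv(a, b):
    # Division, excluding the case where an endpoint of b is 0.
    if b[0] == 0 or b[1] == 0:
        raise ZeroDivisionError("division undefined when an endpoint of b is 0")
    p = (a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1])
    return (min(p), max(p))

# For positive intervals the general rule reduces to [a1*b1, a3*b3], as in 3').
assert imul((1, 2), (3, 4)) == (3, 8)
```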
Definition (4) [20]: Triangular Fuzzy Number

Among the various shapes of fuzzy numbers, the triangular fuzzy number is the most popular one. A triangular fuzzy number is a fuzzy number represented by three points as $\tilde{A} = (a_1, a_2, a_3)$, where $a_1 \le a_2 \le a_3$.

This representation is interpreted as the membership function

$$\mu_{\tilde{A}}(x) = \begin{cases} 0, & x < a_1, \\ \dfrac{x - a_1}{a_2 - a_1}, & a_1 \le x \le a_2, \\ \dfrac{a_3 - x}{a_3 - a_2}, & a_2 \le x \le a_3, \\ 0, & x > a_3. \end{cases} \qquad (9)$$

Now, if we obtain a crisp interval by the r-cut operation, the interval $[\tilde{A}]_r$ is obtained as follows. From

$$\frac{\underline{a}^{(r)} - a_1}{a_2 - a_1} = r, \quad \frac{a_3 - \overline{a}^{(r)}}{a_3 - a_2} = r,$$

we get $\underline{a}^{(r)} = a_1 + (a_2 - a_1) r$ and $\overline{a}^{(r)} = a_3 - (a_3 - a_2) r$.

Thus

$$[\tilde{A}]_r = [a_1 + (a_2 - a_1) r,\ a_3 - (a_3 - a_2) r], \qquad (10)$$

which is the parametric form of the triangular fuzzy number $\tilde{A} = (a_1, a_2, a_3)$.
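Eq. (10) is easy to check numerically; here is a minimal sketch (an assumption, not the authors' code) of the r-cut of a triangular fuzzy number.

```python
def triangular_r_cut(a1, a2, a3, r):
    # Eq. (10): the crisp interval [a1 + (a2 - a1) r, a3 - (a3 - a2) r].
    return (a1 + (a2 - a1) * r, a3 - (a3 - a2) * r)

# r = 0 recovers the support [a1, a3]; r = 1 collapses to the peak a2.
assert triangular_r_cut(0.0, 1.0, 2.0, 0.0) == (0.0, 2.0)
assert triangular_r_cut(0.0, 1.0, 2.0, 1.0) == (1.0, 1.0)
```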
Definition (5) [19]: Fuzzy Function

A classical function $F: X \to Y$ maps a fuzzy domain $\tilde{A} \subset X$ into a fuzzy range $\tilde{B} \subset Y$ if and only if $\mu_{\tilde{B}}(F(x)) \ge \mu_{\tilde{A}}(x)$ for each $x \in X$.
Remark (5) [18]:

(1) The function $F$ above is called a fuzzy function.

(2) We also call every function defined from a set $X$ into the fuzzy numbers $E^1$ a fuzzy function.
Definition (6) [18]: The fuzzy function $F$ is said to be continuous if: for an arbitrary fixed $t_0$ and $\epsilon > 0$ there exists a $\delta > 0$ such that

$$|t - t_0| < \delta \implies D(F(t), F(t_0)) < \epsilon,$$

where $D$ is the distance between two fuzzy numbers.
Definition (7) [18]: Let $I$ be a real interval. The r-level set of the fuzzy function $y: I \to E^1$ can be denoted by

$$[y(t)]_r = [\underline{y}(t; r), \overline{y}(t; r)], \quad t \in I,\ r \in (0, 1]. \qquad (11)$$

The Seikkala derivative $y'(t)$ of the fuzzy function $y(t)$ is defined by

$$[y'(t)]_r = [\underline{y}'(t; r), \overline{y}'(t; r)], \quad t \in I,\ r \in (0, 1], \qquad (12)$$

provided that this equation defines a fuzzy number $y'(t) \in E^1$.
Definition (8) [18]: Let $u, v \in E^1$. If there exists $w \in E^1$ such that $u = v + w$, then $w$ is called the H-difference (Hukuhara difference) of $u$ and $v$, and it is denoted by $u \ominus v$.

In this work the sign $\ominus$ always stands for the H-difference; let us remark that, in general, $u \ominus v \neq u + (-1)v$.
Definition (9) [22, 23]: H-Differentiability

Let $F: (a, b) \to E^1$ and $t_0 \in (a, b)$. We say that $F$ is H-differentiable (Hukuhara-differentiable) at $t_0$ if there exists an element $F'(t_0) \in E^1$ such that for all $h > 0$ (sufficiently small), the H-differences $F(t_0 + h) \ominus F(t_0)$ and $F(t_0) \ominus F(t_0 - h)$ exist and the limits (in the metric D) satisfy

$$\lim_{h \to 0^+} \frac{F(t_0 + h) \ominus F(t_0)}{h} = \lim_{h \to 0^+} \frac{F(t_0) \ominus F(t_0 - h)}{h} = F'(t_0); \qquad (13)$$

then $F'(t_0)$ is called the fuzzy derivative (H-derivative) of $F$ at $t_0$, where $D$ is the distance between two fuzzy numbers.

It is necessary to note that Definition (9) is the classical definition of the H-derivative (differentiability in the sense of Hukuhara).
Definition (10) [22, 23]: Generalized H-Differentiability

Let $F: (a, b) \to E^1$ and $t_0 \in (a, b)$. We say that $F$ is differentiable at $t_0$ if:

(1) there exists an element $F'(t_0) \in E^1$ such that for all $h > 0$ sufficiently small, the H-differences $F(t_0 + h) \ominus F(t_0)$ and $F(t_0) \ominus F(t_0 - h)$ exist and the limits (in the metric D) satisfy

$$\lim_{h \to 0^+} \frac{F(t_0 + h) \ominus F(t_0)}{h} = \lim_{h \to 0^+} \frac{F(t_0) \ominus F(t_0 - h)}{h} = F'(t_0) \qquad (14)$$

(in this case, F is called (1)-differentiable), or

(2) there exists an element $F'(t_0) \in E^1$ such that for all $h > 0$ sufficiently small, the H-differences $F(t_0) \ominus F(t_0 + h)$ and $F(t_0 - h) \ominus F(t_0)$ exist and the limits (in the metric D) satisfy

$$\lim_{h \to 0^+} \frac{F(t_0) \ominus F(t_0 + h)}{-h} = \lim_{h \to 0^+} \frac{F(t_0 - h) \ominus F(t_0)}{-h} = F'(t_0) \qquad (15)$$

(in this case, F is called (2)-differentiable),

where form (1) is the classical definition of the H-derivative.
Theorem (1): Let $F: I \to E^1$ be a function and denote $[F(t)]_r = [\underline{f}(t; r), \overline{f}(t; r)]$ for each $r \in [0, 1]$. Then:

(i) If F is differentiable in the first form (1) of Definition (10), then $\underline{f}(t; r)$ and $\overline{f}(t; r)$ are differentiable functions and

$$[F'(t)]_r = [\underline{f}'(t; r), \overline{f}'(t; r)].$$

(ii) If F is differentiable in the second form (2) of Definition (10), then $\underline{f}(t; r)$ and $\overline{f}(t; r)$ are differentiable functions and

$$[F'(t)]_r = [\overline{f}'(t; r), \underline{f}'(t; r)].$$

Proof: see [22].
Artificial neural networks (ANNs) are learning machines that can learn an arbitrary functional mapping between input and output. They are fast machines and can be implemented in parallel, in either software or hardware; in fact, the computational complexity of an ANN is polynomial in the number of neurons used in the network. Parallelism also brings with it the advantages of robustness and fault tolerance. An ANN is a simplified mathematical model of the human brain. It can be implemented by both electronic elements and computer software, and it is a parallel distributed processor with a large number of connections: an information processing system that has certain performance characteristics in common with biological neural networks. ANNs have been developed as generalizations of mathematical models of human cognition or neural biology, based on the following assumptions:
1) Information processing occurs at many simple elements called neurons; this is fundamental to the operation of ANNs.
2) Signals are passed between neurons over connection links.
3) Each connection link has an associated weight which, in a typical neural net, multiplies the signal transmitted.
4) Each neuron applies an activation function (usually nonlinear) to its net input (sum of weighted input signals) to determine its output signal.
Note: The units in a network are organized into a given topology by a set of connections, or weights, shown as lines in a diagram.
3.1. Characteristics of an Artificial Neural Network [10]

An ANN is characterized by:
1) Architecture: the pattern of connections between the neurons.
2) Training (learning) algorithm: the method of determining the weights on the connections.
3) Activation function: the output of a neuron depends on the neuron's input and on its activation function.
3.2. Typical Architecture of an ANN [10]

ANNs are often classified as single-layer or multilayer. In determining the number of layers, the input units are not counted as a layer, because they perform no computation. Equivalently, the number of layers in the net can be defined as the number of layers of weighted interconnection links between the slabs of neurons. This view is motivated by the fact that the weights in a net contain extremely important information.
3.3. The Bias [21]

In sections (3.4) and (3.5), we describe the main implementation of the back-propagation algorithm for the multilayer feed-forward neural network (FFNN). Most implementations of this algorithm employ an additional class of weights known as biases (Figure 1). Biases are values that are added to the sums calculated at each node (except input nodes) during the feed-forward phase. The negative of a bias is sometimes called a threshold. For simplicity, biases are commonly visualized simply as values associated with each node in the intermediate and output layers of a network, but in practice they are treated in exactly the same manner as other weights, with all biases simply being weights associated with vectors that lead from a single node whose location is outside of the main network.
3.4. Multilayer Feed-Forward Neural Networks [21]

In a layered neural network the neurons are organized in the form of layers. We have at least two layers: an input and an output layer. The layers between the input and the output layer (if any) are called hidden layers; their computation nodes are correspondingly called hidden neurons or hidden units. Extra hidden neurons raise the network's ability to extract higher-order statistics from (input) data. The source nodes in the input layer of the network supply the respective elements of the activation pattern (input vector), which constitute the input signals applied to the neurons (computation nodes) in the second layer (i.e., the first hidden layer). The output signals of the second layer are used as inputs to the third layer, and so on for the rest of the network. A layer of nodes projects onto the next layer of neurons (computation nodes), but not vice versa; in other words, this network is a feed-forward neural network (Figure 1). That is, when no output of a neuron serves as input to a neuron of the same layer or of a preceding layer, the network is described as feed-forward; if there is at least one output connected as an input to neurons of previous layers or of the same layer (including themselves), the network is called a feedback network. Feedback networks that have at least one closed loop are called recurrent. The neurons in each layer of the network have as their inputs the output signals of the preceding layer only. The set of output signals of the neurons in the output (final) layer of the network constitutes the overall response of the network to the activation pattern supplied by the source nodes in the input (first) layer. An ANN is said to be totally connected when every node in each layer of the network is connected to every node in the adjacent forward layer; otherwise the network is called partially connected. In this work, a totally connected multilayer FFNN is used.
3.5. Back-Propagation Training Algorithm [21]

Training a network by back-propagation involves three stages:
1) The feed forward of the input training pattern.
2) The back propagation of the associated error.
3) The adjustment of the weights.
The term back-propagation refers to the process by which derivatives of the neural network error with respect to the neural network weights and biases are computed. This process can be used with a number of different optimization strategies. In other words, standard back-propagation is based on gradient descent; back-propagation is also known as the generalized delta rule. It is the most widely used supervised training algorithm for ANNs. Back-propagation is a well-known training method for multilayer FFNNs, and it has many industrial applications in function approximation, pattern association, and pattern classification. Because of its importance, we discuss it in some detail.
3.6. Activation Function [21]

The activation function (sometimes called a transfer function) can be a linear or nonlinear function. There are many different types of activation functions; the selection of one type over another depends on the particular problem that the neuron (or ANN) is to solve. The activation function, denoted by $\sigma(x)$, defines the output of a neuron; it is bounded, monotonically increasing, differentiable, and satisfies

$$\lim_{x \to +\infty} \sigma(x) = 1 \quad \text{and} \quad \lim_{x \to -\infty} \sigma(x) = 0 \ (\text{or } -1 \text{ for bipolar activation functions}).$$
The sigmoid function is by far the most common form of activation function used in the construction of ANNs. An example of the sigmoid function is the logistic function, whose range is from 0 to 1; an important feature of the sigmoid function is that it is differentiable.
It is sometimes desirable to have the activation function range from -1 to 1, allowing an activation function of the sigmoid type to assume negative values; an example is the hyperbolic tangent function, which is a smooth function.
Throughout this work, we take $\sigma(x) = \tanh(x)$ as the activation function, based on the results of [21], which give evidence that the hyperbolic tangent transfer function enables the training algorithm to learn faster.
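For reference, the two activation functions mentioned above, together with the derivatives that back-propagation needs, can be sketched as follows (an illustrative sketch, not the authors' code):

```python
import numpy as np

def logistic(x):
    # Logistic sigmoid, range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def logistic_prime(x):
    # sigma'(x) = sigma(x) * (1 - sigma(x)).
    s = logistic(x)
    return s * (1.0 - s)

def tanh_prime(x):
    # tanh'(x) = 1 - tanh(x)^2; np.tanh itself is the activation used in this work.
    return 1.0 - np.tanh(x) ** 2
```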
Theorem (2): Universal Approximation Theorem

A multilayer perceptron (MLP) network with one hidden layer of sigmoid units and linear transformation functions in the output layer is able to approximate any square-integrable function to any degree of accuracy (see [3]).
A fuzzy differential equation of the first order is of the form

$$y'(t) = f(t, y(t)), \qquad (16)$$

with the fuzzy initial condition $y(t_0) = y_0$, where $y(t)$ is a fuzzy function of $t$, $f(t, y)$ is a fuzzy function of the crisp variable $t$ and the fuzzy variable $y$, while $y'(t)$ is the fuzzy derivative of $y(t)$ (which, according to our proposed method, we consider in the second form (2) of Definition (10)), and $y_0$ is a fuzzy number.

It is clear that the fuzzy function $y$ is the mapping $y: \mathbb{R} \to E^1$ [18].

Now it is possible to replace (16) by the following equivalent system:

$$\begin{aligned} \underline{y}'(t) &= \underline{f}(t, y) \equiv F(t, \underline{y}, \overline{y}), & \underline{y}(t_0) &= \underline{y}_0, \\ \overline{y}'(t) &= \overline{f}(t, y) \equiv G(t, \underline{y}, \overline{y}), & \overline{y}(t_0) &= \overline{y}_0, \end{aligned} \qquad (17)$$

where

$$\begin{aligned} F(t, \underline{y}, \overline{y}) &= \min\{f(t, u) : u \in [\underline{y}, \overline{y}]\}, \\ G(t, \underline{y}, \overline{y}) &= \max\{f(t, u) : u \in [\underline{y}, \overline{y}]\}. \end{aligned} \qquad (18)$$
The parametric form of system (17) is given by

$$\begin{aligned} \underline{y}'(t; r) &= F(t, \underline{y}(t; r), \overline{y}(t; r)), & \underline{y}(t_0; r) &= \underline{y}_0(r), \\ \overline{y}'(t; r) &= G(t, \underline{y}(t; r), \overline{y}(t; r)), & \overline{y}(t_0; r) &= \overline{y}_0(r), \end{aligned} \qquad (19)$$

where $t \in [t_0, T]$ and $r \in [0, 1]$. Now, with a discretization of the interval $[t_0, T]$, a set of points $t_0 < t_1 < \cdots < t_n = T$ is obtained. Thus, for an arbitrary $t_i$, the system (19) can be rewritten as

$$\begin{aligned} \underline{y}'(t_i; r) &= F(t_i, \underline{y}(t_i; r), \overline{y}(t_i; r)), \\ \overline{y}'(t_i; r) &= G(t_i, \underline{y}(t_i; r), \overline{y}(t_i; r)), \end{aligned} \qquad (20)$$

with the initial conditions $\underline{y}(t_0; r) = \underline{y}_0(r)$, $\overline{y}(t_0; r) = \overline{y}_0(r)$.
In this work, the function approximation capabilities of feed-forward neural networks are used by expressing the trial solution for the system (19) as the sum of two terms (see eq. (22)). The first term satisfies the initial/boundary conditions and contains no adjustable parameters. The second term involves a feed-forward neural network to be trained so as to satisfy the fuzzy differential equations. Since it is known that a multilayer perceptron with one hidden layer can approximate any function to arbitrary accuracy, the multilayer perceptron is used as the network architecture.
Let $\underline{y}_T(t; r, \underline{p})$ be a trial solution for the first equation in system (19) and $\overline{y}_T(t; r, \overline{p})$ be a trial solution for the second equation in system (19), where $\underline{p}$ and $\overline{p}$ are adjustable parameters. Indeed, $\underline{y}_T$ and $\overline{y}_T$ are approximations of $\underline{y}(t; r)$ and $\overline{y}(t; r)$, respectively; then a discretized version of the system (19) can be converted to the following optimization problem:

$$\min_{p} \sum_{i} \left[ \left( \frac{d\underline{y}_T(t_i; r, \underline{p})}{dt} - F\big(t_i, \underline{y}_T(t_i; r, \underline{p}), \overline{y}_T(t_i; r, \overline{p})\big) \right)^{2} + \left( \frac{d\overline{y}_T(t_i; r, \overline{p})}{dt} - G\big(t_i, \underline{y}_T(t_i; r, \underline{p}), \overline{y}_T(t_i; r, \overline{p})\big) \right)^{2} \right] \qquad (21)$$

(here $p$ contains all adjustable parameters), subject to the initial conditions

$$\underline{y}_T(t_0; r, \underline{p}) = \underline{y}_0(r), \quad \overline{y}_T(t_0; r, \overline{p}) = \overline{y}_0(r).$$
Each trial solution $\underline{y}_T$ and $\overline{y}_T$ employs one feed-forward neural network, for which the corresponding networks are denoted by $\underline{N}(t, r, \underline{p})$ and $\overline{N}(t, r, \overline{p})$ with adjustable parameters $\underline{p}$ and $\overline{p}$, respectively. The trial solutions $\underline{y}_T$ and $\overline{y}_T$ should satisfy the initial conditions, and the networks must be trained to satisfy the differential equations. Thus $\underline{y}_T$ and $\overline{y}_T$ can be chosen as follows:

$$\begin{aligned} \underline{y}_T(t; r, \underline{p}) &= \underline{y}_0(r) + (t - t_0)\, \underline{N}(t, r, \underline{p}), \\ \overline{y}_T(t; r, \overline{p}) &= \overline{y}_0(r) + (t - t_0)\, \overline{N}(t, r, \overline{p}), \end{aligned} \qquad (22)$$

where $\underline{N}$ and $\overline{N}$ are single-output feed-forward neural networks with adjustable parameters $\underline{p}$ and $\overline{p}$, respectively. Here $t$ and $r$ are the network inputs. It is easy to see that in (22), $\underline{y}_T$ and $\overline{y}_T$ satisfy the initial conditions.
Thus the corresponding error function that must be minimized over all adjustable neural network parameters will be:

$$E = \sum_{i} \left[ \left( \frac{d\underline{y}_T(t_i; r, \underline{p})}{dt} - F\big(t_i, \underline{y}_T, \overline{y}_T\big) \right)^{2} + \left( \frac{d\overline{y}_T(t_i; r, \overline{p})}{dt} - G\big(t_i, \underline{y}_T, \overline{y}_T\big) \right)^{2} \right], \qquad (23)$$

where the $t_i$'s are points in $[t_0, T]$.
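To fix ideas, the trial solutions (22) and the error function (23) can be sketched as follows for a fixed r-level. This is an illustrative sketch under stated assumptions, not the authors' implementation: `net` stands for a single-output network $N(t, r, p)$, `y0_lo`/`y0_hi` for $\underline{y}_0(r)$/$\overline{y}_0(r)$, and the time derivative of the trial solution is approximated here by a forward difference to keep the example short.

```python
def trial_lo(t, r, p_lo, y0_lo, t0, net):
    # Eq. (22), lower branch: y0(r) + (t - t0) * N(t, r, p).
    return y0_lo(r) + (t - t0) * net(t, r, p_lo)

def trial_hi(t, r, p_hi, y0_hi, t0, net):
    # Eq. (22), upper branch.
    return y0_hi(r) + (t - t0) * net(t, r, p_hi)

def error(ts, r, p_lo, p_hi, y0_lo, y0_hi, t0, net, F, G, h=1e-5):
    # Eq. (23): sum of squared residuals of system (19) at the training points ts.
    e = 0.0
    for t in ts:
        lo = trial_lo(t, r, p_lo, y0_lo, t0, net)
        hi = trial_hi(t, r, p_hi, y0_hi, t0, net)
        dlo = (trial_lo(t + h, r, p_lo, y0_lo, t0, net) - lo) / h  # d(y_T)/dt
        dhi = (trial_hi(t + h, r, p_hi, y0_hi, t0, net) - hi) / h
        e += (dlo - F(t, lo, hi)) ** 2 + (dhi - G(t, lo, hi)) ** 2
    return e
```

In practice the derivative of the trial solution can be computed exactly, since (22) is differentiable in closed form; the finite difference above merely keeps the sketch short.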
For solving the FDE described in this subsection we use two ANNs, each of the same dimensions: two input units $t$ and $r$, one hidden layer with $m$ units, and one linear output unit.

For every pair of entries $t$ and $r$, the input neurons make no changes to their inputs, so the net inputs to the hidden neurons are

$$\underline{net}_j = \underline{w}_j t + \underline{u}_j r + \underline{b}_j, \quad \overline{net}_j = \overline{w}_j t + \overline{u}_j r + \overline{b}_j, \quad j = 1, \ldots, m, \qquad (24)$$

where $\underline{w}_j$ and $\underline{u}_j$ are the weight parameters from the input layer to the $j$th unit in the hidden layer in the first network, $\overline{w}_j$ and $\overline{u}_j$ are the weight parameters from the input layer to the $j$th unit in the hidden layer in the second network, and $\underline{b}_j$ and $\overline{b}_j$ are the biases for the $j$th units in the hidden layers in the first and second networks.

The outputs of the hidden neurons are

$$\underline{z}_j = \sigma(\underline{net}_j), \quad \overline{z}_j = \sigma(\overline{net}_j). \qquad (25)$$

The output neurons make no changes to their inputs, so the inputs to the output neurons are equal to the outputs:

$$\underline{N} = \sum_{j=1}^{m} \underline{v}_j \underline{z}_j, \quad \overline{N} = \sum_{j=1}^{m} \overline{v}_j \overline{z}_j, \qquad (26)$$

where $\underline{v}_j$ and $\overline{v}_j$ are the weight parameters from the $j$th units in the hidden layers to the output layer in the first and second networks.
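Eqs. (24)-(26) amount to a small forward pass; a minimal sketch (an assumption, not the authors' code) of one of the two single-output networks is:

```python
import numpy as np

def init_params(m, rng):
    # w, u: input-to-hidden weights for t and r; b: hidden biases;
    # v: hidden-to-output weights. One array of length m for each.
    return {k: rng.normal(size=m) for k in ("w", "u", "b", "v")}

def network(t, r, p):
    net = p["w"] * t + p["u"] * r + p["b"]  # eq. (24): net input of each hidden unit
    z = np.tanh(net)                        # eq. (25): hidden outputs, tanh activation
    return float(np.dot(p["v"], z))         # eq. (26): linear output N(t, r, p)

rng = np.random.default_rng(0)
p_lo, p_hi = init_params(10, rng), init_params(10, rng)  # ten hidden units, as in the example below
value = network(0.5, 0.25, p_lo)
```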
The solution of the fuzzy differential equation (16) depends on the choice of the derivative: in the first form or in the second form of Definition (10).
Let us explain the proposed method. If we denote

$$[y(t)]_r = [\underline{y}(t; r), \overline{y}(t; r)], \quad [y_0]_r = [\underline{y}_0(r), \overline{y}_0(r)]$$

and

$$[f(t, y(t))]_r = [\underline{f}(t, y(t); r), \overline{f}(t, y(t); r)], \qquad (27)$$

we have the following results:

Case I. If we consider $y'(t)$ in the first form (1) of Definition (10), then we have to solve the following system of ODEs:

$$\begin{cases} \underline{y}'(t; r) = \underline{f}(t, y(t); r), & \underline{y}(t_0; r) = \underline{y}_0(r), \\ \overline{y}'(t; r) = \overline{f}(t, y(t); r), & \overline{y}(t_0; r) = \overline{y}_0(r). \end{cases}$$

Case II. If we consider $y'(t)$ in the second form (2) of Definition (10), then we have to solve the following system of ODEs:

$$\begin{cases} \underline{y}'(t; r) = \overline{f}(t, y(t); r), & \underline{y}(t_0; r) = \underline{y}_0(r), \\ \overline{y}'(t; r) = \underline{f}(t, y(t); r), & \overline{y}(t_0; r) = \overline{y}_0(r). \end{cases}$$
The existence and uniqueness of the two solutions (for problem (16)) described above are given by the following theorem.

Theorem (3): Let $f: \mathbb{R} \times E^1 \to E^1$ be a continuous fuzzy function such that there exists $k > 0$ with $D(f(t, u), f(t, v)) \le k\, D(u, v)$ for all $t \in \mathbb{R}$ and $u, v \in E^1$. Then the problem (16) has two solutions (one (1)-differentiable and the other one (2)-differentiable) on an interval $[t_0, t_0 + \eta]$ for some $\eta > 0$.

Proof: see [23].
To illustrate how we can find the two solutions for a fuzzy differential equation under the generalized H-derivation, we present the following example.

Consider the fuzzy initial value problem:

(equation not preserved in the source)

(1) According to subsection (4.2), Case I, after reducing the above problem we have the following system of ODEs:

(equation not preserved in the source)

which gives the following fuzzy analytical solution:

(equation not preserved in the source)

(2) According to subsection (4.2), Case II, after reducing the above problem we have the following system of ODEs:

(equation not preserved in the source)

which gives the following fuzzy analytical solution:

(equation not preserved in the source)
To show the behavior and properties of the proposed method, one problem is solved in this section. We have used a multilayer perceptron having one hidden layer with ten hidden units and one output unit, with the hyperbolic tangent as the activation function of each hidden unit. The analytical solutions $\underline{y}(t; r)$ and $\overline{y}(t; r)$ are known in advance; therefore, we test the accuracy of the obtained solutions by computing the deviation (absolute error)

$$\underline{e}(t; r) = |\underline{y}(t; r) - \underline{y}_T(t; r)|, \quad \overline{e}(t; r) = |\overline{y}(t; r) - \overline{y}_T(t; r)|,$$

where $\underline{y}_T$ and $\overline{y}_T$ are the trial solutions.
In order to obtain better results, more hidden units or more training points may be used. To minimize the error function we have used the BFGS quasi-Newton method (for more details, see [21]).
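A training loop then reduces to handing the error function to a quasi-Newton routine. The following sketch (an assumption, not the authors' code) uses `scipy.optimize.minimize` with `method="BFGS"`, reusing the illustrative `network` and `error` helpers sketched earlier; `y0_lo`, `y0_hi`, `t0`, `F` and `G` come from the particular fuzzy IVP being solved.

```python
import numpy as np
from scipy.optimize import minimize

KEYS = ("w", "u", "b", "v")

def pack(p_lo, p_hi):
    # Flatten both networks' parameter dicts into one vector for the optimizer.
    return np.concatenate([p_lo[k] for k in KEYS] + [p_hi[k] for k in KEYS])

def unpack(theta, m):
    parts = theta.reshape(8, m)
    return dict(zip(KEYS, parts[:4])), dict(zip(KEYS, parts[4:]))

def objective(theta, ts, r, m, y0_lo, y0_hi, t0, F, G):
    p_lo, p_hi = unpack(theta, m)
    return error(ts, r, p_lo, p_hi, y0_lo, y0_hi, t0, network, F, G)

# Example call, for a given r-level and training grid ts:
# theta0 = pack(p_lo, p_hi)
# result = minimize(objective, theta0, args=(ts, r, 10, y0_lo, y0_hi, t0, F, G),
#                   method="BFGS")
# p_lo, p_hi = unpack(result.x, 10)
```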
Example (1): Consider the following fuzzy initial value problem:

(problem statement and its parameters, given as images in the source, not preserved)

The analytical solutions (according to subsection (4.2), Case II) for this problem are:

(equation not preserved in the source)

The trial solutions (according to the method proposed in this work) for this problem are:

(equation not preserved in the source)
The ANN was trained using a grid of ten equidistant points in the training interval.
The error function that must be minimized for this problem is:

(equation not preserved in the source) (28)

We then use (28) to update the weights and biases. Analytical and trial solutions for this problem can be found in Table 1 and Table 2.
In this paper, we have presented a numerical method based on artificial neural networks for solving first-order fuzzy initial value problems under the generalized H-derivation. The method we have used allows us to translate the FDE into a system of ODEs and then solve this system, and we have demonstrated the ability of ANNs to approximate the solutions of FDEs. We can therefore conclude that the proposed method can handle all types of FDEs effectively and provide an accurate approximate solution throughout the whole domain, not only at the training set. As well, one can use interpolation techniques to find the approximate solution at points between the training points or at points outside the training set. Further research is in progress to apply and extend this method to solve higher-order FDEs.
[1] Mosleh M., Otadi M., "Simulation and Evaluation of Fuzzy Differential Equations by Fuzzy Neural Network", Applied Soft Computing, 12, 2817-2827, 2012.
[2] Buckley J. J., Feuring T., "Fuzzy Differential Equations", Fuzzy Sets and Systems, 110, 69-77, 2000.
[3] Lee H., Kang I. S., "Neural Algorithms for Solving Differential Equations", Journal of Computational Physics, 91, 110-131, 1990.
[4] Meade A. J., Fernandes A. A., "The Numerical Solution of Linear Ordinary Differential Equations by Feed-Forward Neural Networks", Mathematical and Computer Modelling, Vol. 19, No. 12, 1-25, 1994.
[5] Meade A. J., Fernandes A. A., "Solution of Nonlinear Ordinary Differential Equations by Feed-Forward Neural Networks", Mathematical and Computer Modelling, Vol. 20, No. 9, 19-44, 1994.
[6] Lagaris I. E., Likas A., et al., "Artificial Neural Networks for Solving Ordinary and Partial Differential Equations", Journal of Computational Physics, 104, 1-26, 1997.
[7] Lagaris I. E., Likas A., et al., "Artificial Neural Networks for Solving Ordinary and Partial Differential Equations", IEEE Transactions on Neural Networks, Vol. 9, No. 5, 987-1000, 1998.
[8] Liu B., Jammes B., "Solving Ordinary Differential Equations by Neural Networks", Warsaw, Poland, 1999.
[9] Alli H., Ucar A., et al., "The Solutions of Vibration Control Problems Using Artificial Neural Networks", Journal of the Franklin Institute, 340, 307-325, 2003.
[10] Tawfiq L. N. M., "On Design and Training of Artificial Neural Network for Solving Differential Equations", Ph.D. Thesis, College of Education Ibn Al-Haitham, University of Baghdad, Iraq, 2004.
[11] Malek A., Shekari R., "Numerical Solution for High Order Differential Equations by Using a Hybrid Neural Network Optimization Method", Applied Mathematics and Computation, 183, 260-271, 2006.
[12] Pattanaik S., Mishra R. K., "Application of ANN for Solution of PDE in RF Engineering", International Journal on Information Sciences and Computing, Vol. 2, No. 1, 74-79, 2008.
[13] Baymani M., Kerayechian A., et al., "Artificial Neural Networks Approach for Solving Stokes Problem", Applied Mathematics, 1, 288-292, 2010.
[14] Oraibi Y. A., "Design Feed-Forward Neural Networks for Solving Ordinary Initial Value Problem", M.Sc. Thesis, College of Education Ibn Al-Haitham, University of Baghdad, Iraq, 2011.
[15] Ali M. H., "Design Fast Feed-Forward Neural Networks to Solve Two Point Boundary Value Problems", M.Sc. Thesis, College of Education Ibn Al-Haitham, University of Baghdad, Iraq, 2012.
[16] Hussein A. A. T., "Design Fast Feed-Forward Neural Networks to Solve Singular Boundary Value Problems", M.Sc. Thesis, College of Education Ibn Al-Haitham, University of Baghdad, Iraq, 2013.
[17] Tawfiq L. N. M., Al-Abrahemee K. M. M., "Design Neural Network to Solve Singular Perturbation Problems", Applied and Computational Mathematics, Vol. 3, No. 3, 1-5, 2014.
[18] Effati S., Pakdaman M., "Artificial Neural Network Approach for Solving Fuzzy Differential Equations", Information Sciences, 180, 1434-1457, 2010.
[19] Mosleh M., Otadi M., "Fuzzy Fredholm Integro-Differential Equations with Artificial Neural Networks", Communications in Numerical Analysis, Article ID cna-00128, 1-13, 2012.
[20] Ezadi S., Parandin N., et al., "Numerical Solution of Fuzzy Differential Equations Based on Semi-Taylor by Using Neural Network", Journal of Basic and Applied Scientific Research, 3(1s), 477-482, 2013.
[21] Suhhiem M. H., "Fuzzy Artificial Neural Network for Solving Fuzzy and Non-Fuzzy Differential Equations", Ph.D. Thesis, College of Sciences, Al-Mustansiriyah University, Iraq, 2016.
[22] Chalco-Cano Y., Roman-Flores H., "On New Solutions of Fuzzy Differential Equations", Chaos, Solitons and Fractals, 38, 112-119, 2008.
[23] Chalco-Cano Y., Roman-Flores H., et al., "Fuzzy Differential Equations with Generalized Derivative", Fuzzy Sets and Systems, 160, 1517-1527, 2008.
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.