Feed-forward networks, or Multi-Layer Perceptrons (MLPs), have a layered structure in which all connections feed forward from the input to the output; these networks are widely used for data classification. In a neural network, an error function measures the network's performance during training, and iterative training algorithms use the derivative of that error function to update the weights. Error functions are chosen for data classification after analyzing their mathematical properties; the most common are the Mean Square Error (MSE) function and the Cross-Entropy (CE) cost function. The MSE error function E(y, t) is the best known and most widely used; however, it is not considered the most appropriate error function for solving data classification problems.
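The two standard cost functions named above can be sketched as follows. This is a minimal illustration, assuming the usual definitions of MSE over outputs y and targets t, and of CE for binary targets with sigmoid-style outputs in (0, 1); the function names are ours, not from the original work.

```python
import numpy as np

def mse(y, t):
    """Mean Square Error E(y, t): mean squared difference
    between network outputs y and targets t."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    return np.mean((y - t) ** 2)

def cross_entropy(y, t, eps=1e-12):
    """Cross-Entropy cost for binary targets t in {0, 1} and
    outputs y in (0, 1); eps guards against log(0)."""
    y = np.clip(np.asarray(y, float), eps, 1 - eps)
    t = np.asarray(t, float)
    return -np.mean(t * np.log(y) + (1 - t) * np.log(1 - y))
```

For example, `mse([0.9, 0.1], [1, 0])` evaluates to 0.01, while the corresponding cross-entropy is -log(0.9).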
New learning approaches have been introduced with a focus on MLP data classification: the Zero-Error Density Maximization (Z-EDM) algorithm and two parameterized error functions, ESMF and EEXP. The Z-EDM algorithm uses the error density at the origin as its cost function, which can improve overall performance. Z-EDM was motivated by entropic error measures and can easily be used within the standard backpropagation framework. ESMF is a monotonic error function, while EEXP is an exponential-type error function. EEXP is similar to the traditional error functions, except that by adjusting a single parameter it can emulate an unlimited family of error functions with different weighting behavior of the error gradient.
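The two new cost functions can be sketched in code. This is a hedged sketch, not the paper's exact formulation: we assume the exponential form E_EXP(e) = tau * sum(exp(e_i^2 / tau)) for EEXP, and a Gaussian Parzen-window estimate of the error density at e = 0 for the Z-EDM objective; the parameter names `tau` and `h` and both function names are our illustrative choices.

```python
import numpy as np

def e_exp(y, t, tau):
    """Assumed exponential error function:
    E_EXP = tau * sum(exp(e_i^2 / tau)), with e = t - y.
    Adjusting the single parameter tau changes how strongly
    large errors are weighted, emulating a whole family of
    error functions."""
    e = np.asarray(t, float) - np.asarray(y, float)
    return tau * np.sum(np.exp(e ** 2 / tau))

def zedm_density_at_zero(y, t, h=1.0):
    """Assumed Z-EDM objective: Gaussian Parzen-window estimate
    of the error density at e = 0 (to be MAXIMIZED, unlike the
    usual error functions, which are minimized)."""
    e = np.asarray(t, float) - np.asarray(y, float)
    n = e.size
    return np.sum(np.exp(-e ** 2 / (2 * h ** 2))) / (n * h * np.sqrt(2 * np.pi))
```

Note the sign convention: training with Z-EDM pushes the error distribution to concentrate at zero by maximizing this density estimate, whereas E_EXP is minimized like MSE or CE.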
MLPs trained with EEXP yield better results than those trained with the MSE or CE error functions, so EEXP constitutes an improvement over those error functions. The backpropagation algorithm can use the EEXP parameter with minimal added mathematical complexity, which makes it practical for data classification. The EEXP parameter still needs further investigation to determine the theoretical and practical aspects of learning rates and mathematical optimization.
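The claim of minimal added complexity in backpropagation can be illustrated with the output-layer gradient. Under the same assumed form E_EXP = tau * sum(exp(e_i^2 / tau)) used above (a sketch, not the paper's exact expression), the derivative with respect to the outputs differs from the MSE case by a single extra factor:

```python
import numpy as np

def e_exp_grad(y, t, tau):
    """Gradient of the assumed E_EXP = tau * sum(exp(e^2 / tau))
    with respect to the outputs y, where e = t - y:
        dE/dy_i = -2 * e_i * exp(e_i^2 / tau).
    As tau -> +inf, exp(e^2 / tau) -> 1 and the gradient
    approaches the (sum-form) MSE gradient -2 * e, while small
    positive tau amplifies the gradient on large errors."""
    e = np.asarray(t, float) - np.asarray(y, float)
    return -2.0 * e * np.exp(e ** 2 / tau)
```

Because only this scalar factor changes, the rest of the backpropagation machinery (the layer-by-layer chain rule through the weights) is untouched by the choice of tau.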