In today’s world, data analysis consists of several phases, including data collection, preprocessing, clustering, and pattern identification. In neural networks, we mainly focus on clustering and pattern recognition in data. Clustering refers to partitioning a data set into subsets (clusters) so that the data in each subset ideally share some common characteristics.
The neural network is based on the learning process of the brain, where learning means acquiring knowledge through study, experience, or instruction. Two types of learning are involved: supervised and unsupervised. Supervised learning involves discovering patterns in the data that relate data attributes to a target (class) attribute; a “teacher” tells the network what the correct output is for each input until the network learns the target concept. Unsupervised learning does not require a teacher while training the network. Here, the network learns a prototype based on the distribution of patterns in the training data; the data is explored to find intrinsic structures within it.
Kohonen’s Self-Organizing Maps
Kohonen’s self-organizing map (SOM) is a network built on the assumption that class membership is broadly defined: input patterns in a class share some common features, and the network is able to identify these features. The network tries to mimic a feature of the brain in which neurons tend to cluster in groups; the interactions of the neurons within a group are much greater than those with neurons outside the group. The SOM therefore considers the physical arrangement (the topology) of these neurons: neurons that are “close” together interact differently than neurons that are “far” apart. Thus, Kohonen SOMs are also known as topology-preserving maps, since a topological structure of the output neurons is assumed and this structure is maintained during the training process. Each output neuron is referred to as a cluster unit. At each step of the training process, the output neurons (cluster units) compete, with a single resulting winner. The weights connected to the winning neuron, as well as the weights connected to the neurons in its first and second neighborhoods, are updated in a similar fashion. The weights connecting the inputs to cluster unit j form its weight vector w.j, a column of the weight matrix W; there are M such weight vectors in total, one for each cluster unit. Each of these weight vectors serves as an exemplar of the input patterns associated with that cluster. Unless some prior information is known about the clusters, these weight vectors are typically initialized to random values. During the training process, each training vector is cyclically or randomly selected and presented to the network. The cluster unit j0 whose weight vector w.j0 most closely matches the input pattern x is chosen as the winner; two vectors are considered closest when the squared Euclidean distance between them, ||x − w.j0||^2, is smallest.
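The winner-selection step can be sketched as follows (a minimal illustration; the input pattern and weight values are made-up placeholders, not data from this report):

```python
import numpy as np

# Hypothetical 2-D input pattern and a set of M = 4 weight vectors
# (one row per cluster unit), chosen purely for illustration.
x = np.array([0.3, 0.7])
W = np.array([[0.1, 0.9],
              [0.4, 0.6],
              [0.8, 0.2],
              [0.5, 0.5]])

# Squared Euclidean distance ||x - w.j||^2 from x to each cluster unit.
d = np.sum((W - x) ** 2, axis=1)

# The winning unit j0 is the one whose weight vector is closest to x.
j0 = int(np.argmin(d))
```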
The winning unit and its neighboring units (those located in its first and second neighborhoods) then update their weight vectors according to the Kohonen rule. This process is continued until the weight vectors change by less than a preset amount.
- Initialize M weight vectors. Set topological neighborhood parameters and learning rate.
- For steps k = 1, 2, ..., cycle through the training set and do steps (a)–(d) until the weight vectors converge:
- (a) Set the input vector x = s(q), one of the training vectors.
- (b) For each cluster unit j = 1, ..., M, compute the squared Euclidean distance dj = ||x − w.j||^2 = Σ (xi − wij)^2, summed over i = 1, ..., N.
- (c) Find the index j0 such that dj0 is a minimum.
- (d) For all cluster units j within the specified neighborhoods of j0, update the weight vectors: wij(k+1) = wij(k) + alpha [xi − wij(k)], i = 1, ..., N.
- May reduce the learning rate.
- May reduce the radii that define the topological neighborhoods.
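The loop above can be sketched in Python. This is a minimal 1-D illustration, not the report's own implementation; the function name, epoch count, seed, and decay schedule are our assumptions:

```python
import numpy as np

def train_som(data, M=3, alpha=0.6, alpha_decay=0.5, alpha_min=0.01,
              radius=1, epochs=50, seed=0):
    """Minimal 1-D SOM sketch of the steps above (illustrative, not tuned).

    data : (num_patterns, N) array of training vectors.
    M    : number of cluster units arranged on a line, so unit j's
           neighborhood of radius r is {j - r, ..., j + r}.
    """
    rng = np.random.default_rng(seed)
    N = data.shape[1]
    W = rng.random((M, N))                     # random initial weight vectors
    for _ in range(epochs):
        for x in data:                         # cycle through the training set
            d = np.sum((W - x) ** 2, axis=1)   # (b) squared distances
            j0 = int(np.argmin(d))             # (c) winning unit
            lo, hi = max(0, j0 - radius), min(M - 1, j0 + radius)
            for j in range(lo, hi + 1):        # (d) Kohonen update rule
                W[j] += alpha * (x - W[j])
        alpha = max(alpha * alpha_decay, alpha_min)  # may reduce learning rate
        if alpha < 0.02:
            radius = 0                         # may shrink the neighborhood
    return W
```

Each weight vector returned serves as the exemplar for one cluster unit; assigning a new pattern to a cluster is then another nearest-weight-vector lookup.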
We have been given the training data as follows:
x: 0.91, 0.74, 0.34, 0.42, 0.71, 0.72, 0.63, 0.36, 0.71, 0.36, 0.50, 0.11, 0.39, 0.19, 0.85, 0.46, 0.33, 0.79, 0.32, 0.47, 0.66, 0.46, 0.59, 0.28, 0.09, 0.45, 0.17, 0.34, 0.99, 0.15
y: 0.14, 0.97, 0.96, 0.98, 0.87, 0.85, 0.86, 0.77, 0.64, 0.68, 0.78, 0.15, 0.79, 0.15, 0.86, 0.28, 0.49, 0.87, 0.30, 0.55, 0.24, 0.40, 0.32, 0.47, 0.57, 0.29, 0.59, 0.93, 0.31, 0.78
z: -0.57, -0.49, -0.37, -0.29, -0.27, -0.24, -0.14, -0.01, 0.08, 0.13, 0.15, 0.19, 0.22, 0.24, 0.26, 0.27, 0.31, 0.35, 0.40, 0.41, 0.50, 0.55, 0.55, 0.65, 0.67, 0.72, 0.81, 0.88, 0.90, 0.93
Random weights have been assigned to the cluster neurons. Since there are three input vectors and Kohonen’s SOM is a dimensionality reduction algorithm, the maximum number of clusters is taken as two. We have taken the learning rate, alpha, as 0.6, decreasing it iteratively by a factor of 0.5 until it reaches a value of 0.01. The radius of the neighborhood is taken as 1, and is reduced once the learning rate falls below 0.02.
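A rough sketch of this run is below. The seed is our own arbitrary choice, and the neighborhood handling is simplified to winner-take-all (with only two units on a line, this matches the regime after the radius shrinks to 0), so exact cluster labels may differ from run to run:

```python
import numpy as np

# The three 30-dimensional input vectors from the table above.
x = [0.91, 0.74, 0.34, 0.42, 0.71, 0.72, 0.63, 0.36, 0.71, 0.36,
     0.50, 0.11, 0.39, 0.19, 0.85, 0.46, 0.33, 0.79, 0.32, 0.47,
     0.66, 0.46, 0.59, 0.28, 0.09, 0.45, 0.17, 0.34, 0.99, 0.15]
y = [0.14, 0.97, 0.96, 0.98, 0.87, 0.85, 0.86, 0.77, 0.64, 0.68,
     0.78, 0.15, 0.79, 0.15, 0.86, 0.28, 0.49, 0.87, 0.30, 0.55,
     0.24, 0.40, 0.32, 0.47, 0.57, 0.29, 0.59, 0.93, 0.31, 0.78]
z = [-0.57, -0.49, -0.37, -0.29, -0.27, -0.24, -0.14, -0.01, 0.08, 0.13,
     0.15, 0.19, 0.22, 0.24, 0.26, 0.27, 0.31, 0.35, 0.40, 0.41,
     0.50, 0.55, 0.55, 0.65, 0.67, 0.72, 0.81, 0.88, 0.90, 0.93]
data = np.array([x, y, z])

rng = np.random.default_rng(7)           # arbitrary seed (our assumption)
M, alpha = 2, 0.6                        # two cluster units, initial rate 0.6
W = rng.random((M, data.shape[1]))       # random initial weights

while alpha >= 0.01:
    for p in data:
        j0 = int(np.argmin(np.sum((W - p) ** 2, axis=1)))
        W[j0] += alpha * (p - W[j0])     # update the winning unit
    alpha *= 0.5                         # halve the learning rate each pass

# Assign each of the three input vectors to its nearest cluster unit.
labels = [int(np.argmin(np.sum((W - p) ** 2, axis=1))) for p in data]
print(labels)
```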
The above figure tells us that input vectors 1 and 2 (x and y) belong to cluster 2, while input vector 3 (z) belongs to cluster 1.
Modifying the Input Vectors
Adding another input vector maintains cluster stability:
However, removing input vector 3 results in the following scenario:
When we modify the rate at which alpha decreases, or the point at which the radius is reduced, the input patterns are clustered in the same way, even though the cluster labels or positions may differ.