Clustering/segmentation is one of the most important techniques used in Acquisition Analytics. K-means clustering groups similar observations into clusters so that insights can be extracted from vast amounts of unstructured data.
When you want to analyze the Facebook/Twitter/YouTube comments around a particular event, it is impossible to manually read each and every mention to see where the sentiment regarding a particular brand/event/person lies.
The basic idea of K-means clustering is to form K seeds first, and then group observations into K clusters based on their distance from each of the K seeds. An observation is assigned to the nth seed/cluster if its distance to the nth seed is the minimum across all seeds.
Below is a brief overview of the methodology involved in performing a K-means clustering analysis.
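The nearest-seed assignment rule can be sketched in a few lines. This is an illustrative Python sketch, not the exact implementation the article has in mind; the vectors and seed values are made up for demonstration.

```python
# Minimal sketch of the nearest-seed assignment rule: an observation joins
# the cluster of whichever seed it is closest to (Euclidean distance).
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_to_nearest_seed(observation, seeds):
    """Return the index of the seed closest to the observation."""
    distances = [euclidean(observation, s) for s in seeds]
    return distances.index(min(distances))

# Illustrative data: three seeds in 2-D, one observation to assign.
seeds = [(0, 0), (5, 5), (10, 0)]
print(assign_to_nearest_seed((6, 4), seeds))  # closest to (5, 5) -> 1
```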
The process of building K clusters on social media text data:

The first step is to pull the social media mentions for a particular timeframe using social media listening tools (Radian6, Sysomos, Synthesio, etc.). You will need to build a query/add keywords to pull the data from these tools.

The next step is data cleansing. This is the most important part, as social media comments do not follow any specific format. People use local dialects, slang, etc. on social media to express their emotions, so it's important to be able to see through them and understand the underlying sentiment.

Remove punctuation, numbers, and stopwords (R has dedicated stopword libraries, but you can also create your own list of stopwords). Also remove duplicate rows and URLs from the social media mentions.
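The cleansing steps above can be sketched as follows. In R you would typically reach for the tm package's stopword lists; here is an equivalent Python sketch with a tiny, illustrative stopword set (a real analysis would use a full list).

```python
# Sketch of the cleansing step: lowercase, strip URLs, punctuation, and
# numbers, drop stopwords, then deduplicate the mentions.
import re

STOPWORDS = {"a", "an", "the", "to", "is", "in", "for", "of", "and"}

def clean_mention(text):
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # remove punctuation and numbers
    words = [w for w in text.split() if w not in STOPWORDS]
    return " ".join(words)

mentions = [
    "Book a flight to Delhi! http://example.com",
    "Book a flight to Delhi! http://example.com",  # duplicate row
    "Online festival in Delhi 2023",
]
# dict.fromkeys removes duplicates while preserving order
cleaned = list(dict.fromkeys(clean_mention(m) for m in mentions))
print(cleaned)  # ['book flight delhi', 'online festival delhi']
```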

The next step is to create a corpus vector of all the words.

Once you have created the corpus vector of words, the next step is to create a document-term matrix.
Let's visualize the problem with an example. Assume there are 10 documents/mentions and 5 unique words after data cleansing. Below is the document-term matrix for this dataset. It shows how many times each word appears in each document. For example, in document 1 (D1), the words online, book, and Delhi have each been mentioned once.
Document Term Matrix

Documents  Online  Festival  Book  Flight  Delhi
D1         1       0         1     0       1
D2         2       1         2     1       1
D3         0       0         1     1       1
D4         1       2         0     2       0
D5         3       1         0     0       0
D6         0       1         1     1       2
D7         2       0         1     2       1
D8         1       1         0     1       0
D9         1       0         2     0       0
D10        0       1         1     1       1
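Building a document-term matrix amounts to counting each vocabulary term in each document. A minimal Python sketch, using the first two documents of the worked example (the document strings are invented to reproduce the counts shown):

```python
# Build a document-term matrix: one row per document, one column per term,
# each cell holding the term's count in that document.
docs = [
    "online book delhi",                               # D1
    "online online festival book book flight delhi",   # D2
]
vocab = ["online", "festival", "book", "flight", "delhi"]

dtm = [[doc.split().count(term) for term in vocab] for doc in docs]
print(dtm)  # [[1, 0, 1, 0, 1], [2, 1, 2, 1, 1]]
```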

Let’s assume that we want to create K = 3 clusters. First, three seeds must be chosen. Suppose D2, D5, and D7 are chosen as the initial three seeds.

The next step is to calculate the Euclidean distance of the other documents from D2, D5, and D7.

Assuming U = Online, V = Festival, X = Book, Y = Flight, Z = Delhi, the Euclidean distance between D1 and D2 would be:
((U1 − U2)^2 + (V1 − V2)^2 + (X1 − X2)^2 + (Y1 − Y2)^2 + (Z1 − Z2)^2)^0.5
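This formula is easy to check numerically. The sketch below plugs in the D1 and D2 rows from the document-term matrix above:

```python
# Verify the D1-D2 Euclidean distance using the rows of the
# document-term matrix (columns: online, festival, book, flight, delhi).
import math

d1 = [1, 0, 1, 0, 1]  # D1
d2 = [2, 1, 2, 1, 1]  # D2

dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))
print(round(dist, 1))  # 2.0, matching the distance matrix
```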
Distance Matrix (distance from the three seeds)

Documents  D2    D5    D7    Min. Distance  Movement
D1         2.0   2.6   2.2   2.0            D2
D2         0.0   2.6   1.7   0.0
D3         2.4   3.6   2.2   2.2            D7
D4         2.8   3.0   2.6   2.6            D7
D5         2.6   0.0   2.8   0.0
D6         2.4   3.9   2.6   2.4            D2
D7         1.7   2.8   0.0   0.0
D8         2.4   2.2   2.2   2.2            D5
D9         2.0   3.0   2.6   2.0            D2
D10        2.2   3.5   2.4   2.2            D2

(D8 is equidistant from D5 and D7; it is assigned to D5, the first of the tied seeds.)
Clusters  # of Observations
D2        5
D5        2
D7        3

Hence, the 10 documents have been assigned to 3 different clusters. Next, instead of centroids, medoids are formed, and the distances are recalculated to ensure that each document is assigned to the cluster of its closest medoid.

Medoids are used to build the story for each cluster.
But one important question remains: how do you choose the optimal number of clusters?
One approach is the Elbow method. It is based on plotting the cost function for various numbers of clusters and identifying the breakpoint: if adding more clusters does not significantly reduce the within-cluster variance, you should stop adding clusters. Although this method cannot pinpoint the optimal number of clusters exactly, it can give you an optimal range.
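The elbow curve itself is cheap to compute. Below is an illustrative plain-Python sketch on the example data; for simplicity it seeds each run with the first K documents (a naive choice made only for demonstration) and totals the within-cluster sum of squares (WCSS):

```python
# Elbow-method sketch: compute WCSS for K = 1..5 and look for the K after
# which the drop in WCSS flattens out.
dtm = [
    [1,0,1,0,1],[2,1,2,1,1],[0,0,1,1,1],[1,2,0,2,0],[3,1,0,0,0],
    [0,1,1,1,2],[2,0,1,2,1],[1,1,0,1,0],[1,0,2,0,0],[0,1,1,1,1],
]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def wcss(k):
    """Total squared distance from each document to its nearest seed."""
    seeds = dtm[:k]  # naive seed choice, purely for illustration
    return sum(min(sq_dist(row, s) for s in seeds) for row in dtm)

curve = [wcss(k) for k in range(1, 6)]
print(curve)  # WCSS only decreases as K grows; the "elbow" is where it flattens
```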