Social media analytics
Social media analytics is the process of gathering and analyzing data from social networks such as Facebook, Instagram, LinkedIn, or Twitter. A part of social media analytics is called social media monitoring or social listening. It is commonly used by marketers to track online conversations about products and companies. One author defined it as "the art and science of extracting valuable hidden insights from vast amounts of semi-structured and unstructured social media data to enable informed and insightful decision-making."
Process
There are three main steps in analyzing social media: data identification, data analysis, and information interpretation. To maximize the value derived at every point during the process, analysts may define a question to be answered. The important questions for data analysis are: "Who? What? Where? When? Why? and How?" These questions help in determining the proper data sources to evaluate, which can affect the type of analysis that can be performed. To make it easier to track social media analytics, purpose-built tools such as Hootsuite, Sprout Social, Later, and Buffer, as well as native analytics tools from social media platforms like Facebook Insights, Twitter Analytics, and Instagram Insights, have been created to help companies consolidate analytics into one place.
Data identification
Data identification is the process of identifying the subsets of available data to focus on for analysis. Raw data becomes useful only once it is interpreted: after data has been analyzed, it can begin to convey a message, and any data that conveys a meaningful message becomes information. At a high level, unprocessed data passes through the following forms on its way to an exact message:
- Noisy data: relevant and irrelevant data mixed together
- Filtered data: only relevant data
- Information: data that conveys a vague message
- Knowledge: data that conveys a precise message
- Wisdom: data that conveys an exact message and the reason behind it
To derive wisdom from unprocessed data, we need to start processing it, refine the dataset by including only the data we want to focus on, and organize the data to identify information. In the context of social media analytics, data identification means deciding "what" content is of interest. In addition to the text of the content, we want to know: Who wrote the text? Where was it found, or on which social media venue did it appear? Are we interested in information from a specific locale? When was it said on social media?
Attributes of data that need to be considered are as follows:
- Structure: Structured data is data that has been organized into a formatted repository, typically a database, so that its elements can be made addressable for more effective processing and analysis. Unstructured data, by contrast, lacks such formatting.
- Language: Language becomes significant if we want to know the sentiment of a post rather than the number of mentions.
- Region: It is important to ensure that the data included in the analysis comes only from the region on which the analysis is focused. For example, if the goal is to identify clean water problems in India, we would want to make sure that the data collected is from India only.
- Type of Content: The content of the data could be text, photos, audio, or video.
- Venue: Social media content is generated in a variety of venues, such as news sites and social networking sites. Depending on the type of project the data is collected for, the venue can be very significant.
- Time: It is important to collect data posted in the time frame that is being analyzed.
- Ownership of Data: Is the data private or publicly available? Is there any copyright in the data? These are the important questions to be addressed before collecting data.
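The attributes above can be sketched as a simple filtering step. This is an illustrative sketch only: the `Post` record and its fields are invented for the example, not taken from any real platform's API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post record; the field names mirror the attributes discussed
# above (language, region, time, ownership) and are purely illustrative.
@dataclass
class Post:
    text: str
    author: str
    language: str
    region: str
    timestamp: datetime
    is_public: bool

def identify_relevant(posts, language, region, start, end):
    """Keep only public posts matching the language, region, and time frame."""
    return [
        p for p in posts
        if p.is_public
        and p.language == language
        and p.region == region
        and start <= p.timestamp <= end
    ]

posts = [
    Post("Clean water access is improving", "a", "en", "IN",
         datetime(2023, 5, 1), True),
    Post("Unrelated post", "b", "en", "US", datetime(2023, 5, 2), True),
    Post("Private note", "c", "en", "IN", datetime(2023, 5, 3), False),
]
subset = identify_relevant(posts, "en", "IN",
                           datetime(2023, 1, 1), datetime(2023, 12, 31))
# Only the first post survives: the second is from the wrong region,
# and the third is not publicly available.
```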
Data analysis
Developing a data model is a process or method used to organize data elements and standardize how the individual data elements relate to each other. This step is important because we want to run a computer program over the data, and we need a way to tell the computer which words or themes are important and whether certain words relate to the topic we are exploring.
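One minimal way to sketch such a data model is a dictionary mapping theme names to the words that signal them; a program can then tag each post with the themes it matches. The theme names and vocabularies here are invented for illustration:

```python
# Hypothetical data model: each theme name maps to the set of words that
# signal it. A real model would be far richer (stemming, phrases, weights).
THEMES = {
    "pricing": {"price", "cost", "expensive", "cheap"},
    "support": {"help", "support", "ticket", "response"},
}

def tag_themes(text, themes=THEMES):
    """Return the sorted list of theme names whose vocabulary appears in text."""
    words = set(text.lower().split())
    return sorted(name for name, vocab in themes.items() if words & vocab)

tag_themes("The price is fair but support response is slow")
# -> ['pricing', 'support']
```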
In the analysis of our data, it is handy to have several tools at our disposal to gain different perspectives on the discussions taking place around the topic. The aim is to configure each tool to perform at its peak for a particular task. For example, if we take a large amount of data about computer professionals, say "IT architects", and build a word cloud from it, no doubt the largest word in the cloud would be "architect". This analysis is also about tool selection: some tools do a good job at determining sentiment, whereas others do a better job at breaking down text into a grammatical form that enables us to better understand the meaning and use of various words or phrases. It is difficult to enumerate each and every step to take on an analytical journey; the approach is very much iterative, as there is no prescribed way of doing things.
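The word-cloud observation above can be made concrete with a plain frequency count that excludes the dominating term so that the remaining vocabulary becomes visible. The texts and the excluded word are invented for illustration:

```python
from collections import Counter

def word_frequencies(texts, exclude=()):
    """Count word occurrences, dropping terms that would dominate the cloud."""
    counts = Counter()
    for text in texts:
        counts.update(w for w in text.lower().split() if w not in exclude)
    return counts

texts = [
    "architect designs systems",
    "architect reviews cloud architecture",
    "cloud architect hiring trends",
]
# Without the exclusion, "architect" (3 occurrences) would drown out
# everything else; with it, "cloud" (2) becomes the largest word.
counts = word_frequencies(texts, exclude={"architect"})
```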
A taxonomy of the analysis, and the insight derived from each dimension, is as follows:
- Depth of Analysis: Simple descriptive statistics based on streaming data, ad hoc analysis on accumulated data, or deep analysis performed on accumulated data. This dimension is driven mainly by the amount of time available to come up with the results of a project. It can be considered a broad continuum, where the analysis time ranges from a few hours at one end to several months at the other. This analysis can answer the following types of questions:
- * How many people mentioned Wikipedia in their tweets?
- * Which politician had the highest number of likes during the debate?
- * Which competitor is gathering the most mentions in the context of social business?
- Machine Capacity: The amount of CPU needed to process data sets in a reasonable time period. Capacity numbers need to address not only CPU needs but also the network capacity needed to retrieve data. This analysis can be performed as real-time, near real-time, ad hoc exploration, or deep analysis. Real-time analysis in social media is an important tool for understanding the public's perception of a topic as it unfolds, allowing for reaction or an immediate change in course. In near real-time analysis, we assume that data is ingested into the tool at a rate that is less than real-time. Ad hoc analysis is a process designed to answer a single specific question; its product is typically a report or data summary. Deep analysis implies an analysis that spans a long time and involves a large amount of data, which typically translates into a high CPU requirement.
- Domain of Analysis: The domain of analysis is broadly classified into external social media and internal social media. Most of the time, when people use the term social media, they mean external social media: content generated on popular social media sites such as Twitter, Facebook, and LinkedIn. Internal social media includes enterprise social networks, which are private social networks used to assist communication within a business.
- Velocity of Data: The velocity of data in social media can be divided into two categories: data at rest and data in motion. Analysis of data in motion can answer questions such as: How is the sentiment of the general population toward the players changing during the course of a match? Is the crowd conveying positive sentiment about a player who is actually losing the game? In these cases, the analysis is done as the data arrives, and the amount of detail produced is directly correlated with the complexity of the analytical tool or system: a highly complex tool produces more detail. The second type of analysis in the context of velocity is analysis of data at rest, performed once the data is fully collected. This analysis can provide insights such as: Which of your company's products has the most mentions compared to the others? What is the relative sentiment around your products compared to a competitor's products?
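The simplest depth-of-analysis question above, "How many people mentioned Wikipedia in their tweets?", reduces to a descriptive count. The tweet records here are invented for illustration; note that the question asks about people, so distinct users are counted rather than raw matches:

```python
tweets = [
    {"user": "a", "text": "Reading Wikipedia all day"},
    {"user": "b", "text": "Wikipedia is my go-to source"},
    {"user": "a", "text": "Back on Wikipedia again"},
    {"user": "c", "text": "Off to the library"},
]

# Count distinct users who mention the term, not the number of matching tweets.
people = {t["user"] for t in tweets if "wikipedia" in t["text"].lower()}
len(people)  # -> 2 (user "a" mentioned it twice but counts once)
```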
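The data-in-motion case can be sketched as a rolling score updated as each post arrives. Everything here is illustrative: the tiny word lists stand in for a real sentiment model, and the window size is arbitrary.

```python
from collections import deque

# Toy sentiment lexicon; a stand-in for a real sentiment model.
POSITIVE = {"great", "win", "love"}
NEGATIVE = {"bad", "lose", "awful"}

def score(text):
    """Positive words minus negative words in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

class RollingSentiment:
    """Track the average sentiment of the last `window` posts as they arrive."""
    def __init__(self, window=3):
        self.scores = deque(maxlen=window)

    def update(self, text):
        self.scores.append(score(text))
        return sum(self.scores) / len(self.scores)

tracker = RollingSentiment(window=3)
for post in ["great goal love it", "bad pass", "awful defending they lose"]:
    current = tracker.update(post)
# `current` reflects only the most recent window of posts, so the running
# average shifts as the crowd's mood changes during the match.
```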
Information interpretation
The best visualizations are those that expose something new about the underlying patterns and relationships contained in the data. Exposing these patterns and understanding them play a key role in the decision-making process. There are mainly three criteria to consider in visualizing data.
- Understand the audience: before building the visualization, set a goal: to convey great quantities of information in a format that is easily assimilated by the consumer of that information. It is important to answer "Who is the audience?" and "Can the audience be assumed to know the terminology used?" An audience of experts will have different expectations than a general audience; therefore, those expectations have to be considered.
- Set up a clear framework: the analyst needs to ensure that the visualization is syntactically and semantically correct. For example, when using an icon, the element should bear resemblance to the thing it represents, with size, color, and position all communicating meaning to the viewer.
- Tell a story: analytical information is complex and difficult to assimilate; thus, the goal of visualization is to help the viewer understand and make sense of the information. Storytelling helps the viewer gain insight from the data. Visualization should package information into a structure that is presented as a narrative and is easily remembered. This is important in many scenarios where the analyst is not the same person as the decision-maker.
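As a minimal sketch of these criteria, the function below renders mention counts as a text bar chart in which size and position carry the meaning: bars are sorted so the story (which product leads) is immediate. The product names and counts are invented for illustration.

```python
def text_bar_chart(counts, width=20):
    """Render counts as horizontal bars, largest first, scaled to `width`."""
    top = max(counts.values())
    lines = []
    for label, value in sorted(counts.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * value / top)
        lines.append(f"{label:<12}{bar} {value}")
    return "\n".join(lines)

# Hypothetical mention counts for illustration.
mentions = {"Product A": 120, "Product B": 45, "Competitor": 80}
print(text_bar_chart(mentions))
```

Sorting by value is the storytelling choice here: the viewer reads the ranking without having to compare numbers.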