Introduction
Within our platform we use a number of different visualisations and terms. For certain terms we felt it would be helpful to provide you with a definition and a description of how we use them within our platform. The platform also contains tooltips which provide further explanations; these are shown as a ? inside either a white or grey bubble, depending on whether you are using light or dark mode.
Averages
- Definition: We use mean averages within our platform, which means we add together all of your responses and then divide the total by the number of responses to give an average score.
- How we use Averages within our platform: We use Averages within the platform to provide headline scores for Happiness, Engagement, our brain systems, neuroscience themes, surveys and questions.
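As an illustration of the mean average described above, here is a minimal Python sketch; the scores are invented purely for this example.

```python
# A minimal sketch of the mean average used for headline scores.
# The scores below are invented purely for illustration.
scores = [7, 8, 6, 9, 10, 7]

average = sum(scores) / len(scores)  # add all responses, then divide by the count
print(round(average, 2))  # 7.83
```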
Average Magnitude
- Definition: The Average Magnitude shows the overall strength of emotion (both positive and negative) within comments received on our platform. Each expression of emotion contributes to the magnitude score, so longer comments are likely to have a greater magnitude score.
- How we use Average Magnitude within our platform: In our sentiment analysis we use the average magnitude to measure the strength of emotion within the comments you have received.
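As a rough sketch of how an average magnitude could be calculated, the hypothetical per-comment magnitudes below are simply averaged; the actual values produced by the platform's sentiment analysis will differ.

```python
# A minimal sketch of averaging per-comment magnitude scores.
# The magnitudes are hypothetical outputs of a sentiment-analysis step.
magnitudes = [0.4, 2.0, 0.9, 3.5]  # longer, more emotive comments tend to score higher

average_magnitude = sum(magnitudes) / len(magnitudes)
print(round(average_magnitude, 2))  # 1.7
```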
Average Sentiment
- Definition: Average Sentiment helps you judge the typical sentiment of the comments you received. The sentiment is calculated within our platform using artificial intelligence, which analyses the emotional content of the comments received.
- How we use Average Sentiment within our platform: Our platform has a sentiment analysis insights page which analyses your comments and assigns a sentiment score to each comment. The score ranges from -1.0 (very negative) to 1.0 (very positive), with neutral comments scoring between -0.25 and 0.25.
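The sketch below illustrates the -1.0 to 1.0 scale and the neutral band described above; the comment scores are invented, and exactly how the platform treats scores on the -0.25 and 0.25 boundaries is an assumption.

```python
# A minimal sketch of the -1.0 to 1.0 sentiment scale with a neutral band.
def label(score: float) -> str:
    if score > 0.25:
        return "positive"
    if score < -0.25:
        return "negative"
    return "neutral"

comment_scores = [0.8, -0.1, -0.6, 0.3]  # hypothetical per-comment scores
average_sentiment = sum(comment_scores) / len(comment_scores)

print(round(average_sentiment, 2), label(average_sentiment))  # 0.1 neutral
```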
Benchmarks
- Definition: A Benchmark is a point of reference which provides context for your scores; generally, it compares average scores. You can benchmark both internally and externally.
- How we use Benchmarks within our platform: Within our platform we offer both internal and external benchmarks, which provide you with additional context around your performance.
Completion Rates
- Definition: Completion Rates are the percentage of people who started and completed your survey.
- How we use Completion Rates within our platform: We use Completion Rates as an indication of how engaging a survey is. For example, a high Completion Rate would indicate that people were engaged with the survey and that it wasn’t too long.
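A minimal sketch of the calculation, using invented counts:

```python
# Completion Rate: the share of people who started the survey and finished it.
started = 250
completed = 200

completion_rate = completed / started * 100
print(f"{completion_rate:.0f}%")  # 80%
```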
Correlation Analysis
- Definition: Correlation is when two sets of data are closely tied together. For example, taller people tend to wear larger shoe sizes.
- How we use Correlation Analysis within our platform: We use Correlation Analysis as a way of understanding the relationship between different questions within your survey. For example, if we run a Correlation Analysis for ‘Overall, how happy are you at work and please tell us why?’ it may show that the question ‘How well does your organisation keep you informed?’ has a high correlation. This means that if you focus attention on keeping people informed and improve this score, it is likely you will also boost the score for the happiness question.
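As a sketch of the underlying idea, the example below uses the standard Pearson correlation coefficient from Python's statistics module (Python 3.10+); the paired scores are invented and the platform's exact method may differ.

```python
# A minimal sketch of correlating scores from two questions.
from statistics import correlation  # Pearson's r, Python 3.10+

happiness = [6, 7, 8, 5, 9, 7]  # hypothetical scores for the happiness question
informed = [5, 7, 9, 4, 9, 6]   # hypothetical scores for the 'kept informed' question

r = correlation(happiness, informed)
print(round(r, 2))  # a value close to 1.0 indicates a strong positive relationship
```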
Drop Out Rate
- Definition: Drop Out Rates are the percentage of people who started, but did not complete the survey.
- How we use Drop Out Rate within our platform: A high Drop Out Rate is a good indication that your respondents were not engaged in the survey. For example, they may not have felt it was relevant to them, or they may have found the survey too long and exited before completing it in full.
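Using the same invented counts as the Completion Rate sketch, the Drop Out Rate is simply the other side of that calculation:

```python
# Drop Out Rate: the share of people who started the survey but did not finish it.
started = 250
completed = 200

drop_out_rate = (started - completed) / started * 100
print(f"{drop_out_rate:.0f}%")  # 20%
```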
Entities
- Definition: The AI we use to analyse comments has the ability to look at the words being used and group them into themes. These themes are called ‘entities’.
- How we use Entities within our platform: The AI may detect comments using the words ‘boss’, ‘leader’ and ‘supervisor’ and, because it recognises that these words are related to the theme of management, it will analyse them together as a group and provide a sentiment score for them.
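The sketch below is a simplified, hypothetical illustration of that grouping step; the word-to-theme mapping and the sentiment scores are invented, not the platform's actual model.

```python
# A hypothetical sketch of grouping related words into an entity theme
# and giving the theme a single sentiment score.
theme_of = {"boss": "management", "leader": "management", "supervisor": "management"}

mentions = [("boss", -0.4), ("leader", 0.6), ("supervisor", 0.2)]  # (word, sentiment)

management = [score for word, score in mentions if theme_of.get(word) == "management"]
print(round(sum(management) / len(management), 2))  # 0.13
```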
Filters
- Definition: A filter is a way of breaking down your results. Examples of filters include Department, Location, etc.
- How we use Filters within our platform: We use filters within our platform to provide a greater level of insight, which means customers can be more specific in their analysis and action planning.
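A minimal sketch of what a filter does to your results; the response records are invented:

```python
# Filtering results down to a single Department.
responses = [
    {"department": "Sales", "score": 8},
    {"department": "Finance", "score": 6},
    {"department": "Sales", "score": 9},
]

sales_only = [r for r in responses if r["department"] == "Sales"]
print(len(sales_only))  # 2 responses remain after filtering
```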
Filter comparison
- Definition: A filter comparison allows you to compare the results of your chosen filters against each other in one graph.
- How we use Filter comparisons within our platform: By seeing the comparative performance of filters such as Departments, organisations can identify which Departments are performing well, acknowledge this performance, and share best practice from these Departments with others in order to boost their scores.
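A minimal sketch of a filter comparison, averaging invented scores for each Department so they can be shown side by side:

```python
# Comparing filters by averaging scores per Department.
from collections import defaultdict

responses = [
    {"department": "Sales", "score": 8},
    {"department": "Finance", "score": 6},
    {"department": "Sales", "score": 9},
    {"department": "Finance", "score": 7},
]

scores_by_department = defaultdict(list)
for r in responses:
    scores_by_department[r["department"]].append(r["score"])

for department, scores in scores_by_department.items():
    print(department, sum(scores) / len(scores))  # Sales 8.5, Finance 6.5
```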
Heatmaps
- Definition: A Heatmap is a data visualisation method which uses colour to help you understand relative performance.
- How we use Heatmaps within our platform: Our platform enables you to see – at a glance – how well your teams or groups within your organisation are performing against one another. Dark green represents high scores and dark red represents low scores.
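The sketch below shows the kind of score-to-colour mapping a heatmap relies on; the thresholds and team scores are invented and are not the platform's actual colour scale.

```python
# A hypothetical score-to-colour mapping for a heatmap.
def heat_colour(score: float) -> str:
    if score >= 8:
        return "dark green"  # high score
    if score >= 5:
        return "amber"       # mid-range score
    return "dark red"        # low score

for team, score in {"Sales": 8.6, "Finance": 6.2, "Operations": 4.1}.items():
    print(team, heat_colour(score))
```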
Percentage Favourable
- Definition: Percentage Favourable looks at how many responses scored at or above a certain threshold. Scores at or above this threshold are deemed favourable. A calculation is then run to convert those favourable scores into a percentage of all the scores you received.
- How we use Percentage Favourable within our platform: Within our platform we consider any score of 7 or above to be favourable. We use the Percentage Favourable insight within our reporting.
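A minimal sketch using the platform's threshold of 7 or above; the scores themselves are invented:

```python
# Percentage Favourable: the share of scores at or above the threshold of 7.
scores = [9, 7, 5, 8, 4, 10, 6, 7, 3, 8]

favourable = [s for s in scores if s >= 7]
percentage_favourable = len(favourable) / len(scores) * 100
print(f"{percentage_favourable:.0f}%")  # 60%
```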
Response Rates
- Definition: The percentage of people who have responded to your survey. For example, if you invite 1,000 people to your survey and 700 people respond, your Response Rate will be 70%.
- How we use Response Rates within our platform: We use Response Rates as a way of demonstrating how representative the results you’ve received are. For example, if you have a Response Rate of 70% you can feel confident the results you have are a fair representation of how your entire organisation feels.
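A minimal sketch of the calculation, using the 1,000 invited / 700 responded example above:

```python
# Response Rate: responses received as a percentage of invitations sent.
invited = 1000
responded = 700

response_rate = responded / invited * 100
print(f"{response_rate:.0f}%")  # 70%
```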
Standard Deviation
- Definition: Standard Deviation allows you to see how consistent the responses you’ve received are. If the Standard Deviation is 0 then your respondents all felt the same about a particular question. The higher the Standard Deviation, the more variety there is in your responses.
- How we use Standard Deviation within our platform: We use Standard Deviation as a way to indicate whether your survey respondents have replied consistently or are polarised on a question.
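The sketch below contrasts consistent and polarised responses using Python's statistics module; the scores are invented, and whether the platform uses the population or sample form of the calculation is an assumption.

```python
# A minimal sketch contrasting consistent and polarised responses.
from statistics import pstdev  # population standard deviation

consistent = [7, 7, 7, 7, 7, 7]
polarised = [1, 10, 1, 10, 1, 10]

print(pstdev(consistent))           # 0.0 -> everyone gave the same answer
print(round(pstdev(polarised), 2))  # 4.5 -> answers are widely spread
```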
Score Distribution
- Definition: The Score Distribution shows the number of times each score within your rating scale is chosen by a respondent.
- How we use Score Distribution within our platform: We use the Score Distribution to give an indication of how similar or varied the responses to your survey are.
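A minimal sketch of building a Score Distribution from invented responses:

```python
# Counting how often each score on the rating scale was chosen.
from collections import Counter

responses = [7, 8, 7, 9, 10, 7, 8, 6]
distribution = Counter(responses)

for score in sorted(distribution):
    print(score, distribution[score])  # e.g. 7 was chosen 3 times
```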