Properly monitoring end-user experience on SaaS platforms is a highly advanced topic, but understanding the results should not be. So we provide results in the form of an easy-to-understand score. Think of it like a test score in school: 100 is the best score you can get, and 0 is the worst.
How Should I Read the Scores?
Our scores come out of the box with a few built-in ranges (a small example of mapping a score to its range follows this list).
- Good
  - Scores above 85 and up to 100
  - Color-coded green in status widgets in the portal
  - Users would report that the platform is running normally when scores are in the good range.
  - Statistically speaking, a score of 87 is worse than a score of 100, but the user is not likely to notice the difference.
- Impacted
  - Scores above 50 and up to 85
  - Color-coded yellow in status widgets in the portal
  - The average user would perceive that the monitored SaaS platform is running slowly, but the slowness is probably not enough to prevent them from working.
- Degraded
  - Scores from 0 to 50
  - Color-coded red in status widgets in the portal
  - User experience is impacted severely enough that users would be unable to function normally.
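To make the ranges concrete, here is a minimal Python sketch of mapping a score to its range and widget color. The boundaries come from the list above; treating 85 and 50 as belonging to the lower bands is an assumption based on the original "85+" and "50+" notation.

```python
def score_band(score: float) -> tuple[str, str]:
    """Map a 0-100 experience score to its range and widget color.

    Boundary handling (85 and 50 belonging to the lower bands) is an
    assumption based on the "above 85" / "above 50" wording above.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score > 85:
        return "Good", "green"
    if score > 50:
        return "Impacted", "yellow"
    return "Degraded", "red"


print(score_band(87))  # ('Good', 'green') - worse than 100, but users won't notice
print(score_band(62))  # ('Impacted', 'yellow')
print(score_band(35))  # ('Degraded', 'red')
```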
How do we score?
There are many factors that go into our scoring mechanism. First of all, we don't just connect to M365 and report that it is OK.
Being able to ‘reach’ a SaaS service doesn’t mean that service is working satisfactorily for your users. So our score is the product of multiple real-user simulations that we run continuously from the user’s context, culminating in three distinct scores that help us identify and isolate potential problem areas:
- Authentication: Can users successfully authenticate and stay authenticated to the SaaS service?
- Networking: Can users successfully and satisfactorily reach the SaaS service?
- API: Can users successfully and satisfactorily interact with the SaaS service?
We do this for each monitored M365 workload/service and each user individually, and then compare the results against what is expected as ‘normal’ (the sketch below illustrates the idea).
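As an illustration only, the sketch below models how the three sub-scores for a workload could be combined into a single score. The ProbeResult model, its field names, and the plain average are assumptions; the actual weighting is not described here.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class ProbeResult:
    """One simulated user interaction with a monitored workload (hypothetical model)."""
    workload: str          # e.g. "OneDrive", "Exchange Online"
    authentication: float  # 0-100: could the user authenticate and stay authenticated?
    networking: float      # 0-100: could the user reach the service satisfactorily?
    api: float             # 0-100: could the user interact with the service satisfactorily?


def workload_score(results: list[ProbeResult]) -> float:
    """Combine the three sub-scores of each simulation into one workload score.

    A plain average is assumed here purely for illustration.
    """
    return mean(mean([r.authentication, r.networking, r.api]) for r in results)


# Example: one OneDrive simulation where the API probe was slow.
print(round(workload_score([ProbeResult("OneDrive", 98, 91, 74)]), 1))  # 87.7
```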
What do we mean by ‘expected normal’?
To calculate the expected normal, we look at the user’s individual situation and expectations. A user’s network experience expectations while working in an office with a 100 Gbps LAN connection are probably a little different from those in their home office with a 10 Mbps Wi-Fi network.
You also need to account for the fact that the expected ‘normal’ can depend on the time of day and the day of the week. For instance, on a Monday morning when everyone logs in at the same time, authentication is likely a little slower for everyone and therefore shouldn’t be compared to a Friday afternoon. The user might not even notice this, but a typical monitoring tool could, and probably would, give off a false alert if you don’t account for those specific time-related situations.
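A minimal sketch of this idea, assuming baselines are kept per user, per network location, and per (weekday, hour) bucket; the key structure, the 10-sample minimum, and the 1.5x median threshold are all illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical sketch: baselines are kept per user, per network location,
# per (weekday, hour) bucket, so Monday 09:00 at the office is only ever
# compared with other Monday 09:00 office measurements.
baselines: dict[tuple, list[float]] = defaultdict(list)


def baseline_key(user: str, location: str, ts: datetime) -> tuple:
    return (user, location, ts.weekday(), ts.hour)


def record(user: str, location: str, ts: datetime, auth_ms: float) -> None:
    baselines[baseline_key(user, location, ts)].append(auth_ms)


def is_anomalous(user: str, location: str, ts: datetime, auth_ms: float) -> bool:
    """Flag a measurement only if it is well outside that user's own normal
    for this location and time bucket (1.5x the median is an assumed threshold)."""
    history = baselines[baseline_key(user, location, ts)]
    if len(history) < 10:  # not enough history yet to judge
        return False
    return auth_ms > 1.5 * median(history)
```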
The same applies to the different services. Each workload/service consists of multiple functions and APIs that need to be accessible for the service to be usable, sometimes individually and sometimes in sequence. Understanding the intricacies of the connectivity between them, and being able to track them, allows us to indicate whether an outage affects, for instance, all of OneDrive (including the web interface) or only the desktop client, giving you the ability to redirect your users when only one of the two clients is affected.
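The OneDrive example could be modelled as a simple dependency map from client surfaces to the endpoints they need; the endpoint names and the map itself are hypothetical and only meant to show the idea:

```python
# Hypothetical dependency map: which monitored endpoints each OneDrive
# client surface needs in order to work.
ONEDRIVE_DEPENDENCIES = {
    "web interface": {"auth", "graph_files_api", "web_frontend"},
    "desktop client": {"auth", "graph_files_api", "sync_service"},
}


def affected_surfaces(failing_endpoints: set[str]) -> list[str]:
    """Return the client surfaces that depend on at least one failing endpoint."""
    return [
        surface
        for surface, deps in ONEDRIVE_DEPENDENCIES.items()
        if deps & failing_endpoints
    ]


# Only the sync service is down: the desktop client is affected,
# so users can be redirected to the web interface.
print(affected_surfaces({"sync_service"}))  # ['desktop client']
```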
The above examples are just a few of the many criteria and advanced calculations that come into play while calculating the user’s ‘expected normal’. In reality it consists of hundreds of millions of data points that are calculated and considered every minute.
What scores do we identify?
We score the user experience both for individual users and the organization.
- User experience: This score indicates how the user’s current experience compares to their normal experience in similar circumstances.
- Organization-wide user experience: This score helps you quickly identify whether there are cohorts or sections of your organization having a less-than-optimal experience, see which services are impacted, and spot trends (a minimal aggregation sketch follows this list).
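As a sketch, the organization-wide view can be thought of as grouping per-user scores by cohort (site, department, and so on); the grouping key and plain average below are assumptions for illustration:

```python
from collections import defaultdict
from statistics import mean


def cohort_scores(user_scores: dict[str, float], cohort_of: dict[str, str]) -> dict[str, float]:
    """Aggregate per-user experience scores into per-cohort averages
    (e.g. by site or department) to spot sections of the organization
    with a below-normal experience. Purely illustrative."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for user, score in user_scores.items():
        buckets[cohort_of.get(user, "unknown")].append(score)
    return {cohort: round(mean(scores), 1) for cohort, scores in buckets.items()}


print(cohort_scores(
    {"alice": 92.0, "bob": 58.0, "carol": 61.0},
    {"alice": "HQ", "bob": "Branch A", "carol": "Branch A"},
))  # {'HQ': 92.0, 'Branch A': 59.5}
```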
Additionally, we score the Teams call quality:
- Call quality score: Based on our unique data, which gives us minute-by-minute call metrics while the call is ongoing, this score indicates the overall quality of a call based on the individual participants’ experiences. For this score we weigh the impact that disturbances during the call have on the users themselves and on others: for example, a user’s poor outgoing video or audio quality while presenting (which impacts everyone) versus a user’s poor reception of incoming audio/video while in listen-only mode (which impacts only them). A weighted sketch of this idea follows below.
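A minimal sketch of such a weighting, assuming per-participant sent/received quality values on a 0-100 scale and a higher (assumed) weight for a presenter’s outgoing quality:

```python
def call_quality_score(participants: list[dict]) -> float:
    """Sketch of a weighted call score (assumed weighting, for illustration only).

    Outgoing (sent) problems while presenting affect everyone on the call,
    so they are weighted more heavily than incoming (received) problems,
    which affect only the listener.
    """
    total, weights = 0.0, 0.0
    for p in participants:
        # p example: {"sent": 90.0, "received": 70.0, "presenting": False}
        sent_weight = 3.0 if p["presenting"] else 1.0  # assumed weights
        total += sent_weight * p["sent"] + 1.0 * p["received"]
        weights += sent_weight + 1.0
    return round(total / weights, 1)


print(call_quality_score([
    {"sent": 60.0, "received": 95.0, "presenting": True},   # presenter with poor outgoing video
    {"sent": 95.0, "received": 80.0, "presenting": False},  # listener with choppy incoming audio
]))  # 75.0
```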
Requirements & considerations
To provide accurate user or organization-wide scoring, we ideally need a minimum of one week of collected data, and for accurate organizational scoring, ideally at least 100 monitored users.