---
title: Vendor Overview | Clarative
description: The Vendor Overview gives you a description of the vendor, an Operational Risk Score, a risk report of all open risk items, and a history of risk events by severity.
---

## Operational Risk Score

The Operational Risk Score is an overall score assigned to a vendor based on Clarative's operational risk data. To ensure your score is contextually accurate, be sure to [configure SLAs](/advanced/sla_configuration/index.md) and [Synthetic Monitoring](/advanced/synthetic_monitoring/index.md) to match your organization's needs.

![Operational Risk Score](/_astro/operational_risk_score.CjbrFEGc_2gcHor.webp)

> **Note**
> The Operational Risk Score for an onboarded vendor is measured using data within your environment, so the score will change as you configure your environment. You'll know you're seeing this score when you see a `Your Score` tag.
>
> In the Explorer tab of the Discover page, the Operational Risk Score is measured using all data that Clarative collects for that vendor. You'll know you're looking at this score when you see a `Public Score` tag.

### Operational Risk Score Breakdown

The Operational Risk Score is a weighted combination of several subscores. Each subscore ranges from 0 (worst) to 100 (best), with the weights listed below; a sketch of how they combine follows this list.

- **Reported Incident Frequency (20% weight)**
  - Measures the average number of major or critical incidents per month using logarithmic scaling against an industry benchmark across all of the vendors Clarative measures, so lower frequency yields a higher score.
  - For example, if the average vendor reports 10 incidents per month (this benchmark may vary), a vendor reporting 10 incidents per month scores 50, a vendor reporting 3 incidents per month scores around 75, and a vendor reporting 27 incidents per month scores around 25.
- **Reported Downtime (20% weight)**
  - Evaluates the average hours of downtime per month using logarithmic scaling against an industry benchmark across all of the vendors Clarative measures, so less downtime yields a higher score. The formula mirrors Reported Incident Frequency.
  - For example, if the average vendor reports 10 hours of downtime per month (this benchmark may vary), a vendor reporting 10 hours of downtime per month scores 50, a vendor reporting 3 hours per month scores around 75, and a vendor reporting 27 hours per month scores around 25.
- **Incident Communication Quality (10% weight)**
  - Assesses the thoroughness of incident communications by comparing the detail in a vendor's incident reports against the average across all of the vendors Clarative measures, rewarding more detailed reports. The result is normalized to a 0–100 scale, where 100 means the vendor's reports are as detailed as the average incident report.
- **Heartbeat Success Rate (20% weight)**
  - Tracks the reliability of monitored endpoints against a baseline requirement of 99% uptime, scoring vendors on their ability to maintain availability above this threshold.
- **Heartbeat Latency (20% weight)**
  - Measures the average response time of monitored endpoints, awarding a perfect score (100) for latency under 200 ms and decreasing linearly to 0 at 5,000 ms.
- **Monitoring Coverage (10% weight)**
  - Evaluates the breadth and quality of monitored URLs, with higher-value endpoints (APIs, authentication endpoints, web applications) weighted more heavily than documentation or marketing sites. If no endpoints are monitored, this score is 0; if several high-value endpoints are monitored, this score is 100.
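To make the weighting concrete, here is a minimal Python sketch of how the subscores above could roll up into a single Operational Risk Score. It is an illustration only, not Clarative's implementation: the function names, the log base, the benchmark defaults, and the sample inputs are assumptions chosen to be consistent with the weights and examples documented above, and the Incident Communication Quality, Heartbeat Success Rate, and Monitoring Coverage subscores are passed in as precomputed 0–100 values rather than derived.

```python
import math

# Documented weights for each subscore (sum to 1.0).
WEIGHTS = {
    "incident_frequency": 0.20,
    "reported_downtime": 0.20,
    "communication_quality": 0.10,
    "heartbeat_success": 0.20,
    "heartbeat_latency": 0.20,
    "monitoring_coverage": 0.10,
}

def clamp(score: float) -> float:
    """Keep a subscore within the documented 0-100 range."""
    return max(0.0, min(100.0, score))

def log_benchmark_score(value: float, benchmark: float, base: float = 3.0) -> float:
    """Logarithmic scaling against an industry benchmark.

    The benchmark value maps to 50; values roughly a third of the benchmark
    map to ~75 and values roughly three times the benchmark map to ~25,
    matching the documented examples. The log base is an assumption, not
    Clarative's published formula.
    """
    if value <= 0:
        return 100.0
    return clamp(50.0 - 25.0 * math.log(value / benchmark, base))

def latency_score(avg_latency_ms: float) -> float:
    """100 at or below 200 ms, decreasing linearly to 0 at 5,000 ms."""
    return clamp(100.0 * (5000.0 - avg_latency_ms) / (5000.0 - 200.0))

def operational_risk_score(
    incidents_per_month: float,
    downtime_hours_per_month: float,
    avg_latency_ms: float,
    communication_quality: float,   # 0-100, supplied directly in this sketch
    heartbeat_success: float,       # 0-100, supplied directly in this sketch
    monitoring_coverage: float,     # 0-100, supplied directly in this sketch
    incident_benchmark: float = 10.0,   # illustrative; real benchmarks may vary
    downtime_benchmark: float = 10.0,   # illustrative; real benchmarks may vary
) -> float:
    subscores = {
        "incident_frequency": log_benchmark_score(incidents_per_month, incident_benchmark),
        "reported_downtime": log_benchmark_score(downtime_hours_per_month, downtime_benchmark),
        "communication_quality": clamp(communication_quality),
        "heartbeat_success": clamp(heartbeat_success),
        "heartbeat_latency": latency_score(avg_latency_ms),
        "monitoring_coverage": clamp(monitoring_coverage),
    }
    return sum(WEIGHTS[name] * score for name, score in subscores.items())

# A vendor sitting exactly at both benchmarks, with fast endpoints and
# full monitoring coverage, lands around 76.
print(round(operational_risk_score(10, 10, 150, 70, 95, 100), 1))
```

Running the sketch with a vendor at both benchmarks, sub-200 ms latency, and full coverage prints 76.0, which illustrates how heavily the heartbeat-related subscores (40% combined weight) influence the final number.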
> **Tip**
> You can improve the Monitoring Coverage score in your environment by adding additional [Synthetic Monitors](/advanced/synthetic_monitoring/index.md).

---

## Need Help?

Contact support at ****.