More Info, Reports
You can read more about our profile research here: https://blog.csrhub.com/2012/01/moving-beyond-green-the-rainbow-within-csr.html.
We face several methodological issues when we approach our rating task:
A. Each source has its own perspective and reporting schema. We currently have more than 8,000 different rating elements in our system. Within the area of “Board performance,” we may have sources who report that a company’s board is “good,” “C+,” or “2.3”; that a board is “diverse”; that “X% of the board members are women”; that a board meets “Y times per year”; that “the board is (or is not) involved in the sustainability process”; etc. To make sense of this mess, we have to map each element we ingest into one of the 12 parts of our schema. To make the system “fair,” we try to pull in roughly equal numbers of elements and data items for each of the 12 parts.
B. No source covers all companies. We already cover 18,057 companies in 136 countries. Our goal is to cover hundreds of thousands of companies in every country in the world. We will never have a single standard set of data against which we could scale and adjust everything else.
C. Sources tend to be biased or to have discontinuous ratings distributions. Once we have mapped a source, we can compare the ratings it gives for a company against all of the other ratings we have for that company. We can quickly determine if a source is biased positively or negatively. Some sources have only one value (e.g., “Yes, the company does have a policy”). We adjust each source’s results to fit the overall distribution for each subcategory and then adjust the overall distribution to reflect the input of the sources.
D. Some sources are more accurate than others. We see this again through a comparison with our other sources. When a source has poor accuracy, we reduce its weight in our system.
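The bias and accuracy adjustments in C and D above can be sketched roughly as follows. This is an illustrative simplification, not CSRHub's actual algorithm: `normalize_source` rescales one source's scores to match the overall distribution for a subcategory, and `weighted_rating` combines adjusted scores while down-weighting less accurate sources. Both function names and the specific weighting scheme are hypothetical.

```python
import statistics

def normalize_source(source_scores, overall_mean, overall_stdev):
    """Shift and scale one source's scores so its distribution matches the
    overall distribution for a subcategory (illustrative sketch only)."""
    values = list(source_scores.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values) or 1.0  # avoid division by zero
    return {
        company: overall_mean + (score - mean) / stdev * overall_stdev
        for company, score in source_scores.items()
    }

def weighted_rating(ratings, weights):
    """Combine normalized ratings for one company, giving less weight to
    sources that have shown poorer accuracy (hypothetical weights)."""
    total = sum(weights[source] * score for source, score in ratings.items())
    return total / sum(weights[source] for source in ratings)
```

For example, a source that only reports scores between 1 and 3 would be stretched onto the overall 0–100 scale before its input is blended with other sources.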
The above processes involve massive amounts of computation—we are following a “Big Data” approach. It would be impossible to explain to an outside user exactly why ten sources gave a score of 46 for one company and two of those sources plus six others gave a score of 53 for another. Not only would the conversions appear arbitrary without the support of our analysis (we currently have 180 million data points in our system), but the conversions carry relatively little information. The most valuable information is the data we input from the sources (which we share) and our inventory of which sources track each company (which we also share).
We feel our industry and country averages also contain value. Due to the nature of our approach, all of our ratings are accurate to within 1.8 points. (In other words, if one company has a score of 51 and another has a score of 49, we are 95% confident that the two scores are different.)
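The comparison in the parenthetical can be expressed as a one-line check. The helper below is hypothetical, not part of any CSRHub API; it simply encodes the stated 1.8-point accuracy band:

```python
def scores_differ(score_a: float, score_b: float, accuracy: float = 1.8) -> bool:
    """Treat two ratings as distinguishable (at ~95% confidence) when they
    differ by more than the stated 1.8-point accuracy band."""
    return abs(score_a - score_b) > accuracy
```

A 51 and a 49 differ by 2 points, which exceeds the band, so the two companies' scores count as genuinely different; a 50 and a 49 do not.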
There are many reasons we took this approach:
A. The available facts are not truly comparable between companies. For instance, one company may include supplier contributions in its carbon use. Another may include employee travel. A third could include the effect of its mix of power sources. A fourth might not include its subsidiaries. A normal user who sees these four sets of numbers cannot draw a conclusion from them about which company has better carbon efficiency. In contrast, our expert sources can draw conclusions based on this type of data. By combining the input of several expert sources (after removing any biases we detect in their methodologies) we get a clear signal about which company appears to have better carbon management policies.
B. Each stakeholder group has a different focus and interest. A governance source may approve of a CEO who carefully supervises all of her or his operations. An employee source may dislike the same CEO, because she/he is a micromanager who interferes in day-to-day operations. A labor source will care less about management style and more about whether the CEO allows unions to recruit members and participate in workforce management. Those who try to collect and report bare facts miss these nuances and obscure the fact that sustainability performance must be measured within a context of personal and moral values.
C. Some of our sources charge large amounts for the use of their data. They are willing to allow us to ingest it into our system, but would be out of business if we passed through all the detailed data they have labored to collect.
If a company improves its policies and puts more emphasis on sustainability, its score in our system should eventually rise. However, the speed with which this happens may depend upon how well the company communicates the changes it has made. It may also depend on how each of the various stakeholder groups views these changes.
Our tool does not tell you how a company is doing. It tells you how everyone else who shares your view of the world (as described in your profile) would think it was doing, if they had access to all 610 of our sources. Our tool is designed to help sustainability practitioners get feedback from the “opinion marketplace” on their company’s perceived performance.
You can use our system to discover competitors who rank well and then do your own analysis of how they have attracted attention and praise for their work. As an alternative, CSRHub could introduce you to one of the sustainability consulting firms with whom we have partnered. Our partners could review your internal systems and programs, benchmark your performance against that of your competitors, and then advise you of the best ways to disclose and promote your sustainability achievements.
Search for Information
B. Go to the “Data” menu in Excel and pick the “Text to Columns” conversion.
C. In the conversion wizard, tell it to use “semicolons” as a “delimiter.”
We use semicolons as a delimiter because many company names contain commas and other special characters. Fortunately, to date, none contain semicolons! This should result in a nicely formatted table with headings and each item in a sortable format.
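If you prefer to script the import instead of using Excel’s wizard, Python’s standard `csv` module handles the same semicolon-delimited layout. The sample text below is invented for illustration, not real export data:

```python
import csv
import io

# Invented sample in the semicolon-delimited layout described above.
# Note the comma inside the first company name: it survives parsing
# because the delimiter is ';', not ','.
raw = (
    "Company Name;Country;Overall Rating\n"
    "Smith, Jones & Co.;United States;57\n"
    "Acme Holdings;Canada;49\n"
)

rows = list(csv.reader(io.StringIO(raw), delimiter=";"))
header, data = rows[0], rows[1:]
print(header)   # column headings
print(data[0])  # first data row, comma kept inside the name
```

Each row comes back as a list of strings keyed by position against the header, ready to sort or load into a spreadsheet.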