If, like me, you are on the mailing lists of some of the popular blogs, you will often come across articles with titles along the lines of “Lessons from [insert latest corporate scandal]” or “How You Measure [insert buzzword such as ‘integrity’ or ‘due diligence’]”. The fact that I have been consistently disappointed by the contents of these writings never seems to deter me from optimistically believing that, this time, I will see something different! Alas, my optimism has yet to be vindicated. These articles regularly reflect a common plague in the compliance space: a lack of precision and intellectual rigor.

Take one of many such articles I recently read. (I deliberately do not cite the articles I am critiquing here because my intention is not to criticize any particular writers, but to make an observation about a common phenomenon of which these are merely examples.) It offers a diagnosis of a major bank’s public scandal based not on data, testing, or skillful interviews of the people involved, but on references to a half dozen news articles, and it heaps clichés upon strung-together generalizations and speculations as if one were the natural conclusion of the other.

For example, the article charges that “corporate executives were turning a blind eye to fraudulent activity because they were making money.” This charge appears to be supported by the next sentence: “it was a common practice known throughout [the bank], to push sales…to meet the goals set by executives and get compensated for it.”

What sales organization do you know that does not push sales to meet ambitious goals? Is the implication that all such organizations are “turning a blind eye” to fraud?

The article observes that “it appears that [the bank]’s overall business culture was one of ‘profit before risk’”. What does that mean exactly? Was it profit before all risks, or just the particular type of risk involved? If it was all risks, wouldn’t there be other manifestations of this approach? If it was just this risk, how did it get distinguished from other types of risks in practice? What study was done on the business culture to come to this observation?

Another article, on a popular compliance blog, of the “measuring” variety claims that “[w]hen the culture and core values of an organization are not only strong but also reinforced daily,” a list of good things follows, including higher employee engagement, more innovation, faster decisions, better reputation, fewer crises, and higher profits. There is no definition of what culture is, or of what kind of culture is linked to these improved results. There is no indication of how the alleged improvements are measured. There is no data or controlled study to support these claims of correlation, other than a reference to a commercial survey that is almost equally vague. The article goes on to claim that the “[l]ack of integrity elevates the risk of negative surprises,” and the proof offered for such “negative surprises” includes:

  • “Target’s profit fell 34.3 percent after it’s [sic] data breach.” (Fact check: Target’s data breach was discovered in mid-December 2013. The stock price was $62.15 on December 27, 2013, fell to a low of $56.06 on February 24, 2014, but climbed back to $62.54 by February 28, 2014 and continued to ascend through July 2015.)
  • “Well’s [sic] Fargo remains a train wreck.” What exactly does “train wreck” mean? Note that Wells Fargo’s stock price had risen from $45.09 on September 27, 2016 to $53.84 on the day that article was published in April 2017.

Yet another blog post this past week proclaims that compliance/ethics programs offer the “outcomes” of “Greater trust/More integrity/Stronger Ethical cultures.” How are these things measured? How was the causation (that these are the outcomes of the programs) established? How do we know, in any given case, whether a good compliance program is the cause of stronger ethical culture, or a reflection of it?

I forwarded one of these articles to a business professor at a prestigious business school, and his reaction was “We wouldn’t let even an undergrad get away with that kind of spurious statistical inference! Even potentially more worrisome than a lack of quantitative measurement is when empirical analysis is carelessly employed to draw unsubstantiated conclusions.”

In my legal and compliance careers, I have been fortunate to have been trained to be precise in my writing: instead of using conclusory or inflammatory language, use facts; use data to back up – or better yet, replace – adjectives; distinguish between correlation and causation:

  • Instead of “this transaction violated that law,” state “the following facts about this transaction trigger the need for further analysis under [the specific statute]”;
  • Instead of “this type of payment is rare” (Rare according to whom or compared to what? Rare in which part of the company?), state “this type of payment occurred five times in the last twelve months in this particular business unit, out of X number of total payments.”
  • The fact that infant death rates are higher in countries with higher corruption indices (correlation) does not mean that corruption causes infant deaths (causation) – a claim I once read in the literature of a prominent NGO. Both may, for example, be manifestations of another cause, such as poverty or political or economic systems; a toy illustration of this confounding effect follows this list.
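To make that last point concrete, here is a minimal, hypothetical sketch (in Python, with made-up numbers rather than real corruption or health data). A single hidden factor, labeled “poverty,” drives both a corruption index and an infant mortality rate; the two measures end up strongly correlated even though, by construction, neither one causes the other.

```python
# Hypothetical illustration only: no real data, no real causal model.
# A hidden common cause ("poverty") drives two outcomes; the outcomes
# correlate strongly even though neither causes the other.
import random

random.seed(42)

pairs = []
for _ in range(1000):
    poverty = random.gauss(0, 1)                              # hidden common cause
    corruption_index = 0.8 * poverty + random.gauss(0, 0.5)   # depends on poverty only
    infant_mortality = 0.7 * poverty + random.gauss(0, 0.5)   # depends on poverty only
    pairs.append((corruption_index, infant_mortality))

def pearson(data):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in data) / n
    std_x = (sum((x - mean_x) ** 2 for x, _ in data) / n) ** 0.5
    std_y = (sum((y - mean_y) ** 2 for _, y in data) / n) ** 0.5
    return cov / (std_x * std_y)

# Prints a correlation of roughly 0.7, despite zero direct causal link.
print(f"correlation between corruption index and infant mortality: {pearson(pairs):.2f}")
```

The same correlation would appear no matter which of the two variables one chose to call the “cause” – which is exactly why a correlation alone cannot carry a causal claim.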

I value precision as a matter of credibility: rather than hyperbolizing or misleading someone into agreement, I would rather they agree with me because of thoughtful reasoning based on verifiable facts and data. I find that I earn more credibility when I do not simply proclaim something “good” or “bad,” “right” or “wrong,” but carefully lay out the benefits, deficiencies, uncertainties, or problems, in quantifiable terms whenever possible. Most of all, I find that exaggeration undermines credibility.

As I read, on at least a monthly if not weekly basis, the types of writing I described above, I am pained to see well-intended advocacy being severely compromised by a lack of precision. It becomes difficult for me to decide whether I agree with the ultimate points being made, because I am distracted by the lack of sound support or logic for those positions. To be clear, I am not advocating that all blog posts or marketing materials meet the precision standard of law reviews or scientific journals. I do, however, believe statements should be made with substantiation, evidence, and accuracy, and conclusions should rest on clearly articulated logic. You can still speculate, surmise, intuit, or opine, so long as you make clear that is what you are doing and do not present your conjectures as facts. Instead of generalizations and exaggerations, be specific and concrete. If you are making an argument that is not substantiated by data or facts, I am more inclined to hear it if you acknowledge that, instead of citing data or “facts” that do not actually support your position.

It would be a tremendous service to the compliance profession if we all strove to hold each other accountable for precision, accuracy, and intellectual rigor in the statements we make. We can do this by raising questions as readers, audience members, clients and customers, teachers, mentors, supervisors, or simply as professionals reviewing the work of a peer: “What do you mean by that?” “What is the evidence that A causes B?” “How did you measure or verify that?” “What’s your source?” And when you get the answers, ask more questions: “How did you control for the other variables in the equation?” “How did you account for the potential bias in this form of measurement/reporting?”

On my first day of law school, nearly three decades ago, the dean told the incoming students that the secret to success was “questions.” I agree with her. Question, probe, test, and then question some more. That’s how we grow.

Hui Chen (Twitter @HuiChenEthics)
