
How CEO lies can boost stock ratings and deceive even respected financial analysts

The multibillion-dollar collapse of FTX – the prominent cryptocurrency exchange whose founder is now awaiting trial on fraud charges – serves as a stark reminder of the perils of deception in the financial world.

FTX founder Sam Bankman-Fried’s lies date back to the company’s earliest days, prosecutors say. Along the way, it’s alleged, he lied to both customers and investors in what U.S. Attorney Damian Williams called “one of the largest financial frauds in American history.”

How could so many people apparently have been fooled?

A new study published in the Strategic Management Journal sheds some light on the question. In it, my colleagues and I found that even experienced financial analysts fall for CEOs’ lies – and that the most respected analysts may be the most gullible.

Financial analysts provide expert advice to help companies and investors make money. They forecast how much companies will earn and recommend whether to buy or sell their shares. By steering money toward good investments, they help grow not just individual businesses but the economy as a whole.

Although financial analysts are paid for their advice, they’re not oracles. As a professor of management, I wondered how often these professionals get duped by lying executives – so my colleagues and I used machine learning to find out. We developed an algorithm, trained on transcripts of S&P 1500 earnings calls from 2008 to 2016, that can reliably detect deception in 84% of cases. Specifically, the algorithm flags linguistic patterns that tend to occur when a person is lying.
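
For readers curious what this kind of text classifier looks like in practice, here is a minimal sketch in Python. It is not the model from our study – the actual features and training setup aren’t described here – and the file name and column names are hypothetical; it simply illustrates the general approach of learning linguistic patterns that separate deceptive transcripts from truthful ones.

```python
# A minimal sketch, NOT the study's actual model: the paper's features and
# training setup aren't described here. "calls.csv" and its columns are
# hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical dataset: one row per earnings call, with the executive's
# remarks and a label for whether the call was later tied to misreporting.
data = pd.read_csv("calls.csv")  # columns: "transcript", "deceptive"

X_train, X_test, y_train, y_test = train_test_split(
    data["transcript"], data["deceptive"], test_size=0.2, random_state=0
)

# TF-IDF over words and word pairs stands in for "linguistic patterns";
# logistic regression keeps the classifier simple and inspectable.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

In a real setting, the labels would come from later revelations of fraud, such as enforcement actions or restatements, and accuracy would be assessed far more carefully than with a single held-out split.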

Our results were striking. We found that analysts were significantly more likely to issue “buy” or “strong buy” recommendations after listening to deceptive CEOs than after listening to honest ones – by an average of nearly 28 percentage points.

We also found that highly regarded analysts fell for CEOs’ lies more often than their lesser-known counterparts did. In fact, analysts named “all-stars” by the industry publication Institutional Investor were 5.3 percentage points more likely than their less celebrated peers to upgrade habitually dishonest CEOs.

While we applied this technology to gain insight into finance for academic research, its broader application raises a number of difficult ethical questions about using artificial intelligence to measure psychological constructs.

Prone to trust

These results may seem counterintuitive: Why do skilled financial analysts consistently fall for lying executives? And why do the most reputable analysts appear to have the worst results?

These findings reflect the natural human tendency to assume that others are honest – the so-called “truth bias.” This habit makes analysts as vulnerable to lies as anyone else.

Moreover, we found that higher status fosters greater truth bias. First, “celebrity” analysts often grow overconfident and entitled as their prestige increases. They come to believe they’re unlikely to be deceived, which leads them to take CEOs at their word. Second, these analysts tend to have closer relationships with CEOs, which research shows increases truth bias. This makes them even more vulnerable to deception.

Given this vulnerability, firms may want to reassess the credibility of “all-star” designations. Our study also highlights the importance of accountability in governance and the need for strong institutional systems to counter individual biases.

AI “lie detector”?

The tool we developed for this study could have applications well beyond the business world. We validated the algorithm using fraudulent transcripts, retracted medical journal articles, and deceptive YouTube videos. It could easily be applied in a variety of contexts.

It’s important to note that the tool doesn’t directly measure deception; it identifies linguistic patterns associated with lying. This means that although it’s highly accurate, it’s susceptible to both false positives and false negatives – and false accusations of dishonesty in particular could have devastating consequences.
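
To see why this matters, consider a rough back-of-the-envelope calculation. The numbers below – a 5% base rate of deception and 84% accuracy on both classes – are assumptions chosen purely for illustration, but they show how even an accurate detector can produce mostly false accusations when honest speakers vastly outnumber dishonest ones.

```python
# Back-of-the-envelope arithmetic with assumed numbers, for illustration only:
# a 5% base rate of deception and 84% accuracy on each class.
honest, deceptive = 9_500, 500        # 10,000 calls, 5% deceptive (assumed)
sensitivity = specificity = 0.84      # per-class accuracy (assumed)

true_flags = deceptive * sensitivity          # deceptive calls correctly caught
false_flags = honest * (1 - specificity)      # honest speakers wrongly flagged

print(f"Correctly flagged: {true_flags:.0f}")   # 420
print(f"Falsely accused:   {false_flags:.0f}")  # 1520
print(f"Share of flags that are false: "
      f"{false_flags / (true_flags + false_flags):.0%}")  # 78%
```

Under these assumptions, nearly four out of five people the tool flags would be innocent – a classic base-rate problem that headline accuracy figures can obscure.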

What’s more, tools like this struggle to distinguish socially beneficial “white lies” – which foster a sense of community and emotional well-being – from more serious lies. Indiscriminately flagging every deception could disrupt complex social dynamics and lead to unintended consequences.

These issues would need to be addressed before this kind of technology is widely adopted. But that future is closer than many might imagine: Companies in fields such as investing, security, and insurance are already beginning to use it.

Many questions remain

The widespread use of artificial intelligence to detect lies would have profound social implications – most notably, it would make it harder for the powerful to lie without consequence.

That may sound like an unambiguously good thing. But while the technology offers undeniable advantages, such as early detection of threats or fraud, it could also usher in a perilous culture of transparency. In such a world, thoughts and emotions could become subject to measurement and judgment, eroding the sanctuary of mental privacy.

This study also raises ethical questions about using artificial intelligence to measure psychological characteristics, particularly where privacy and consent are concerned. Unlike traditional deception research, which relies on human subjects who consent to be studied, this AI model operates covertly, detecting nuanced linguistic patterns without the speaker’s knowledge.

The implications are staggering. In this study, for example, we developed a second machine-learning model to assess the level of suspicion in a speaker’s tone. Imagine a world where social scientists could build tools to assess any facet of your psyche and apply them without your consent. Not very appealing, right?
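
To make that concrete, here is a deliberately crude sketch of what tone scoring could look like. The study’s second model is not described here, so this hypothetical version just counts words from a hand-picked “suspicion” lexicon – a stand-in for whatever richer features the real model relies on.

```python
# Purely hypothetical sketch: the study's second model isn't specified here,
# so a tiny hand-picked lexicon stands in for whatever features it uses.
import re

SUSPICION_CUES = {"really", "actually", "sure", "clarify", "explain",
                  "why", "doubt", "concern", "skeptical"}

def suspicion_score(utterance: str) -> float:
    """Fraction of tokens that match the (assumed) suspicion lexicon."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    if not tokens:
        return 0.0
    return sum(token in SUSPICION_CUES for token in tokens) / len(tokens)

# Invented example questions an analyst might ask on a call:
print(suspicion_score("Can you clarify why margins actually fell this quarter?"))  # ~0.33
print(suspicion_score("Congratulations on a great quarter."))                      # 0.0
```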

As we enter a new era of artificial intelligence, advanced psychometric tools offer both promise and peril. These technologies could revolutionize business by providing unprecedented insight into human psychology. They could also violate people’s rights and destabilize society in surprising and disturbing ways. The decisions we make today – about ethics, oversight, and responsible use – will set the course for years to come.

This article was originally published on theconversation.com.
