Machine Learning

Learn about the risks of Adversarial AI in money, markets, and ML

It is hard to overstate the importance of artificial intelligence (AI) and its subset, machine learning, in today's stock market.

While artificial intelligence (AI) refers to systems that can perform tasks that ordinarily require human intelligence, machine learning (ML) involves learning patterns from data, improving a machine's ability to make forecasts and decisions.

One of the most common machine learning applications in the stock market is algorithmic trading. ML models recognize patterns in massive volumes of financial data and then execute thousands upon thousands of transactions in fractions of a second. These models continuously learn and adjust their predictions and actions, which can occasionally produce phenomena such as flash crashes: episodes in which particular patterns trigger a feedback loop that sends parts of the market into a precipitous freefall.
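To make that feedback-loop mechanism concrete, here is a toy simulation, not a real trading system, in which every parameter is invented for illustration: a crowd of identical momentum-following agents all sell after a sharp price drop, and their collective selling amplifies the decline into a flash-crash-like collapse.

```python
# Toy simulation of a flash-crash-style feedback loop (illustrative only).
import random

price = 100.0
history = [price]
agents = 50          # identical algorithmic traders following the same rule
threshold = -0.5     # each agent sells if the last price move fell below this
impact = 0.02        # price impact of a single sell order (assumed)

for step in range(200):
    move = random.gauss(0, 0.4)  # ordinary market noise
    last_change = history[-1] - history[-2] if len(history) > 1 else 0.0
    sellers = agents if last_change < threshold else 0
    move -= sellers * impact     # collective selling pushes the price down further
    price = max(price + move, 0.0)
    history.append(price)

print(f"start: {history[0]:.2f}  end: {history[-1]:.2f}  low: {min(history):.2f}")
```

Once one noisy dip crosses the threshold, every agent reacts the same way, and the reaction itself triggers the next round of selling.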

Despite its shortcomings, algorithmic trading has become crucial to our financial system. It has a huge upside, which is another way of saying it earns a lot of money for certain individuals. According to the technology services firm Exadel, algorithmic trading could save banks US$1 trillion by 2030.

However, such dependence on machine learning models in finance is not without hazards, and the concerns go beyond flash crashes.

Adversarial attacks are a substantial and underappreciated threat to these systems. They arise when malicious actors manipulate the input data fed to an ML model, causing the model to generate incorrect predictions.
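As a rough illustration, using synthetic data and a deliberately simple linear classifier (real attacks target far more complex models), an attacker who knows or can estimate a model's parameters can nudge an input just enough to flip the model's prediction:

```python
# Sketch of an evasion-style adversarial attack on a toy linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two synthetic clusters standing in for "normal" and "suspicious" activity
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X, y)

x = X[350]                                # a sample from the "suspicious" cluster
eps = 1.5                                 # attacker's perturbation budget (illustrative)
# FGSM-style step: move each feature against the model's weights
x_adv = x - eps * np.sign(model.coef_[0])

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
```

The perturbed input is still close to the original, but it now sits on the wrong side of the model's decision boundary.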

One type of adversarial attack is "data poisoning," in which hostile actors inject "noise", or misleading data, into a model's training data. Training on contaminated data can lead the model to misclassify whole categories of inputs. For example, a credit card fraud detection system may flag fraudulent activity where none exists.
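A minimal sketch of the idea, using entirely synthetic data and a toy fraud-detection setup (the model, numbers, and labels are all invented for illustration): an attacker relabels a portion of legitimate training transactions as fraudulent, and the model trained on the contaminated data starts flagging legitimate activity as fraud far more often than a model trained on clean data.

```python
# Toy data-poisoning sketch: contaminated labels push a fraud model
# toward flagging legitimate transactions as fraudulent.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic "legitimate" (0) and "fraudulent" (1) transactions
X = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(1.5, 1, (500, 4))])
y = np.array([0] * 500 + [1] * 500)

clean_model = LogisticRegression().fit(X, y)

# Poisoning: the attacker relabels 60% of legitimate examples as fraudulent
y_poisoned = y.copy()
legit_idx = rng.choice(np.where(y == 0)[0], size=300, replace=False)
y_poisoned[legit_idx] = 1
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Fresh test data drawn from the same distributions
X_test = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(1.5, 1, (200, 4))])
y_test = np.array([0] * 200 + [1] * 200)
legit_test = X_test[y_test == 0]

print("legitimate transactions flagged as fraud, clean model:   ",
      clean_model.predict(legit_test).mean())
print("legitimate transactions flagged as fraud, poisoned model:",
      poisoned_model.predict(legit_test).mean())
```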

Such manipulations are more than a hypothetical menace. Data poisoning and adversarial attacks have far-reaching consequences for machine-learning applications, including financial forecasting models. Researchers from the University of Illinois, IBM and other institutions demonstrated the susceptibility of financial forecasting models to adversarial attacks in a study. According to their research, these attacks could lead to poor trading decisions, causing losses of 23% to 32% for investors. The study emphasizes the potential severity of these dangers and the importance of strong defenses against adversarial attacks.

The financial industry's response to these attacks has frequently been reactive: a game of whack-a-mole in which defenses are raised only after an attack. But because these dangers are inherent in the structure of ML algorithms, a more proactive strategy is the only way to address this continuing issue meaningfully.

Financial institutions must adopt strong and efficient testing and assessment techniques to detect potential vulnerabilities and mitigate these threats. That might entail rigorous testing protocols, the use of "red teams" to simulate attacks, and constant updating of models to ensure they have not been compromised by hostile actors or bad data.
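As one concrete, deliberately simplified example of what such testing could look like (again using synthetic data and a toy linear model, with the perturbation sizes chosen arbitrarily), the sketch below measures how quickly a model's accuracy degrades as its inputs are perturbed against it, the kind of check a red team might automate before deployment:

```python
# Red-team-style robustness check: accuracy under worst-case input perturbations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (400, 4)), rng.normal(1.5, 1, (400, 4))])
y = np.array([0] * 400 + [1] * 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

def worst_case_accuracy(model, X, y, eps):
    """Accuracy when each input is nudged against the model's weights (FGSM-style)."""
    signs = np.sign(model.coef_[0]) * np.where(y == 1, -1, 1)[:, None]
    return model.score(X + eps * signs, y)

for eps in (0.0, 0.2, 0.5, 1.0):
    print(f"eps={eps:.1f}  accuracy={worst_case_accuracy(model, X_te, y_te, eps):.2f}")
```

A model whose accuracy collapses under small perturbations would fail this check and be sent back for retraining or hardening before it touches real money.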