AI trading and the limits of EU law enforcement in deterring market manipulation


During the 2008 global financial crisis, the indicator issued warnings in early October 2008, just before the market collapsed. During the COVID-19 crisis, a sharp surge in the indicator was followed by the plunge in stock prices in March. As stock prices began to recover from the March low, the indicator signalled re-entry into the stock market. The model uses artificial intelligence to interpret risk signals based on the directional movement of the stock market, stock market risk, and relationships between the stock market and major asset groups. While AI is still developing, it can already be used to mitigate risk in some key areas. For example, machine learning can support more informed predictions about the likelihood of an individual or organization defaulting on a loan or a payment, and it can be used to build variable revenue forecasting models.
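As a minimal illustration of the loan-default use case, the sketch below fits a logistic regression to synthetic applicant data. The features (income, debt-to-income ratio, prior delinquencies), the synthetic labels, and the scikit-learn pipeline are illustrative assumptions, not the indicator or model discussed above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic applicant data; a real model would use actual credit histories.
rng = np.random.default_rng(0)
n = 5_000
income = rng.normal(60_000, 15_000, n)
dti = rng.uniform(0.05, 0.60, n)               # debt-to-income ratio
delinquencies = rng.poisson(0.3, n)            # prior missed payments

# Synthetic ground truth: higher DTI and past delinquencies raise default risk.
logit = -4 + 6 * dti + 0.8 * delinquencies - (income - 60_000) / 100_000
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([income, dti, delinquencies])
X_train, X_test, y_train, y_test = train_test_split(X, default, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
applicant = [[45_000, 0.45, 2]]                # income, DTI, prior delinquencies
print("Estimated default probability:", round(model.predict_proba(applicant)[0, 1], 3))
print("Hold-out accuracy:", round(model.score(X_test, y_test), 3))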

  • One would expect to find the starting points of such an evaluation in the burgeoning field of AI ethics.
  • Forex trading, one of the world’s largest and most liquid financial markets, relies heavily on technological advances and data-driven research to make sound judgments.
  • The speed at which most algorithmic high-frequency trading takes place means one errant or faulty algorithm can rack up millions in losses in a short period; a minimal loss-limit safeguard is sketched after this list.
  • In some cases, the adviser will be an AI program and the process will be carried out online.
  • In other cases, traders will work closely with a programmer to develop the system and ensure it functions properly.
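As a minimal sketch of the kind of safeguard hinted at in the list above, the snippet below implements a hypothetical daily loss-limit “kill switch” that blocks new orders once realised losses breach a threshold. The LossLimiter class, the threshold, and the way fills are recorded are illustrative assumptions, not a real broker or exchange API.

from dataclasses import dataclass

@dataclass
class LossLimiter:
    """Hypothetical circuit breaker: halt trading once daily losses exceed a cap."""
    max_daily_loss: float            # cap in account currency, e.g. 250_000.0
    realised_pnl: float = 0.0
    halted: bool = False

    def record_fill(self, pnl_change: float) -> None:
        # Update running P&L and trip the breaker if the loss cap is breached.
        self.realised_pnl += pnl_change
        if self.realised_pnl <= -self.max_daily_loss:
            self.halted = True

    def allow_order(self) -> bool:
        return not self.halted

limiter = LossLimiter(max_daily_loss=250_000.0)
limiter.record_fill(-300_000.0)      # a faulty algorithm bleeding money
print("New orders allowed:", limiter.allow_order())   # False: trading is halted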

Executives have a lot to learn from the multiyear efforts of institutions such as the OECD, which developed the first intergovernmental AI principles. The OECD principles promote innovative, trustworthy, and transparent AI that respects human rights, the rule of law, diversity, and democratic values, and that drives inclusive growth, sustainable development, and well-being. They also emphasize the robustness, safety, security, and continuous risk management of AI systems throughout their life cycles. So how should executives manage the existing and emerging risks of machine learning? Developing appropriate processes, increasing the savviness of management and the board, asking the right questions, and adopting the correct mental frame are important steps.

3.1. Data management, privacy/confidentiality and concentration risks

Thus, AI programmers and users might be unsure about the causal contribution of their actions to the negative collective outcome that is the very subject of moral evaluation. The “breach” of culpable causation between individual decisions and joint consequences widens the “responsibility gap” in the case of AI-induced systemic risks. Although most trades (ca. 80%) are automated today, advanced AI techniques still drive a rather minor subset of trades and investments in financial markets.

What Are the Risks of AI in Trading

Governance arrangements and contractual modalities are important in managing risks related to outsourcing, similar to those that apply to any other type of service. Finance providers need to have the skills necessary to audit and perform due diligence over the services provided by third parties. Over-reliance on outsourcing may also give rise to an increased risk of disruption of service, with potential systemic impact on markets.

Treat machine learning as if it’s human.

The Treasury report [29, p. 33] concluded that “analysis of participant-level data in the cash and futures markets did not reveal a clear, single cause of the price movement during the event window on October 15”. Still, the report highlighted the strong interdependence between human and algorithmic market players as an issue that should be watched to better understand future market crashes. At the same time, the ethics of complexity would advocate reflection on the limitations of epistemic efforts and the consideration of morally relevant non-knowledge. In this respect, an important task of an ethical intermediary would be to investigate what is not known now, what cannot be known in principle and will most likely never be known.


Work collaboratively with industry, stakeholders, other regulatory and supervisory authorities, and foreign counterparts to share information and understand emerging trends relating to digital financial risks. Embed an understanding of consumer decision-making and the impact of behavioural biases in the development of policies to ensure a customer-centric approach. Data is the cornerstone of any AI application, but the inappropriate use of data in AI-powered applications, or the use of inadequate data, introduces an important source of non-financial risk to firms using AI techniques. Such risk relates to the veracity of the data used; challenges around data privacy and confidentiality; fairness considerations; and potential concentration and broader competition issues. In the future, the use of DLTs in AI mechanisms is expected to allow users of such systems to monetise the data consumed by AI-driven systems, for instance through Internet of Things applications. Traders can also execute large orders with minimal market impact by dynamically optimising the size, timing, and duration of child orders based on market conditions; a simple slicing sketch follows.
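As a minimal illustration of that last point, the sketch below splits a large parent order into child orders whose size adapts to recently observed market volume. The slice_order function, the participation rate, and the recent_volumes feed are hypothetical; real execution algorithms are considerably more sophisticated.

def slice_order(total_qty: int, recent_volumes: list[int], participation: float = 0.1) -> list[int]:
    """Size each child order as a fixed fraction of recently observed market volume."""
    child_orders = []
    remaining = total_qty
    for vol in recent_volumes:
        if remaining <= 0:
            break
        qty = min(remaining, max(1, int(vol * participation)))
        child_orders.append(qty)
        remaining -= qty
    if remaining > 0:
        child_orders.append(remaining)   # whatever is left goes in a final slice
    return child_orders

# A 10,000-share parent order sliced against a hypothetical volume feed.
print(slice_order(10_000, [12_000, 8_000, 20_000, 15_000, 30_000]))
# -> [1200, 800, 2000, 1500, 3000, 1500]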

Other Risks of Algorithmic High-Frequency Trading

With AI, traders can avoid being swayed by celebrity endorsements or unfounded opinions. AI systems like IBM’s Watson can help traders analyze news, social media sentiment, and other data sources to make unbiased trading decisions. AI’s ability to analyze vast amounts of data at lightning speed is a game-changer for traders. Market data, trends, and news can be processed faster than any human could manage, allowing traders to make informed decisions and identify profitable trades. Because AI trading systems are logic-driven computers, they must be given absolute rules and requests. Unlike humans, computers cannot make assumptions or interpretations and must be told precisely what to do. Back-testing is applying trading rules to historical market data to see how they would have performed; a minimal example is sketched below.
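The sketch below back-tests a hypothetical moving-average crossover rule on a synthetic price series. The rule, the window lengths, and the synthetic data are assumptions for illustration; a real back-test would use actual market data and account for transaction costs, slippage, and look-ahead bias.

import numpy as np
import pandas as pd

# Synthetic daily price series standing in for historical market data.
rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
# Rule: hold the asset while the fast average is above the slow one,
# entering on the bar *after* the signal to avoid look-ahead bias.
position = (fast > slow).astype(int).shift(1).fillna(0)

daily_returns = prices.pct_change().fillna(0)
strategy_returns = position * daily_returns
print("Buy-and-hold return: %.1f%%" % (100 * ((1 + daily_returns).prod() - 1)))
print("Strategy return:     %.1f%%" % (100 * ((1 + strategy_returns).prod() - 1)))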


Finally, if you’re looking for more personalized support, you might consider hiring an expert to help guide you through the process and to offer advice and guidance along the way. Whatever your approach, there are plenty of options available to help you get started with using AI in finance, and to help you unlock the full potential of this exciting and rapidly evolving field. Bayesian inference can be used to update the probability of a hypothesis as new data becomes available.
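As a small, self-contained example of that last point, the sketch below updates the probability of a toy hypothesis ("this signal calls the next move correctly 60% of the time") as each new observation arrives. The prior, the likelihoods, and the observation stream are all illustrative assumptions.

def bayes_update(prior: float, p_data_given_h: float, p_data_given_not_h: float) -> float:
    """Return P(H | data) via Bayes' rule."""
    evidence = prior * p_data_given_h + (1 - prior) * p_data_given_not_h
    return prior * p_data_given_h / evidence

prob_h = 0.5                                     # start undecided about the hypothesis
for correct in [True, True, False, True, True]:  # stream of new observations
    if correct:
        prob_h = bayes_update(prob_h, 0.6, 0.5)  # P(correct | H) vs P(correct | not H)
    else:
        prob_h = bayes_update(prob_h, 0.4, 0.5)  # P(wrong | H) vs P(wrong | not H)
    print(f"Updated P(H) = {prob_h:.3f}")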

Should the law prohibit ‘Manipulation’ in financial markets?

AI agency further exacerbates these already well-known law enforcement issues of market conduct rules. If not adequately developed, tested, and supervised by human experts, AI trading can lead to a number of unintended consequences, including optimised forms of market manipulation, which can ultimately undermine the stability and integrity of capital markets. Artificial intelligence has the potential to transform the financial industry: it can be used to predict future trends, improve accuracy, reduce costs, and enhance customer service.


According to (, p. 944), it is “ethical to aim for diversity” because the “diversity of narratives can be seen as an enormous source of resilience in complex systems”. At the current fledgling stage of AI ethics in business and finance, it is crucial that ethical intermediaries start consolidating the opinions of different professional groups before formulating ethical codes and principles. In the same vein, while discussing the ethics of high-frequency trading, Davis et al. argue not to focus solely on traders but to include “quants”, software engineers and computer specialists. More generally, Stix suggests that multi-stakeholder consultations and cross-sectional feedback are central for the development of actionable AI ethical principles. Although the use cases noted below may offer several potential benefits, they also involve potential challenges, costs, and regulatory implications. Each firm should conduct its own due diligence and legal analysis when exploring any AI application to determine its utility, impact on regulatory obligations, and potential risks, and set up appropriate measures to mitigate those risks.

The impact of AI on economic growth and international trade

The influence of the GDPR in shaping this regulatory paradigm is scrutinized, with reference to pertinent concerns that could compromise the effectiveness of this paradigm in the Chinese context. The central argument is that this landmark statutory instrument, broadly resembling the GDPR with certain variations, revamps the existing data protection regime in China. Nevertheless, in its current form, the law fails to address several critical matters.

How the machine ‘Thinks’: understanding opacity in machine learning algorithms

It also highlights the need for a balanced and proactive approach to the regulation of AI in order to ensure that it is used in a safe and effective manner that promotes market integrity and investor protection. A thoughtful approach to explainability, proper governance, model validation, sound data collection processes, and assurance of model robustness and reliability all go a long way toward helping humans trust the output of an AI. Financial institutions are increasingly adopting AI “as technological barriers have fallen and its benefits and potential risks have become clearer,” the paper noted.
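One concrete piece of that validation work is evaluating a trading model only on data that comes strictly after its training window. The sketch below does this with scikit-learn's TimeSeriesSplit on synthetic data; the features, the classifier, and the accuracy metric are illustrative assumptions, not a prescribed validation standard.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import TimeSeriesSplit

# Synthetic, time-ordered data: five stand-in features and a next-bar direction label.
rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1_000) > 0).astype(int)

# Walk-forward evaluation: each fold trains on the past and tests on the future only.
for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[test_idx], model.predict(X[test_idx]))
    print(f"Fold {fold}: trained on {len(train_idx)} rows, out-of-sample accuracy {acc:.2f}")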
