Program trading vs algorithmic trading

Many market participants blamed program trading for causing extreme volatility that contributed to significant market crashes in the 1980s and 1990s. As a result, the NYSE introduced rules that prevent program trades from being executed during certain times in order to minimize volatility.

Program trading restrictions are known as trading curbs or circuit breakers. Under NYSE rules, depending on the severity of the price action, all program trading may be halted, or sell portfolios may be restricted to trading only on upticks. There are several reasons for program trading.

These include principal, agency, and basis trading. Portfolio rebalancing is a common example. Assume a hedge fund holds 20 stocks and allocates a set percentage of its capital to each, with Apple Inc. (AAPL) among them. Over the course of time, some stocks may be dropped from the portfolio and others added, and as all 20 stocks move every day, some will rise and some will fall. This changes both the overall portfolio value and the percentage allocation that each one of those stocks represents, pushing positions away from their target weights.

A program trading algorithm can look at the portfolio equity and quickly execute all the trades at once, buying the stocks that are under-allocated and selling the ones that are over-allocated, rebalancing the portfolio in seconds. Doing this manually would be far harder and more time-consuming.
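As a rough sketch of that rebalancing pass (with hypothetical holdings and target weights), the logic reduces to comparing each position's current value against its target allocation and generating offsetting buy and sell orders:

# Minimal rebalancing sketch in Python; holdings and target weights are hypothetical.
positions = {"AAPL": 120_000.0, "MSFT": 80_000.0, "XOM": 50_000.0}   # current market value held
targets   = {"AAPL": 0.40,      "MSFT": 0.40,     "XOM": 0.20}       # desired portfolio weights

total_equity = sum(positions.values())

orders = {}
for symbol, target_weight in targets.items():
    desired_value = target_weight * total_equity
    drift = desired_value - positions.get(symbol, 0.0)
    # Positive drift means the stock is under-allocated (buy); negative means over-allocated (sell).
    if drift != 0.0:
        orders[symbol] = drift

for symbol, notional in orders.items():
    side = "BUY" if notional > 0 else "SELL"
    print(f"{side} {abs(notional):,.2f} of {symbol}")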

An order is processed from the investment decision down to the execution of a trade.

Sponsored access is a step up from direct market access (DMA): these connections tend to be ultra-low-latency, direct links to the market. Sponsored access is predominantly used by clients exploiting high-frequency trading strategies.

Algorithmic trading refers to trade execution strategies that are typically used by fund managers to buy or sell large amounts of assets. They aim to minimise the cost of these transactions under certain risk and timing constraints. Such systems follow preset rules in determining how to execute each order. People often think of these systems as buying low and selling high — making investment decisions — but this is not the case.

Algorithmic trading systems are offered by many brokers and simply execute the orders that they are given. For the research and backtesting side, this leads to a language choice that provides a straightforward environment in which to test code, but which also offers sufficient performance to evaluate strategies over multiple parameter dimensions. Scientific Python distributions, for example, bundle data analysis libraries such as NumPy, SciPy, scikit-learn and pandas in a single interactive console environment.
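For example, a research environment built on NumPy and pandas makes it cheap to sweep a simple strategy across a grid of parameters. The snippet below is only a sketch: it uses synthetic price data and a basic moving-average crossover purely to illustrate multi-parameter evaluation.

import numpy as np
import pandas as pd

# Synthetic daily prices purely for illustration.
rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 1000))))

def crossover_sharpe(prices: pd.Series, fast: int, slow: int) -> float:
    """Annualised Sharpe ratio of a long/flat moving-average crossover."""
    signal = (prices.rolling(fast).mean() > prices.rolling(slow).mean()).astype(int)
    returns = prices.pct_change() * signal.shift(1)   # act on the next bar
    return np.sqrt(252) * returns.mean() / returns.std()

# Sweep the strategy over two parameter dimensions.
results = {
    (fast, slow): crossover_sharpe(prices, fast, slow)
    for fast in (10, 20, 50)
    for slow in (100, 150, 200)
}
print(pd.Series(results).sort_values(ascending=False).head())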

The prime consideration at this stage is that of execution speed; be wary of backtesting environments that cannot provide it. Ultimately the language chosen for the backtester will be determined by specific algorithmic needs as well as the range of libraries available in that language (more on this below).

However, the language used for the backtester and research environments can be completely independent of those used in the portfolio construction, risk management and execution components, as will be seen.

The portfolio construction and risk management components are often overlooked by retail algorithmic traders. This is almost always a mistake. These tools provide the mechanism by which capital will be preserved. They not only attempt to reduce the number of "risky" bets, but also minimise churn of the trades themselves, reducing transaction costs. Sophisticated versions of these components can have a significant effect on the quality and consistency of profitability.

It is straightforward to create a stable of strategies as the portfolio construction mechanism and risk manager can easily be modified to handle multiple systems.

Thus they should be considered essential components at the outset of the design of an algorithmic trading system.

The job of the portfolio construction system is to take a set of desired trades and produce the set of actual trades that minimise churn, maintain exposures to various factors (such as sectors, asset classes, volatility and so on) and optimise the allocation of capital to the various strategies in a portfolio.

Portfolio construction often reduces to a linear algebra problem (such as a matrix factorisation), and hence performance is highly dependent upon the effectiveness of the numerical linear algebra implementation available. MATLAB, for example, possesses extensively optimised matrix operations. A frequently rebalanced portfolio will require a compiled (and well optimised!) matrix library to carry out this step, so that it does not become a bottleneck in the trading system.
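As a hedged illustration of this reduction to linear algebra, the sketch below computes minimum-variance weights (subject only to the weights summing to one) by solving a small linear system with NumPy. The covariance matrix is synthetic and this is just one of many possible formulations.

import numpy as np

# Synthetic covariance matrix for three assets (illustrative only).
cov = np.array([
    [0.040, 0.006, 0.002],
    [0.006, 0.090, 0.010],
    [0.002, 0.010, 0.160],
])

# Minimum-variance weights subject to sum(w) == 1 are proportional to
# the solution of cov * x = 1; solving the system directly is faster and
# more numerically stable than forming the matrix inverse explicitly.
ones = np.ones(cov.shape[0])
raw = np.linalg.solve(cov, ones)
weights = raw / raw.sum()
print(weights, weights.sum())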

Risk management is another extremely important part of an algorithmic trading system. Risk can come in many forms, including increased volatility (although this may be seen as desirable for certain strategies!) and increased correlation between asset classes. Risk management components try to anticipate the effects of excessive volatility and correlation between asset classes and their subsequent effects on trading capital. Often this reduces to a set of statistical computations such as Monte Carlo "stress tests". This is very similar to the computational needs of a derivatives pricing engine and as such will be CPU-bound.
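A minimal sketch of such a stress test is shown below: portfolio returns are drawn from a multivariate normal distribution (a strong simplifying assumption) and the loss tail of the simulated paths is inspected. All of the inputs are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: current weights, expected daily returns and a daily covariance matrix.
weights = np.array([0.5, 0.3, 0.2])
mu      = np.array([0.0004, 0.0003, 0.0002])
cov     = np.array([[1.0e-4, 2.0e-5, 1.0e-5],
                    [2.0e-5, 2.5e-4, 3.0e-5],
                    [1.0e-5, 3.0e-5, 4.0e-4]])

n_paths, horizon = 100_000, 10          # ten-day stress horizon

# Simulate correlated daily asset returns and compound them over the horizon.
daily = rng.multivariate_normal(mu, cov, size=(n_paths, horizon))
portfolio_paths = np.prod(1.0 + daily @ weights, axis=1) - 1.0

# A 99th-percentile loss summary of the simulated distribution.
worst_1pct = np.percentile(portfolio_paths, 1)
print(f"1-in-100 simulated ten-day return: {worst_1pct:.2%}")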

These simulations are highly parallelisable (see below) and, to a certain degree, it is possible to "throw hardware at the problem". The job of the execution system is to receive filtered trading signals from the portfolio construction and risk management components and send them on to a brokerage or other means of market access. The primary considerations when deciding upon a language include the quality of the API, language-wrapper availability for an API, execution frequency and the anticipated slippage.

The "quality" of the API refers to how well documented it is, what sort of performance it provides, and whether it needs standalone software to be accessed or whether a gateway can be established in a headless fashion (i.e. with no GUI).

I once had to install a Desktop Ubuntu edition onto an Amazon cloud server to access Interactive Brokers remotely, purely for this reason! Note that with every additional plugin utilised (especially API wrappers) there is scope for bugs to creep into the system. Always test plugins of this sort and ensure they are actively maintained.

A worthwhile gauge is to see how many new updates to the codebase have been made in recent months. Execution frequency is of the utmost importance in the execution algorithm. Note that hundreds of orders may be sent every minute, and as such performance is critical. Slippage will be incurred through a badly-performing execution system and this will have a dramatic impact on profitability. Dynamically-typed languages, such as Python and Perl, are now generally "fast enough".

Always make sure the components are designed in a modular fashion (see below) so that they can be "swapped out" as the system scales. The components of a trading system, along with its frequency and volume requirements, have been discussed above, but system infrastructure has yet to be covered. Those acting as a retail trader or working in a small fund will likely be "wearing many hats".

It will be necessary to cover the alpha model, risk management and execution parameters, and also the final implementation of the system. Before delving into specific languages, the design of an optimal system architecture will be discussed.

One of the most important decisions that must be made at the outset is how to "separate the concerns" of a trading system. In software development, this essentially means how to break up the different aspects of the trading system into separate modular components. By exposing interfaces at each of the components it is easy to swap out parts of the system for other versions that aid performance, reliability or maintenance, without modifying any external dependency code.
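In Python, for instance, such an interface can be expressed as an abstract base class; any concrete implementation (a simulated broker for backtesting, a live gateway for production) can then be swapped in without touching the calling code. The class and method names below are illustrative rather than a prescribed API.

from abc import ABC, abstractmethod

class ExecutionHandler(ABC):
    """Illustrative interface: the rest of the system depends only on this contract."""

    @abstractmethod
    def execute_order(self, symbol: str, quantity: int, side: str) -> None:
        """Send an order to the market (or a simulation of it)."""

class SimulatedExecutionHandler(ExecutionHandler):
    """Swappable implementation used during backtesting."""

    def execute_order(self, symbol: str, quantity: int, side: str) -> None:
        print(f"[SIM] {side} {quantity} {symbol}")

# Callers only ever see the ExecutionHandler interface, so a live broker
# implementation can replace this one without changes upstream.
handler: ExecutionHandler = SimulatedExecutionHandler()
handler.execute_order("AAPL", 100, "BUY")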

This is the "best practice" for such systems. For strategies at lower frequencies such practices are advised. For ultra-high-frequency trading, however, the rulebook might have to be set aside in order to tweak the system for even more performance; a more tightly coupled system may be desirable. Creating a component map of an algorithmic trading system is worth an article in itself. However, an optimal approach is to make sure there are separate components for the historical and real-time market data inputs, data storage, data access API, backtester, strategy parameters, portfolio construction, risk management and automated execution systems.

For instance, if the data store being used is currently underperforming, even at significant levels of optimisation, it can be swapped out with minimal rewrites to the data ingestion or data access API. As far as the backtester and subsequent components are concerned, there is no difference. Another benefit of separated components is that it allows a variety of programming languages to be used in the overall system.
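One way to achieve this, sketched below with assumed field names, is to pass messages between components in a neutral serialisation format such as JSON, so that the producer and consumer need not share a language or a codebase.

import json

# Hypothetical order message produced by the portfolio construction component.
order = {
    "symbol": "MSFT",
    "side": "SELL",
    "quantity": 250,
    "order_type": "LIMIT",
    "limit_price": 411.50,
}

# Serialise to a language-neutral wire format; a consumer written in C++,
# Java or any other language can parse this without shared code.
wire_message = json.dumps(order).encode("utf-8")

# On the receiving side (e.g. the execution component):
decoded = json.loads(wire_message.decode("utf-8"))
print(decoded["side"], decoded["quantity"], decoded["symbol"])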

There is no need to be restricted to a single language if the communication method of the components is language independent. Performance is a significant consideration for most trading strategies. For higher-frequency strategies it is the most important factor. Performance covers a broad set of concerns, each of which is individually covered by large textbooks, so this article will only scratch the surface of each topic.

Architecture and language choice will now be discussed in terms of their effects on performance. The prevailing wisdom, as stated by Donald Knuth, one of the fathers of computer science, is that "premature optimisation is the root of all evil".

This is almost always the case - except when building a high frequency trading algorithm! For those who are interested in lower frequency strategies, a common approach is to build a system in the simplest way possible and only optimise as bottlenecks begin to appear.

Profiling tools are used to determine where bottlenecks arise. Profiles can be made for each of these performance concerns, in either an MS Windows or Linux environment. There are many operating system and language tools available to do so, as well as third-party utilities.
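In Python, for instance, the standard-library cProfile module is enough for a first pass at locating hot spots; the backtest function below is only a stand-in for real work.

import cProfile
import io
import pstats

def run_backtest():
    # Stand-in for a real backtest loop; deliberately does some busy work.
    total = 0.0
    for i in range(1, 200_000):
        total += (i % 7) / i
    return total

profiler = cProfile.Profile()
profiler.enable()
run_backtest()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())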

Language choice will now be discussed in the context of performance. Common mathematical tasks are to be found in optimised numerical libraries, and it is rarely beneficial to write a new implementation. One exception is if a highly customised hardware architecture is required and an algorithm is making extensive use of proprietary extensions (such as custom caches). However, "reinventing the wheel" often wastes time that could be better spent developing and optimising other parts of the trading infrastructure.

Development time is extremely precious, especially in the context of sole developers. Latency is often an issue for the execution system, as the research tools are usually situated on the same machine. For the execution system, latency can occur at multiple points along the execution path. For higher-frequency operations it is necessary to become intimately familiar with kernel optimisation as well as optimisation of network transmission.

This is a deep area and is significantly beyond the scope of this article, but if a UHFT algorithm is desired then be aware of the depth of knowledge required! Caching is very useful in the toolkit of a quantitative trading developer. Caching refers to the concept of storing frequently accessed data in a manner which allows higher-performance access, at the expense of potential staleness of the data.

A common use case occurs in web development when taking data from a disk-backed relational database and putting it into memory. Any subsequent requests for the data do not have to "hit the database" and so performance gains can be significant.

For trading situations caching can be extremely beneficial. For instance, the current state of a strategy portfolio can be stored in a cache until it is rebalanced, so that the list does not need to be regenerated upon each loop of the trading algorithm. However, caching is not without its own issues. Regeneration of cache data all at once, due to the volatile nature of cache storage, can place significant demand on infrastructure.

Another issue is dog-piling, where multiple generations of a new cache copy are carried out under extremely high load, which leads to cascade failure.
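A minimal sketch of the portfolio-state idea above, assuming the state is expensive to rebuild, is a time-stamped in-memory cache that is only regenerated once it is older than the rebalance interval. The names and the staleness rule are illustrative.

import time

class PortfolioCache:
    """Holds the last computed portfolio state until it is considered stale."""

    def __init__(self, rebuild_fn, max_age_seconds: float):
        self._rebuild_fn = rebuild_fn          # expensive call, e.g. a database query
        self._max_age = max_age_seconds
        self._state = None
        self._built_at = 0.0

    def get(self):
        if self._state is None or (time.time() - self._built_at) > self._max_age:
            self._state = self._rebuild_fn()   # regenerate only when stale
            self._built_at = time.time()
        return self._state

def load_positions_from_database():
    # Stand-in for a slow disk-backed lookup.
    return {"AAPL": 100, "MSFT": 50}

cache = PortfolioCache(load_positions_from_database, max_age_seconds=60.0)
positions = cache.get()   # first call rebuilds; later calls within 60 seconds hit memory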

Dynamic memory allocation is an expensive operation in software execution. Thus it is imperative for higher-performance trading applications to be well aware of how memory is being allocated and deallocated during program flow. Newer languages such as Java, C# and Python all perform automatic garbage collection, which refers to the deallocation of dynamically allocated memory when objects go out of scope.

Garbage collection is extremely useful during development as it reduces errors and aids readability. However, it is often sub-optimal for certain high-frequency trading strategies. Custom garbage collection is often desired for these cases. In Java, for instance, by tuning the garbage collector and heap configuration, it is possible to obtain high performance for HFT strategies. Conversely, in languages without automatic garbage collection, memory must be managed manually. While potentially error prone (potentially leading to dangling pointers), it is extremely useful to have fine-grained control of how objects appear on the heap for certain applications.
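As a narrow, Python-flavoured illustration of the same idea, the cyclic garbage collector can be disabled around a latency-sensitive section so that a collection pause does not land in the middle of the hot loop. This is an assumption-laden sketch rather than a general HFT recipe, and deferring collection has its own risks if garbage accumulates.

import gc

def latency_sensitive_loop(ticks):
    # Disable the cyclic garbage collector so no collection pause lands
    # inside the hot loop; ordinary reference-counted cleanup still happens.
    gc.disable()
    try:
        processed = 0
        for tick in ticks:
            processed += 1        # stand-in for real quote/order handling
        return processed
    finally:
        gc.enable()               # restore normal behaviour afterwards
        gc.collect()              # clean up anything deferred during the loop

print(latency_sensitive_loop(range(1_000_000)))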

When choosing a language make sure to study how the garbage collector works and whether it can be modified to optimise for a particular use case. Many operations in algorithmic trading systems are amenable to parallelisation. This refers to the concept of carrying out multiple programmatic operations at the same time, i.e. in parallel. So-called "embarrassingly parallel" algorithms include steps that can be computed fully independently of other steps. Certain statistical operations, such as Monte Carlo simulations, are a good example of embarrassingly parallel algorithms, as each random draw and subsequent path operation can be computed without knowledge of other paths.
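Because each path needs no knowledge of the others, the work distributes naturally across processes. The sketch below farms a batch of independent simulations out to a multiprocessing pool; the path function itself is only a placeholder random walk.

import multiprocessing as mp
import random

def simulate_path(seed: int) -> float:
    """One independent simulation path (placeholder random-walk P&L)."""
    rng = random.Random(seed)
    pnl = 0.0
    for _ in range(10_000):
        pnl += rng.gauss(0.0, 1.0)
    return pnl

if __name__ == "__main__":
    # Each path is independent, so simply map the seeds across the available cores.
    with mp.Pool() as pool:
        results = pool.map(simulate_path, range(1_000))
    print(sum(results) / len(results))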

Other algorithms are only partially parallelisable.


