ALL ABOUT THE ALGORITHMS

IN SEARCH OF THAT MARKETING EDGE

BY NEIL TOWNSEND

The market, defined here as the elements that converge to establish the price for any given commodity or service, has always been hard to discern. Its inscrutable nature promotes the dream of building a “black box” technology that removes human fallibility and emotion from the equation.

In the stock and commodity realm, a trader aims to beat the market. The goal is to be right about price changes in both direction (up or down) and magnitude (big or small).

Success, in theory, is measured by correctly identifying and timing the next big upward move. This equates to “beating” the average by a significant margin, trading above the 80th or 90th percentile. In practice, this goal is a mirage. A successful trader hopes to be right slightly more than half the time and to enlarge gains through the prudent allocation of bet size. In other words, to put more value at risk when the model assigns the highest probability to a favourable outcome.
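The article doesn’t name a formula, but the textbook formalization of “more value at risk when the odds are best” is the Kelly criterion, which sizes each bet from the model’s estimated win probability and payoff ratio. A minimal sketch in Python, with hypothetical numbers:

```python
def kelly_fraction(p_win: float, payoff_ratio: float) -> float:
    """Fraction of capital to risk under the Kelly criterion.

    p_win        -- the model's estimated probability the trade wins
    payoff_ratio -- average win size divided by average loss size
    """
    q_lose = 1.0 - p_win
    fraction = p_win - q_lose / payoff_ratio
    return max(fraction, 0.0)  # never bet when the edge is negative

# Hypothetical numbers: a 55% signal with wins and losses of equal size
print(round(kelly_fraction(0.55, 1.0), 2))  # 0.10 -> risk 10% of capital
# A weaker 51% signal earns a much smaller allocation
print(round(kelly_fraction(0.51, 1.0), 2))  # 0.02 -> risk 2% of capital
```

In practice traders often risk only a fraction of the Kelly amount to dampen drawdowns, but the principle stands: allocation scales with the model’s confidence.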

What does this mean in the real world? Traders and hedge funds that “win” trade profitably around 51 per cent of the time. Gains are increased through judicious allocation of bet size and may be further amplified through repetitive, high-frequency trading. The cycle typically ends in margin compression: other market participants identify a competitor’s success and seek to duplicate it, and the original trader must then move to a new trade or adapt the strategy.
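To see why a 51 per cent hit rate is worth having, consider what repetition does to a tiny per-trade edge. A toy simulation, assuming equal-sized wins and losses of one per cent of capital (both numbers are illustrative, not drawn from any real fund):

```python
import random

random.seed(1)

def simulate(p_win: float, n_trades: int, risk: float = 0.01) -> float:
    """Compound a fixed-fraction bet over n_trades; returns final
    capital relative to a starting value of 1.0."""
    capital = 1.0
    for _ in range(n_trades):
        if random.random() < p_win:
            capital *= 1.0 + risk   # win: gain `risk` of capital
        else:
            capital *= 1.0 - risk   # loss: give the same amount back
    return capital

# Expected gain per trade is tiny: 0.51*1% - 0.49*1% = 0.02% ...
print(simulate(0.51, 10_000))  # ... but ~10,000 trades compound it
print(simulate(0.50, 10_000))  # no edge: volatility drag erodes capital
```

The second call is the cautionary case: at exactly 50 per cent there is nothing to compound, and volatility drag alone slowly erodes capital, which is why the margin between 51 and 50 matters so much.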

Traders and hedge funds seek an edge, and information can provide one when they know something others don’t. This could be so-called ground intelligence: sources in a particular geographic area pinpoint a production problem earlier than the general market. Increasingly, though, that information imbalance has been reduced, and the broader market gets most information in real time.

Edge is now increasingly seen as a product of algorithmic models. The acceleration and democratization of machine learning and AI have reduced the cost of black box development, and computing power is cheaper than in the past. Software tools can be built to trade in a dispassionate, profit-maximizing manner. Moreover, the computer can learn as it goes and, in theory, maintain an edge as the rest of the market catches up.
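What “learning as it goes” can look like in miniature: an online model that updates its parameters after every new observation instead of being fit once and frozen. The sketch below is a toy, a single-feature logistic model trained by stochastic gradient descent on synthetic prices, not any fund’s actual system:

```python
import math
import random

random.seed(42)

def online_signal(prices, lr=0.1):
    """Toy online learner: predicts P(next move is up) from the last
    return, updating its weights after each outcome (logistic/SGD step)."""
    w, b = 0.0, 0.0
    correct = 0
    for t in range(1, len(prices) - 1):
        x = prices[t] / prices[t - 1] - 1.0          # last return (feature)
        p_up = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted prob of up-move
        y = 1.0 if prices[t + 1] > prices[t] else 0.0
        correct += (p_up > 0.5) == (y == 1.0)
        grad = p_up - y                              # gradient of log-loss
        w -= lr * grad * x                           # learn as it goes
        b -= lr * grad
    return correct / (len(prices) - 2)

# Synthetic random-walk prices: with no structure in the data, the hit
# rate should hover near 50% -- a model can't learn an edge that isn't there.
prices = [100.0]
for _ in range(5000):
    prices.append(prices[-1] * (1.0 + random.gauss(0, 0.01)))
print(online_signal(prices))
```

Run on a pure random walk, as here, the model’s hit rate stays near 50 per cent, which previews the point that follows: the learning is only as good as the data feeding it.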

In one sense, the whole thing sounds easy: just add water. However, the Beautiful Mind advances in computing power and problem-solving architecture have been hamstrung by one factor. Let’s call it the garbage in, garbage out dilemma, which is particularly acute in the agricultural sphere.

The data-centred issue is twofold. First, there is a general lack of verifiable data sets; accurate historical price series are rare. In Western Canada, for instance, many farmers grow pulses and other special crops, but price discovery for them is difficult, and computing power alone can’t fix the wonky nature of the data sets that do exist. Second, the data being collected has degraded in quality: agricultural reporting agencies the world over struggle to deliver timely data and maintain its veracity.
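The practical first defence against garbage in, garbage out is auditing a series before any model sees it. A minimal sketch that flags the two failure modes described above, reporting gaps and implausible values; the crop, prices, and thresholds are all hypothetical:

```python
from datetime import date

def audit_price_series(series, max_gap_days=7, max_daily_move=0.25):
    """Flag reporting gaps and implausible jumps in a daily price series.

    series -- list of (date, price) tuples, sorted by date
    Returns human-readable issues; an empty list means the series
    passed these two basic checks.
    """
    issues = []
    for (d0, p0), (d1, p1) in zip(series, series[1:]):
        gap = (d1 - d0).days
        if gap > max_gap_days:
            issues.append(f"{gap}-day reporting gap after {d0}")
        if p0 > 0 and abs(p1 / p0 - 1.0) > max_daily_move:
            issues.append(f"suspicious {p1 / p0 - 1.0:+.0%} move on {d1}")
    return issues

# Hypothetical yellow-pea bids with a hole and a fat-fingered print
series = [
    (date(2023, 1, 2), 12.50),
    (date(2023, 1, 3), 12.55),
    (date(2023, 1, 20), 12.40),  # 17-day gap in reporting
    (date(2023, 1, 21), 18.40),  # +48% overnight: likely a bad record
]
print(audit_price_series(series))
```

Checks like these don’t repair a wonky series, but they at least make its wonkiness visible before it trains a model.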

Thus, while technology marches forward and its proponents champion potential applications, the dedication of resources to, and refinement of, agricultural data gathering has not kept pace. That much-sought-after edge can undoubtedly be sharpened by machine learning and AI, and as the cost of the technology falls, access to it should broaden. However, without assurances that verifiable data sets will be maintained and improved, there will always be limits on how effective these tools can be for agricultural markets.

Neil Townsend is chief market analyst with FarmLink Marketing Solutions.
