In History mode, an indicator is fed an entire bar of price data at a time, and calculates based on that data just as one might expect. However, in Testing mode, indicators are handled differently.
Specifically, in Testing mode the price data is fed to the Strategy on a tick-by-tick basis. If the Strategy calls an indicator from its code (using the CreateIndicator() and GetIndicatorValue() functions), the indicator is only passed the price data the Strategy has at the time of the most recent tick. It still receives Open, High, Low and Close values, but they describe an incomplete bar, exactly as the Strategy sees it.
The Indicator also receives an Index value telling it which bar it's calculating, but it gets that Index from the Strategy, and it differs from the Index value the Indicator would normally receive if it were running in History mode. In a Strategy, the Index of the current bar is always 0, and the Index of the previous bar is always 1.
Here's the weird thing: when an Indicator is called by a Strategy, it receives one extra tick. That is, if the Strategy is processing a bar made up of four ticks, the indicator is actually run five times -- it receives the first tick twice.
What makes it even weirder is that the first time it receives the first tick, it comes with the wrong Index value. As in the Strategy, the current Index for the Indicator should always be 0, but for some reason the first (false) tick the Indicator receives arrives with an Index value of 1.
Here's what it looks like:
Code:
Tick #1 Index = 1 <-- This is the extra tick
Tick #1 Index = 0
Tick #2 Index = 0
Tick #3 Index = 0
Tick #4 Index = 0
Now, in most cases this wouldn't cause any problems. If you're using M1 data, and/or your Strategy checks the Indicator values for previous-bar conditions such as MA crossovers, this little anomaly shouldn't cause you any grief.

I ran into it because I programmed the Indicator to perform a certain calculation only on the first tick it receives and report that value to the Strategy as Buffer[Index], expecting to see the value at the current Index, which is 0. Because the first tick the Indicator sees is false (a duplicate of tick #1 that is mysteriously labelled Index 1), my value was always being written to the wrong place in the buffer, overwriting the value that was already there.
My solution was to program the indicator to ignore the first tick (the false one) and report the value on the second tick (the first true tick). Weird, but it worked.
So, maybe Mike or someone at FT can explain this weird behaviour (or fix the program to stop it), but for now at least it's publicly known, and it can be successfully programmed around.
Cheers-