Need help importing CSV files with older timestamp records


I need to import a CSV file which gets updated every 15 minutes. The updated CSV file may contain data from the past; e.g., the current CSV file contains data from 20-04-2018 2pm-3pm, while the next CSV file may contain data from 18-04-2018 10am-11am.

I have noticed that if the latest CSV file contains newer data (newer timestamps), the history records are imported. If the data is older, the history records are not imported. So in the case above, the later CSV file will not be imported.

I have played with a few settings such as “ignore duplicate timestamps”, “ignore duplicate records” and “clear history when older”, but none of them helps.

Could anyone give me some help please? Thanks.



Documentation for reference:
• Ignore Duplicate Timestamps – Ignore records whose timestamp is identical to the last record in the history.
• Ignore Duplicate Records – Ignore records whose timestamp, value and status are identical to the last record in the history.
• Clear History When Older – Clear the entire history when the timestamp of a record is older than the last record in the database.

When testing the above parameters with older-timestamp data, have you tried with both ignore options off and the clearHistoryWhenOlder parameter set to true? There may be an ignore condition that prevents older history from being detected. Also, note the effect of that option: it would clear all data that had already been collected.

I’m also curious about the CSV file receiving past data after newer data has already been written to it. This doesn’t look like a normal historically trended system being imported. Any insight into how that file is created may lead to other configuration solutions.


Hi Jonathan
Thanks for your response.
I have tried setting both Ignore options off and Clear History When Older on, but it clears the entire existing history. I have also tried the default N4 FileNetwork driver, which gives the same result: e.g., if I have history from 20-Apr-18 to 23-Apr-18 and want to import one entry on 22-Apr-18, it is impossible. I’m not sure if this is just how the Tridium history database works, or whether something can be done in your CSV driver?

The problem is with this CSV system, which is provided by a third party. If a sensor goes offline and then comes back online, the server sends a CSV file with the current data, and at the end of the day it sends the backlog data in one file. If the sensor is offline for more than one day, the end-of-day CSV can contain several days’ worth of data.


If I understand the situation correctly, there is another system monitoring a number of devices and generating a single CSV file for them, but the contents of this file can’t be guaranteed to always be consistent due to device downtime.

Since the Niagara platform is designed to only allow new records to be appended to a history, this is unlikely to be something the CSV driver can resolve on its own; it will likely take some additional effort in the CSV system and/or the Niagara station.
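To illustrate the append-only constraint described above (this is an illustrative sketch, not Tridium’s actual implementation), a record is accepted only when its timestamp is newer than the last record already in the history, which is why a backfill entry inside the existing range is rejected:

```python
# Illustrative sketch of append-only history behavior (assumption: this
# mirrors the observed Niagara behavior, not its real code).
from datetime import datetime

def try_append(history, timestamp, value):
    """Append (timestamp, value) only if it is newer than the last record."""
    if history and timestamp <= history[-1][0]:
        return False  # older or duplicate timestamp: rejected
    history.append((timestamp, value))
    return True

# Existing history spans 20-Apr to 23-Apr, as in the example above.
history = [(datetime(2018, 4, 20), 1.0), (datetime(2018, 4, 23), 2.0)]
# A backfill entry for 22-Apr falls inside the existing range and is rejected:
try_append(history, datetime(2018, 4, 22), 1.5)  # returns False
```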

My suggestion at the station level (assuming the CSV generation system can’t be modified) would be to set up two imports against the same file: a primary import that polls quickly for near-real-time data, and a secondary import, running perhaps once a day, configured to overwrite older histories. You would then need to write a custom program object that reads both of these histories on a schedule similar to the secondary import and merges them based on whatever criteria you determine gives the better value to keep.
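The merge step above could be sketched roughly as follows. This is a hypothetical illustration only: the file names, the `timestamp`/`value` column names, and the “backlog wins on collisions” rule are all assumptions, and a real Niagara program object would use the history API rather than reading CSV files directly.

```python
# Hypothetical sketch: merge a near-real-time history and a daily backlog
# history into one record set keyed by timestamp.
import csv

def load_history(path):
    # Assumed columns: "timestamp" and "value".
    with open(path, newline="") as f:
        return {row["timestamp"]: row["value"] for row in csv.DictReader(f)}

def merge_histories(realtime_path, backlog_path):
    merged = load_history(realtime_path)
    # Backlog records overwrite real-time records on timestamp collisions,
    # on the assumption that the end-of-day file is more complete.
    merged.update(load_history(backlog_path))
    # Return records in timestamp order (assumes sortable timestamp strings).
    return dict(sorted(merged.items()))
```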


Thanks Jonathan for your great idea. I think I will set up two imports: one for the real-time data, and the other with a few days’ delay, assuming the third party should not be offline for more than a few days.