History importing and data continuation


#1

Hi All,

I have a customer who has a few years’ worth of historical metering data in an SQL database. They would like to export all of the data from the SQL database in csv format and store everything in Tridium. The Kodaro driver looks like the best option for doing this. I have downloaded and played around with the demo station but cannot get a sample file of the data to read successfully as described in the online video.

I have a few questions about the driver before purchasing:

  • Is the demo station a fully working demo without restrictions for the 2-hour window, or are there restrictions preventing me from running my sample file? (I’ve tried running it from the station file system and from my local C: drive without success.)

  • Are there any limits on the file size or amount of data that can be run in one instance? Some of my client’s files are between 30 and 90 MB, which is massive for a basic csv file.

  • Can the histories that the driver creates be added to from a numeric point after creation? Basically, if I create a history from a csv file, I would then like to append data from a numeric point to that history going forward, using the same history identifier.

  • Can the driver be moved from one supervisor to another? Once this job is complete the driver will no longer be needed by the client, so we would like to reclaim it and use it elsewhere if possible.

  • Can the history name be formatted from a unique identifier within a column or row of the csv file?
    e.g. StationName/BuildingName/Floor/Unique_identifier_from_csv.


#2

Demo station: There are no restrictions in the 2-hour demo window; all features are unlocked in this period. You may be having issues with file access depending on where the file is located, as there is security around how the Niagara application accesses the file system. Screenshots of the configuration and file location would be helpful in diagnosing this.

File limits: There are no limits built into the driver itself, but you may hit issues based on available memory. If a large file is read and processed in memory, this can cause failures if there is not enough memory allocated to Niagara to handle it.
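If memory does become a concern with the 30-90 MB files, one workaround outside the driver is to split each export into smaller csv files and import them in sequence. A minimal Python sketch of that pre-processing, assuming the first line is a header row (the file name and chunk size are illustrative, not part of the driver):

    import csv

    # Split a large csv export into smaller chunks so that each
    # import stays well within the memory available to the station.
    ROWS_PER_CHUNK = 100_000  # assumed chunk size, tune as needed

    with open("testdata.csv", newline="") as src:
        reader = csv.reader(src)
        header = next(reader)  # keep the header for every chunk
        chunk, index = [], 0
        for row in reader:
            chunk.append(row)
            if len(chunk) >= ROWS_PER_CHUNK:
                with open(f"testdata_part{index}.csv", "w", newline="") as out:
                    writer = csv.writer(out)
                    writer.writerow(header)
                    writer.writerows(chunk)
                chunk, index = [], index + 1
        if chunk:  # write any remaining rows
            with open(f"testdata_part{index}.csv", "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(header)
                writer.writerows(chunk)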

History from point: This can be done, but it means the point has to create all the histories. You can accomplish this by using filters directly against the source component of the import; there is an example of this in the demo station provided. You would then put the history extension on the numeric point. This creates a history record for each entry in the csv (assuming COV), with the point as the source of the history. You can then break the csv link if necessary and replace it with the live data going forward.

Moving the driver: Generating a new license key is possible but generally reserved for failed or replaced hardware. When the license key is generated, it is bound to the host ID and will always be available on that installation, so there is no mechanism to move the license short of the initial installation being retired.

History naming: The history naming can be done in the same manner as a normal history extension using BFormat notation. If using the method described above for merging into a numeric point, then you would create the proper structure and naming pattern there. Otherwise, you will need to make the same adjustments directly in the history extension of the driver.
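For reference, history naming on a standard history extension uses BFormat patterns resolved against the station tree. A couple of illustrative examples (the exact pattern depends on where the point sits in your structure):

    %parent.name%                        → the name of the point the extension sits on (the default)
    %parent.parent.name%_%parent.name%   → the point’s parent folder name plus the point name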


#3

Hi Jonathan,
Thanks for the quick response.
I’ve been playing around with the driver but I’m still not able to get the file to work. I’ve tried the file both from the C: drive and locally from the supervisor file structure, with no result either way. The demo data is great, but ideally I’d like to get my own file working as an example before buying the driver. I’ve attached a screenshot as requested. (New user, so I can only post one at a time.)


Cheers
Rich


#4

(second screenshot attached)


#5

What exactly is the error you’re seeing? Is it not resolving the file, or not processing the file as expected?

If it’s not resolving, it may be due to the file path resolving outside the allowed scope of the file system. As you can see from the demo station, the file path for the local station directory is different from what you’ve currently configured.

(screenshot: demo station file path configuration)
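For reference, the common Niagara file ord forms resolve against different roots (the paths here are illustrative):

    file:^testdata.csv           → relative to the station’s home directory
    file:!defaults/testdata.csv  → relative to the Niagara system (install) home
    file:/C:/data/testdata.csv   → absolute path on the host file system

A path that resolves outside the scope the station is allowed to reach will fail even if the file exists.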


#6

Hi Jonathan,
I’ve been in and out of working on this between other jobs, but I’m still not able to get the file to read when I try to discover the columns. I’m assuming this is a security issue as I’m using foxs rather than fox. Is there a workaround you could suggest I try? I’ve attached a screenshot of the setup I’m using and the file location on my supervisor.

Cheers
Rich


#7

The setup looks fine but I don’t see any error provided. What issue are you experiencing? Not resolving the file? Not processing the file as expected?


#8

Hi Jonathan,
The issue is when I try to discover the columns: nothing happens. I get the success notification in the corner, but none of the data appears in the discover window (as it does in the demo video). If I set the file to ‘fail if not found’ then it produces a failed message, so I’m assuming it’s some sort of permission error on N4?


#9

Remove the foxs:| from the file path; the file ord needs to resolve in the station’s local file space rather than through the fox connection. It should just read:

file:^testdata.csv


#10

Success! That worked.
Something so simple, but not obvious when using foxs.
Thanks for your help. I’ll play with the driver and, if all is well, place the order!


#11

Hi Jonathan,
I’ve got it doing exactly what I want, with one exception. Where I’m importing the history value (column 3), I would like to name the history whatever the value is in column 1. What would be the label I would need to input into the history name, e.g. History Name = %Column1Row1% = Energy Meter?

Cheers
Rich


#12

Hi Jonathan,
Apologies for so many questions. I only really use Tridium for metering purposes and don’t venture too far out from that; putting historical data onto existing points seems more complex than it sounds.

I’m trying to get the csv file to filter through to a numeric point that has an existing ‘numeric interval’ history extension. The idea is that we import all of the historic files into Tridium and then use the same histories moving forwards. Is this even possible with numeric intervals? The history file would be recording based upon its existing timestamp rather than the historical one, so I’m assuming there would be a need to force a timestamp onto the history file somehow.

How would I set the filters to complete something like this?

Cheers
Rich


#13

Hi Jonathan,
I’ve been playing with the driver more recently and the only issue I currently have is the marrying of the historical data to the live data. The live data has a numeric interval history on the point; however, the csv import driver only has a COV history type. Therefore, if I import the historical csv data, let’s say it’s called ‘energy meter 101’, and have a live history with the same name, the historic data will still be separate from the live data (.cfg0 on the end of the history name). I can’t seem to find a way using the filters to forward the data onto the point and have it represented as one single history for the client to view. Is there a way to merge these histories through the driver?

Cheers
Rich


#14

You are running into some limitations of Niagara’s history service, as there has never been a merge concept introduced. The CSV driver is only designed to deal with csv data in and out. Its intention is not to handle merging of existing data within the system, only to get data in and/or out, leaving any further processing to other systems, whether wherever the data has been exported to or something within the station.

Generally, if data is coming in from a csv, it has been trended elsewhere, needs to be processed by Niagara, and is updated routinely. If there is some static data from old historical trends that will only be imported once but then needs continuation from new live data, the csv driver is not going to deal with that scenario; it will only bring the data in. That solution would require some higher-level system or program to deal with these issues.
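Since any merging has to happen outside the driver, one option is to do it before import: export the live history to csv, combine it with the historical csv ordered by timestamp, and import the combined file once under the final history name. A minimal Python sketch of that combining step, assuming both files are simple timestamp,value csvs with a header row (the file names and timestamp format are hypothetical):

    import csv
    from datetime import datetime

    # Merge two timestamp,value csv exports into one file ordered by
    # timestamp; on duplicate timestamps the live data wins.
    FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format

    def load(path):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            next(reader)  # skip the header row
            return {row[0]: row[1] for row in reader}

    rows = load("energy_meter_101_historic.csv")
    rows.update(load("energy_meter_101_live.csv"))  # live overwrites historic

    with open("energy_meter_101_merged.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp", "value"])
        for ts in sorted(rows, key=lambda t: datetime.strptime(t, FMT)):
            writer.writerow([ts, rows[ts]])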

What is the ultimate goal for having these two histories merged? Charting on PX? nAnalytics? Archiving?