Interpolation at Time Function
get(connection, parameters_dict)
An RTDIP interpolation at time function that calculates the linear interpolation of values at a specific time, based on the data points immediately before and after it.

This function requires the user to input a dictionary of parameters (see the Attributes table below).
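To illustrate the calculation this query performs, here is a minimal Python sketch of linear interpolation between the sample before and the sample after the requested time. This is illustrative only; the actual computation is executed as SQL by the query, and the function name here is hypothetical:

```python
from datetime import datetime

def interpolate_at_time(before, after, at):
    """Linearly interpolate a value at `at` from the (timestamp, value)
    samples immediately before and after it."""
    (t0, v0), (t1, v1) = before, after
    fraction = (at - t0).total_seconds() / (t1 - t0).total_seconds()
    return v0 + (v1 - v0) * fraction

# Example: value at 09:30 between samples at 09:00 (10.0) and 10:00 (14.0)
print(interpolate_at_time(
    (datetime(2023, 1, 1, 9), 10.0),
    (datetime(2023, 1, 1, 10), 14.0),
    datetime(2023, 1, 1, 9, 30),
))  # -> 12.0
```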
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| connection | object | Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) | required |
| parameters_dict | dict | A dictionary of parameters (see Attributes table below) | required |
Attributes:

| Name | Type | Description |
|---|---|---|
| business_unit | str | Business unit of the data |
| region | str | Region |
| asset | str | Asset |
| data_security_level | str | Level of data security |
| data_type | str | Type of the data (float, integer, double, string) |
| tag_names | list | List of tagname or tagnames |
| timestamps | list | List of timestamp or timestamps in the format YYYY-MM-DDTHH:MM:SS or YYYY-MM-DDTHH:MM:SS+zz:zz, where +zz:zz is the timezone offset (for example, +00:00 is UTC) |
| window_length | int | Window time, in days, added before the start or after the end of the specified dates to cater for edge cases |
| include_bad_data | bool | Include "Bad" data points with True or remove "Bad" data points with False |
| pivot | bool | Pivot the data on the timestamp column with True or do not pivot the data with False |
| display_uom | optional bool | Display the unit of measure with True or False. Does not apply to pivoted tables. Defaults to False |
| limit | optional int | The number of rows to be returned |
| offset | optional int | The number of rows to skip before returning rows |
| case_insensitivity_tag_search | optional bool | Search for tags using case insensitivity with True or case sensitivity with False |
Returns:

| Name | Type | Description |
|---|---|---|
| DataFrame | DataFrame | An interpolated at time DataFrame |
Warning

Setting `case_insensitivity_tag_search` to True will result in a longer query time.

Note

`display_uom` set to True will not work in conjunction with `pivot` set to True.
Source code in src/sdk/python/rtdip_sdk/queries/time_series/interpolation_at_time.py
Example

```python
from rtdip_sdk.authentication.azure import DefaultAuth
from rtdip_sdk.connectors import DatabricksSQLConnection
from rtdip_sdk.queries import TimeSeriesQueryBuilder

# Authenticate with Azure AD and build a Databricks SQL connection
auth = DefaultAuth().authenticate()
token = auth.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token
connection = DatabricksSQLConnection("{server_hostname}", "{http_path}", token)

# Interpolate the tags' values at the specified timestamps
data = (
    TimeSeriesQueryBuilder()
    .connect(connection)
    .source("{tablename_or_path}")
    .interpolation_at_time(
        tagname_filter=["{tag_name_1}", "{tag_name_2}"],
        timestamp_filter=["2023-01-01T09:30:00", "2023-01-02T12:00:00"],
    )
)

print(data)
```
This example uses `DefaultAuth()` and `DatabricksSQLConnection()` to authenticate and connect. You can find other ways to authenticate here. The alternative built-in connection methods are `PYODBCSQLConnection()`, `TURBODBCSQLConnection()` and `SparkConnection()`.
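Alternatively, the documented `get()` function can be called directly with a parameters dictionary. The sketch below assumes an import path matching the source file location given above and reuses the `connection` object from the previous example; all brace-wrapped values are placeholders:

```python
from rtdip_sdk.queries.time_series import interpolation_at_time

# Parameters dictionary built from the Attributes table above;
# brace-wrapped values are placeholders to be filled in
parameters = {
    "business_unit": "{business_unit}",
    "region": "{region}",
    "asset": "{asset}",
    "data_security_level": "{data_security_level}",
    "data_type": "float",
    "tag_names": ["{tag_name_1}", "{tag_name_2}"],
    "timestamps": ["2023-01-01T09:30:00", "2023-01-02T12:00:00"],
    "window_length": 1,
    "include_bad_data": True,
    "pivot": False,
}

# `connection` is the DatabricksSQLConnection created in the example above
data = interpolation_at_time.get(connection, parameters)
print(data)
```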
Note

See the Samples Repository for a full list of examples.

Note

`server_hostname` and `http_path` can be found on the SQL Warehouses Page.