Time Weighted Average
get(connection, parameters_dict)
A function that receives a dataframe of raw tag data, performs a time weighted average, and returns the results.
This function requires the input of a pandas dataframe acquired via the rtdip.functions.raw() method and a user-supplied dictionary of parameters (see the Attributes table below).
Pi data points will have step either enabled (True) or disabled (False). You can specify whether the step value should be fetched from "Pi", or set the step parameter to True/False in the dictionary below.
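As general background (this describes the concept, not necessarily the exact SQL this function generates), a time weighted average weights each value by the length of time for which it applies within the requested interval:

$$\text{TWA} = \frac{\sum_i v_i \, \Delta t_i}{\sum_i \Delta t_i}$$

where \(v_i\) is the value over sub-interval \(\Delta t_i\). With step enabled, values are typically held constant until the next data point; with step disabled, values are typically interpolated between points.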
Parameters:

Name | Type | Description | Default |
---|---|---|---|
connection | object | Connection chosen by the user (Databricks SQL Connect, PYODBC SQL Connect, TURBODBC SQL Connect) | required |
parameters_dict | dict | A dictionary of parameters (see Attributes table below) | required |
Attributes:

Name | Type | Description |
---|---|---|
business_unit | str | Business unit |
region | str | Region |
asset | str | Asset |
data_security_level | str | Level of data security |
data_type | str | Type of the data (float, integer, double, string) |
tag_names | list | List of tagname or tagnames |
start_date | str | Start date (either a UTC date in the format YYYY-MM-DD, a UTC datetime in the format YYYY-MM-DDTHH:MM:SS, or a datetime with a timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) |
end_date | str | End date (either a UTC date in the format YYYY-MM-DD, a UTC datetime in the format YYYY-MM-DDTHH:MM:SS, or a datetime with a timezone offset in the format YYYY-MM-DDTHH:MM:SS+zz:zz) |
window_size_mins | int | (deprecated) Window size in minutes. Please use time_interval_rate and time_interval_unit below instead. |
time_interval_rate | str | The time interval rate (numeric input) |
time_interval_unit | str | The time interval unit (second, minute, day, hour) |
window_length | int | Adds a longer window time in days at the start or end of the specified dates to cater for edge cases. |
include_bad_data | bool | Include "Bad" data points with True or remove "Bad" data points with False |
step | str | Whether data points have step "enabled" or "disabled". The options for step are "true", "false" or "metadata"; "metadata" will retrieve the step value from the metadata table. |
display_uom | optional bool | Display the unit of measure with True or False. Does not apply to pivoted tables. Defaults to False |
pivot | bool | Pivot the data on the timestamp column with True or do not pivot the data with False |
limit | optional int | The number of rows to be returned |
offset | optional int | The number of rows to skip before returning rows |
case_insensitivity_tag_search | optional bool | Search for tags using case insensitivity with True or case sensitivity with False |
Returns:

Name | Type | Description |
---|---|---|
DataFrame | DataFrame | A dataframe containing the time weighted averages. |
Warning
Setting case_insensitivity_tag_search to True will result in a longer query time.

Note
display_uom set to True will not work in conjunction with pivot set to True.
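The attributes above correspond to the keys of parameters_dict. As an illustration, the sketch below calls get() directly with such a dictionary; the import path for the module is an assumption inferred from the source file location noted below, and all connection, table and tag values are placeholders.

```python
from rtdip_sdk.authentication.azure import DefaultAuth
from rtdip_sdk.connectors import DatabricksSQLConnection
# Assumed import path, inferred from the source file location noted below.
from rtdip_sdk.queries.time_series import time_weighted_average

auth = DefaultAuth().authenticate()
token = auth.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token
connection = DatabricksSQLConnection("{server_hostname}", "{http_path}", token)

# Keys mirror the Attributes table above; values here are placeholders.
parameters = {
    "business_unit": "{business_unit}",
    "region": "{region}",
    "asset": "{asset}",
    "data_security_level": "{data_security_level}",
    "data_type": "float",
    "tag_names": ["{tag_name_1}", "{tag_name_2}"],
    "start_date": "2023-01-01",
    "end_date": "2023-01-31",
    "time_interval_rate": "15",
    "time_interval_unit": "minute",
    "window_length": 1,
    "include_bad_data": False,
    "step": "metadata",
}

df = time_weighted_average.get(connection, parameters)
print(df)
```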
Source code in src/sdk/python/rtdip_sdk/queries/time_series/time_weighted_average.py
Example

```python
from rtdip_sdk.authentication.azure import DefaultAuth
from rtdip_sdk.connectors import DatabricksSQLConnection
from rtdip_sdk.queries import TimeSeriesQueryBuilder

auth = DefaultAuth().authenticate()
token = auth.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default").token
connection = DatabricksSQLConnection("{server_hostname}", "{http_path}", token)

data = (
    TimeSeriesQueryBuilder()
    .connect(connection)
    .source("{tablename_or_path}")
    .time_weighted_average(
        tagname_filter=["{tag_name_1}", "{tag_name_2}"],
        start_date="2023-01-01",
        end_date="2023-01-31",
        time_interval_rate="15",
        time_interval_unit="minute",
        step="true",
    )
)

print(data)
```
This example uses DefaultAuth() and DatabricksSQLConnection() to authenticate and connect. You can find other ways to authenticate here. The alternative built-in connection methods are PYODBCSQLConnection(), TURBODBCSQLConnection() or SparkConnection().
Note
See the Samples Repository for a full list of examples.

Note
server_hostname and http_path can be found on the SQL Warehouses Page.