Version: 2.6

The Coffee Report

This example shows the following aspects:

  1. Configuring a processing network with data aggregation
  2. Configuring a SQL DataLogger

Introduction to SQL DataLogger

Reporting to an SQL database requires an exact definition of the SQL table. The mapping between the HumanOS data structure (e.g. a generic entity) and the SQL table is configured within the DataLogger configuration.

Step 1: Create the JSON File

The source of the Coffee Report is a JSON file.

  1. Place the JSON file in C:\Temp\CoffeeProject\coffee.json

  2. Add the following content:

    {
      "strength": "7/12",
      "milkamount": "15ml",
      "coffeeamount": "21ml",
      "brand": "Nespresso",
      "name": "Venezia",
      "capsulenetweight": "5,6g"
    }

Step 2: Create a Designer Project

  1. Create an IoT project with HumanOS IoT Designer. Check Tutorial 1 Example for details about OPC-UA Bridges.

  2. Add a new device template for JSON Files called Coffee.
    In this example we use the empty device template.

    Create Template

  3. Create a device structure with data nodes

    Device Info Model

    Create the following data nodes and set the PortMatchingIds and data types correctly:

    The group RawData contains the data nodes accessing the JSON data. The addresses are JSONPath expressions (see the sketch at the end of this step).

    Name             | Address            | PortMatchId          | Data Type
    -----------------|--------------------|----------------------|--------------
    Brand            | $.brand            | BrandType            | System.String
    CapsuleNetWeight | $.capsulenetweight | CapsuleNetWeightType | System.String
    CoffeeAmount     | $.coffeeamount     | CoffeeAmountType     | System.String
    MilkAmount       | $.milkamount       | MilkAmountType       | System.String
    Name             | $.name             | NameType             | System.String
    Strength         | $.strength         | StrengthType         | System.String

    The group ProcessedData contains the data nodes that are computed by the processing network.

    Name             | PortMatchId              | Data Type
    -----------------|--------------------------|--------------
    CapsuleNetWeight | CapsuleNetWeightProcType | System.Double
    CoffeeAmount     | CoffeeAmountProcType     | System.Double
    CoffeePercent    | CoffeePercentProcType    | System.Double
    MilkAmount       | MilkAmountProcType       | System.Double
    MilkPercent      | MilkPercentProcType      | System.Double
    StrengthPercent  | StrengthPercentProcType  | System.Double
    NOTE

    These data nodes do not have any address, since the data is not coming from a data source.

    The data node ProcessedDataRecord contains the aggregated data for the SQL data logger.

    Name                | PortMatchId             | Data Type
    --------------------|-------------------------|--------------
    ProcessedDataRecord | ProcessedDataRecordType | System.Double
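
    The addresses in the RawData group above are JSONPath expressions that the gateway evaluates against coffee.json. If you want to double-check what an address resolves to, a small standalone snippet like the one below can help. It assumes the Newtonsoft.Json (Json.NET) NuGet package, which supports JSONPath via SelectToken; this is only a sanity check and not part of the gateway configuration.

    // Standalone sanity check (not gateway code): resolve the RawData addresses
    // against coffee.json using Json.NET's JSONPath support (SelectToken).
    using System;
    using System.IO;
    using Newtonsoft.Json.Linq;

    class JsonPathCheck
    {
        static void Main()
        {
            JObject coffee = JObject.Parse(File.ReadAllText(@"C:\Temp\CoffeeProject\coffee.json"));

            // the same expressions as in the Address column above
            foreach (string path in new[] { "$.brand", "$.name", "$.milkamount", "$.capsulenetweight" })
            {
                Console.WriteLine($"{path} -> {coffee.SelectToken(path)}");
            }
            // prints:
            // $.brand -> Nespresso
            // $.name -> Venezia
            // $.milkamount -> 15ml
            // $.capsulenetweight -> 5,6g
        }
    }
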
  4. Create a processing network.

    • Create CoffeeProcessor as CSharpScripted Processor
    • Create AggregateProcessor as DataAggregator
    NOTE

    Make sure that the port matching is correctly configured.

    • Check that the data types of the data node and the port match
    • Check that the PortMatchId is correctly set on the ports
    • Check that the protocol is set to EventPassing
    • Add a PortMatchingRule to your device.

  5. Create a C# script to transform the input values from the JSON file into the numerical values of our network.

    • Right click on the plugin HumanOS.UHAL.FileReader and select Add New Item.

    • Create a processing script called CoffeeProcessorScript.cs.

    • Go to the device information file and select the CoffeeProcessor. Set the property C# Script File to CoffeeProcessorScript.cs.

  6. Write the processor script to convert the data

    The following routine can be used to extract the numerical value from strings like 15g; a standalone check of its behavior follows the note below:

    /// gets a double value from a string like "15ml" or "5,6g"
    private double getDoubleValue(string strValue)
    {
        double fRetval = double.NaN;
        // take the leading digits, dots and commas, then normalize the comma to a dot
        Regex TakeValues = new Regex("^[\\d.,]+");
        string strLetterfree = TakeValues.Match(strValue).Value.Replace(",", ".");
        if (!string.IsNullOrEmpty(strLetterfree))
        {
            // parse with the invariant culture so "5.6" is read as 5.6 on any machine locale
            fRetval = double.Parse(strLetterfree, System.Globalization.CultureInfo.InvariantCulture);
        }
        return fRetval;
    }
    NOTE

    For regular expression syntax, check the Online RegExp Tester.

    For available namespaces and classes, check the NameSpace and Class List.
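
    Applied to the raw values in coffee.json, the routine yields the numbers below. The console scaffolding is only there to run the helper outside the gateway; the getDoubleValue body is copied from above (made static so that Main can call it).

    // Quick standalone check of getDoubleValue (scaffolding only, not gateway code)
    using System;
    using System.Text.RegularExpressions;

    class CoffeeValueCheck
    {
        /// gets a double value from a string like "15ml" or "5,6g"
        private static double getDoubleValue(string strValue)
        {
            double fRetval = double.NaN;
            Regex TakeValues = new Regex("^[\\d.,]+");
            string strLetterfree = TakeValues.Match(strValue).Value.Replace(",", ".");
            if (!string.IsNullOrEmpty(strLetterfree))
            {
                fRetval = double.Parse(strLetterfree, System.Globalization.CultureInfo.InvariantCulture);
            }
            return fRetval;
        }

        static void Main()
        {
            Console.WriteLine(getDoubleValue("15ml"));     // 15   (milkamount)
            Console.WriteLine(getDoubleValue("21ml"));     // 21   (coffeeamount)
            Console.WriteLine(getDoubleValue("5,6g"));     // 5.6  (capsulenetweight)
            Console.WriteLine(getDoubleValue("7/12"));     // 7    (strength, only the leading digits match)
            Console.WriteLine(getDoubleValue("unknown"));  // NaN  (no leading number, see Step 4)
        }
    }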

  7. Validate your script file before running the gateway.

Step 3: Create JSON Device

The coffee device template is now ready. In this step we create a new concrete device out of this template:

  1. Create a device within the default-Target.

    Create Device

    NOTE

    Use the address C:\Temp\CoffeeProject\coffee.json to connect to the JSON file.

  2. Build and run HumanOS IoT Gateway

  3. Connect with OPC-UA client to the OPC-UA server. Use the connection address opc.tcp://localhost:4840.

Step 4: Change Content in JSON

Open the JSON file in an editor, e.g. Visual Studio Code.

Try the following changes. What happens on the output?

  1. Change the milkamount to unknown. What happens to the value in OPC-UA?

    Answer

    The processed values change to 'nan' (not a number).


  2. Change the JSON to be an invalid document. What changes in OPC-UA?

    Answer

    The processed and aggregated values change their state to 'BadStateNotActive'.

    The raw data nodes change their state to 'BadDataUnavailable'.


Step 5: Configure SQL Database

The next step is to prepare the SQL database. The goal is to report the aggregated data to an SQL table.

NOTE

The example works for MySQL, MariaDB, MSSQL and AzureSQL.

The example illustrates the procedure for MySQL.

  1. Create a new SQL schema in your database, e.g. humanos.datalogger.

    CREATE SCHEMA `humanos.datalogger`;
  2. Create a table, e.g. datastream1.

    • We are using the field Id of type System.Guid as a primary key in our table.
    • The field TimeStamp should represent the timestamp of the data record.
    • The field State should represent the data state of the record (Good or BadNotActive).
    CREATE TABLE `datastream1` (
      `Id` varchar(45) NOT NULL,
      `TimeStamp` datetime DEFAULT NULL,
      `MilkPercent` double DEFAULT NULL,
      `CoffeePercent` double DEFAULT NULL,
      `StrengthPercent` double DEFAULT NULL,
      `State` int(11) DEFAULT NULL,
      PRIMARY KEY (`Id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci;

Step 6: Configure SQL DataLogger

  1. Right click on Plugins and Add new Plugin.

  2. Select the SQL Data logger plugin

  3. Configure the Data Logger

    • Set the SamplerType to mysql
    • Set the server connection string, e.g. Server=localhost;SslMode=none;Database=humanos.datalogger;Uid=user;Pwd=mypassword;
    • Set the sampler type to OnlyLastValue
    • Set the sampling rate to 0 (zero)

  4. Configure a data set with node filter node.hasProperty<bool>("EnableSql", true)

    NOTE

    The property EnableSql must be set on the data node ProcessedDataRecord.

  5. Create the data set fields. Each field represents a column in your SQL table. The field name must match the column name.

    • To select the correct entity field, set the Data Query. For instance, to select the MilkPercent field, use item.getValue<double>("MilkPercent").
  6. Build and run HumanOS IoT Gateway

  7. Check the table content of your SQL database. One way to query it from code is sketched after the note below.

    NOTE

    Note that values with the state BadNotActive are also reported. To prevent logging such data, select the data logger option Only Valid Data.
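
    If you prefer to check the table from code instead of using a SQL client, a minimal console sketch could look like the one below. It assumes the MySqlConnector NuGet package (any MySQL client library works) and reuses the connection string from the data logger configuration; a plain SELECT in MySQL Workbench gives the same result.

    // Minimal sketch: read back the most recent coffee records logged by the
    // SQL DataLogger (assumes the MySqlConnector NuGet package).
    using System;
    using MySqlConnector;

    class CheckDataLoggerTable
    {
        static void Main()
        {
            string connectionString =
                "Server=localhost;SslMode=none;Database=humanos.datalogger;Uid=user;Pwd=mypassword;";

            using (var connection = new MySqlConnection(connectionString))
            {
                connection.Open();
                using (var command = new MySqlCommand(
                    "SELECT TimeStamp, MilkPercent, CoffeePercent, StrengthPercent, State " +
                    "FROM datastream1 ORDER BY TimeStamp DESC LIMIT 10;", connection))
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(
                            $"{reader["TimeStamp"]}: milk={reader["MilkPercent"]} " +
                            $"coffee={reader["CoffeePercent"]} strength={reader["StrengthPercent"]} " +
                            $"state={reader["State"]}");
                    }
                }
            }
        }
    }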

Questions

  1. The data within one row of your SQL table contains inconsistent values. Often several rows are added at once, whereas only the final row contains the correct data set. What is the cause, and what must be done?

    Answer

    Our CoffeeProcessor script processes the data sequentially: each time a property is written, an event is passed to the data aggregator processor. Each of these events triggers the data aggregator, which produces an entity from the current values on its inputs.

    The solution is to collect all data first, before the data aggregator is triggered. There are two ways:

    1. Use the DelayTime property to delay the inputs on the data aggregator.
    2. Use one of the input ports as a trigger port, and set the TriggerType of all other ports to none. Then make sure that sending data to the trigger port is the last instruction of your script file.

  2. There are "empty" rows in your SQL table, especially when starting the gateway. How can you get rid of these empty rows?

    Answer

    These empty rows result from the fact that all data is processed by the data logger, even if the data state is BadNotActive.

    To prevent logging such data, select the data logger option Only Valid Data.


  3. The data aggregator gets triggered even when some of the data nodes have no valid data. How can you change the behavior so that only valid data is processed by the data aggregator?

    Answer

    By default, any change on any input port triggers the processor.

    To prevent that, select the option Mandatory Port. The processor is triggered only if all mandatory ports have valid data.