Devguide 2.1 #18

Open · wants to merge 13 commits into base: main
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
/src/solution/HandsOn_1/opcua_server/opcua_server_simulator/node_modules
/src/solution/HandsOn_2/opcua_server/opcua_server_simulator/node_modules
2 changes: 1 addition & 1 deletion LICENSE.txt
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2022 Siemens Aktiengesellschaft
Copyright (c) 2023 Siemens Aktiengesellschaft

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
42 changes: 28 additions & 14 deletions README.md
@@ -2,14 +2,22 @@

Creating a first Industrial Edge app in a development environment and deploying it to an Industrial Edge Device, based on the App Developer Guide.

- [Prerequisites](#prerequisites)
- [Description](#description)
- [Contribution](#contribution)
- [Licence and Legal Information](#licence-and-legal-information)
- [My first Industrial Edge App - App Developer Guide](#my-first-industrial-edge-app---app-developer-guide)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Description](#description)
- [Documentation](#documentation)
- [Contribution](#contribution)
- [License and Legal Information](#license-and-legal-information)
- [Disclaimer](#disclaimer)

## Prerequisites

Prerequisites are in the App Developer Guide which is available on [SIOS](https://support.industry.siemens.com/cs/ww/en/view/109795865). It contains description of the requirements as well as the step-by-step description how to work with this Developer Guide repository.
Prerequisites are in the App Developer Guide, which is available on [industrial-edge.io](https://industrial-edge.io/developer/index.html). It contains a description of the requirements as well as step-by-step instructions on how to work with this Developer Guide repository.

## Installation

If you would like to run the solution of this app, you need to rename all files called "Dockerfile.example" to "Dockerfile". These Dockerfiles are just examples of how you could implement it.

## Description

@@ -19,16 +19,27 @@ As the example app will cover the most common use case in the Industrial Edge en

The app contains three parts – the connectivity to collect the data from the OPC UA Server by system apps, the IE Databus for distributions of the data and the process, storing and visualization of data in the Edge App.

1. The IE Databus based on MQTT is responsible to distribute data to certain topics, which are filled by system or custom apps by publishing and subscribing to these topics.
2. To receive the data from the OPC UA server, which is providing data from a PLC, the SIMATIC S7 Connector connectivity is used. SIMATIC S7 Connector is a system app, which publish the data to IE Databus. Another system app, the IE Flow Creator, consumes the data from the SIMATIC S7 Connector topics on the IE Databus. The data is preprocessed in the SIMATIC Flow Creator and published again on the IE Databus.
3. The developed data analytics container with Python is consuming the preprocessed data on the topics from the SIMATIC Flow Creator. The Python data analytics is doing calculations and evaluations and provides the results like KPIs back on the IE Databus.The data analytics container needs a MQTT client to handle the publishes and subscriptions of the IE Databus
4. The IE Flow Creator consumes the analyzed data again. The IE Flow Creator stores the (raw) data and analyzed data to the InfluxDB persistently.
5. The InfluxDB is a time series database which is optimized for fast, high-availability storage and retrieval of time series data. It stores the data, which is transmitted by the OPC UA server to the app as well as the analyzed data.
6. Grafana is a visualization and analytics software. It allows to query, visualize, alert and explore metrics. It provides tools to turn time-series database data to graphs and visualization. There is the possibility to build custom dashboards. Grafana is used to visualize the data from the InfluxDB database
1. The **IE Databus**, based on MQTT, is responsible for distributing data to certain topics, which system or custom apps fill by publishing and subscribing to these topics.
2. To receive the data from the OPC UA server, which provides data from a PLC, the **OPC UA Connector** connectivity is used. The OPC UA Connector is a system app that publishes the data to the IE Databus. Another system app, the SIMATIC Flow Creator, consumes the data from the OPC UA Connector topics on the IE Databus. The data is preprocessed in the SIMATIC Flow Creator before being published on the IE Databus again.
3. The developed **data analytics container**, written in Python, consumes the preprocessed data on the topics from the SIMATIC Flow Creator. The Python data analytics performs calculations and evaluations and returns the results, such as KPIs, to the IE Databus. To handle the IE Databus publications and subscriptions, the data analytics container requires an MQTT client.
4. The **SIMATIC Flow Creator** consumes the analyzed data again and persistently stores both the raw and the analyzed data in InfluxDB.
5. **InfluxDB** is a time series database optimized for fast, high-availability storage and retrieval of time series data. It stores both the data transmitted by the OPC UA server to the app and the analyzed data.
6. The data stored in the database can be queried and graphed in dashboards to format it and present it in a meaningful, easy-to-understand way. There are many dashboard options to choose from, including those that come with InfluxDB or other open source projects like Grafana. In this application, the native **InfluxDB Dashboards** are used for basic data visualization.
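The MQTT client of step 3 can be sketched as follows — a minimal, hypothetical example assuming the paho-mqtt package as the client library and illustrative topic names (the real topic layout is configured in the IE Databus and the Flow Creator, and the `_value` key mirrors the InfluxDB-style records used elsewhere in this PR):

```python
import json

def handle_batch(raw: bytes) -> str:
    """Parse one batch of records from the IE Databus and build a reply payload."""
    batch = json.loads(raw)
    values = [rec["_value"] for rec in batch]
    return json.dumps({"count": len(values), "sum": sum(values)})

def run(broker_host: str = "mqtt-broker", port: int = 1883) -> None:
    """Wire handle_batch onto MQTT (not executed here).

    Requires the third-party paho-mqtt package: pip install paho-mqtt
    """
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe("flowcreator/preprocessed")  # hypothetical topic

    def on_message(client, userdata, message):
        client.publish("analytics/kpis", handle_batch(message.payload))  # hypothetical topic

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(broker_host, port)
    client.loop_forever()
```

On a development machine the mosquitto broker publishes MQTT on host port 33083 (per the mosquitto compose file in this PR), so `run("localhost", 33083)` would be the local entry point.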

## Documentation

- Here is a link to the [SIOS](https://support.industry.siemens.com/cs/ww/en/view/109795865) where the App Developer Guide of this application example can be found.
- Here is a link to [industrial-edge.io](https://industrial-edge.io/developer/index.html), where the App Developer Guide for this application example can be found.
- You can find further documentation and help via the following links:
- [Industrial Edge Hub](https://iehub.eu1.edge.siemens.cloud/#/documentation)
- [Industrial Edge Forum](https://www.siemens.com/industrial-edge-forum)
@@ -41,6 +49,12 @@ Additionally everybody is free to propose any changes to this repository using P

If you are interested in contributing via Pull Request, please check the [Contribution License Agreement](Siemens_CLA_1.1.pdf) and forward a signed copy to [industrialedge.industry@siemens.com](mailto:industrialedge.industry@siemens.com?subject=CLA%20Agreement%20Industrial-Edge).

## Licence and Legal Information
## License and Legal Information

Please read the [Legal information](LICENSE.txt).

## Disclaimer

IMPORTANT - PLEASE READ CAREFULLY:

Please read the [Legal information](LICENSE.md).
This documentation describes how you can download and set up containers which consist of or contain third-party software. By following this documentation you agree that using such third-party software is done at your own discretion and risk. No advice or information, whether oral or written, obtained by you from us or from this documentation shall create any warranty for the third-party software. Additionally, by following these descriptions or using the contents of this documentation, you agree that you are responsible for complying with all third party licenses applicable to such third-party software. All product names, logos, and brands are property of their respective owners. All third-party company, product and service names used in this documentation are for identification purposes only. Use of these names, logos, and brands does not imply endorsement.
Binary file modified docs/Picture_5_3_Architecture_IED.png
100755 → 100644

Large diffs are not rendered by default.

@@ -14,6 +14,8 @@ services:
options: # we use best practice here by limiting file size and using a rolling mechanism
max-size: "10m" # File size is 10MB
max-file: "2" # only 2 files created before rolling mechanism applies
volumes: # mount volume from host
- mosquitto:/mosquitto:ro # set to read-only volume
ports: # expose and publish ports
- "33083:1883" # map containers default MQTT port (1883) to host's port 33083
networks: # define networks connected to container 'mqtt-broker'
@@ -22,6 +24,9 @@
###### NETWORK CONFIG ######
networks: # Network interface configuration
proxy-redirect: # Reference 'proxy-redirect' as predefined network
external: # Note: on the Industrial Edge Device this network already exists; on a development machine, please create it manually
name: proxy-redirect
name: proxy-redirect
driver: bridge

###### VOLUMES ######
volumes: # Volumes for containers
mosquitto:
7 changes: 7 additions & 0 deletions src/solution/HandsOn_1/my_edge_app/.env
@@ -0,0 +1,7 @@
BASE_IMAGE=python:3.9.2-alpine3.13
INFLUXDB_VERSION=2.4-alpine
INFLUXDB_DB=edgedb
INFLUXDB_DATA_INDEX_VERSION=tsi1
http_proxy=""
https_proxy=""
no_proxy=localhost,127.0.0.1
@@ -1,7 +1,7 @@
ARG BASE_IMAGE
FROM ${BASE_IMAGE}


RUN adduser -S nonroot
# install all requirements from requirements.txt
COPY requirements.txt /
RUN pip install -r /requirements.txt; rm -f /requirements.txt
@@ -12,5 +12,6 @@ WORKDIR /app
# Copy the current dir into the container at /app
COPY ./program/* /app/

USER nonroot
# Run app.py when the container launches
CMD ["python", "-u", "-m", "app"]
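Since the Dockerfile consumes `BASE_IMAGE` as a build argument (normally injected by docker-compose from the `.env` file), a manual build might look like this — a sketch; the tag matches the image name the compose files in this PR reference:

```shell
# Build the analytics image by hand, passing the base image pin from .env
docker build --build-arg BASE_IMAGE=python:3.9.2-alpine3.13 \
  -t data-analytics:v0.0.1 .
```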
@@ -58,6 +58,7 @@ def on_message(self, client, userdata, message):
# print(message.payload)
# load = message.payload
new_msg = json.loads(message.payload)
self.logger.info('new message: {}'.format(new_msg))
try:
self.topic_callback[message.topic](new_msg)
except Exception as err:
@@ -75,13 +76,15 @@ def subscribe(self, topic, callback):

# Callback function for MQTT topic 'StandardKpis'
def standard_kpis(self, payload):
values = [key['value'] for key in payload]
values = [key['_value'] for key in payload]
name = [key['_measurement'] for key in payload]
self.logger.info('name is: {}'.format(name))
# Calculate standard KPIs
result = {
'mean_result' : statistics.mean(values),
'median_result' : statistics.median(values),
'stddev_result' : statistics.stdev(values),
'name' : payload[0]['name'],
'name' : payload[0]['_measurement'],
}
self.logger.info('mean calculated: {}'.format(statistics.mean(values)))
self.logger.info('median calculated: {}'.format(statistics.median(values)))
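The change above switches the payload keys to InfluxDB's Flux output format (`_value`, `_measurement`). A standalone sketch of the same KPI math — the measurement name and sample values are made up for illustration:

```python
import statistics

def standard_kpis(payload):
    """Compute basic KPIs from a batch of InfluxDB-style records."""
    values = [rec["_value"] for rec in payload]
    return {
        "mean_result": statistics.mean(values),
        "median_result": statistics.median(values),
        "stddev_result": statistics.stdev(values),
        "name": payload[0]["_measurement"],
    }

sample = [  # hypothetical measurement name and values
    {"_measurement": "temperature_drive3", "_value": 10.0},
    {"_measurement": "temperature_drive3", "_value": 12.0},
    {"_measurement": "temperature_drive3", "_value": 14.0},
]
result = standard_kpis(sample)
# -> mean 12.0, median 12.0, stddev 2.0
```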
@@ -94,8 +97,8 @@ def standard_kpis(self, payload):
def power_mean(self, payload):
self.logger.info('calculating power mean...')

current_values = [item['value'] for item in payload['current_drive3_batch']]
voltage_values = [item['value'] for item in payload['voltage_drive3_batch']]
current_values = [item['_value'] for item in payload['current_drive3_batch']]
voltage_values = [item['_value'] for item in payload['voltage_drive3_batch']]
# Calculate mean of power
power_batch_sum = sum([current*voltage for current, voltage in zip(current_values,voltage_values)])
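The power-mean hunk pairs the current and voltage batches element-wise and sums their products. A standalone sketch — the final averaging step is an assumption (the diff only shows the summation) and the sample values are made up:

```python
def power_mean(payload):
    """Average the element-wise product of current and voltage batches."""
    current = [item["_value"] for item in payload["current_drive3_batch"]]
    voltage = [item["_value"] for item in payload["voltage_drive3_batch"]]
    power_batch_sum = sum(c * v for c, v in zip(current, voltage))
    return power_batch_sum / len(current)  # assumed averaging step

sample = {  # hypothetical values
    "current_drive3_batch": [{"_value": 2.0}, {"_value": 4.0}],
    "voltage_drive3_batch": [{"_value": 230.0}, {"_value": 230.0}],
}
# (2*230 + 4*230) / 2 = 690.0
```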

@@ -1,8 +1,7 @@
### Docker Compose File for my Industrial Edge App ###
# This docker-compose file creates a preconfigured
# * Data Analytics container based on Python with MQTT connection
# * InfluxDB Container for Storage of Time Series data
# * Grafana Container for visualization of database content
# * InfluxDB Container for Storage of Time Series data and visualization

version: '2.4' # docker-compose version is set to 2.4

@@ -29,8 +28,6 @@ services:
max-file: "2" # only 2 files created before rolling mechanism applies
networks: # define networks connected to container 'data-analytics'
proxy-redirect: # Name of the network
depends_on: # Dependencie on other container
- grafana # Wait for start of container 'grafana'

##### INFLUXDB ######
influxdb:
@@ -39,47 +36,24 @@
restart: unless-stopped # always restarts (see overview page 12 Industrial Edge Developer Guide)
mem_limit: 1400m
environment: # Environment variables available at container run-time
INFLUXDB_DB: $INFLUXDB_DB # Variable of INFLUXDB_DB will be set at runtime as well
INFLUXDB_DATA_INDEX_VERSION: $INFLUXDB_DATA_INDEX_VERSION
- DOCKER_INFLUXDB_INIT_MODE=setup
- DOCKER_INFLUXDB_INIT_USERNAME=edge
- DOCKER_INFLUXDB_INIT_PASSWORD=edgeadmin
- DOCKER_INFLUXDB_INIT_ORG=siemens
- DOCKER_INFLUXDB_INIT_BUCKET=edgedb
- DOCKER_INFLUXDB_INIT_RETENTION=1w
- DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=testtoken
logging: # allow logging
options: # we use best practice here by limiting file size and using a rolling mechanism
max-size: "10m" # File size is 10MB
max-file: "2" # only 2 files created before rolling mechanism applies
volumes: # mount volume from host
- db-backup:/var/lib/influxdb # mount named volume 'db-backup' to host's path to /var/lib/influxdb
ports: # expose and publish ports
- "33086:8086" # map containers port 8086 to host's port 33086
- "3000:8086" # map containers port 8086 to host's port 3000
networks: # define networks connected to container 'influxdb'
proxy-redirect: # Name of the network



##### GRAFANA #####
grafana:
build: # Configuration applied at build time
context: ./grafana/ # Relative Path to grafana from this docker-compose file containing Dockerfile
args: # Args variables available only at build-time
GRAFANA_VERSION: $GRAFANA_VERSION # Variable which contains version of Grafana
GF_INSTALL_PLUGINS: $GF_INSTALL_PLUGINS # Variable which contains additional plugins to load
http_proxy: $http_proxy # Proxy url's from environment
https_proxy: $https_proxy
image: grafana:v0.0.9 # Name of the built image to be used
container_name: grafana # Name of grafana container
restart: unless-stopped # always restarts (see overview page 12 Industrial Edge Developer Guide)
mem_limit: 350m
environment: # Environment variables available at container run-time
http_proxy: $http_proxy # Proxy url's from environment ###For IEAP "192.xxx.xxx.xxx"
https_proxy: $https_proxy
logging: # allow logging
options: # we use best pactice here as limiting file size and rolling mechanism
max-size: "10m" # File size is 10MB
max-file: "2" # only 2 files created before rolling mechanism applies
ports: # expose of ports and publish
- "3000:3000" # map containers port 3000 to host's port 3000
networks: # define networks connected to container 'grafana'
proxy-redirect: # Name of the network


###### NETWORK CONFIG ######
networks: # Network interface configuration
proxy-redirect: # Reference 'proxy-redirect' as predefined network
59 changes: 59 additions & 0 deletions src/solution/HandsOn_1/my_edge_app/docker-compose_Edge.yml
@@ -0,0 +1,59 @@
### Docker Compose File for my Industrial Edge App ###
# This docker-compose file creates a preconfigured
# * Data Analytics container based on Python with MQTT connection
# * InfluxDB Container for Storage of Time Series data and visualization

version: '2.4' # docker-compose version is set to 2.4

services:

###### DATA-ANALYTICS ######
data-analytics:
image: data-analytics:v0.0.1 # Name of the built image
container_name: data-analytics # Name of the data-analytics container
mem_limit: 350m
restart: unless-stopped # always restarts (see overview page 12 Industrial Edge Developer Guide)
logging: # allow logging
options: # we use best practice here by limiting file size and using a rolling mechanism
max-size: "10m" # File size is 10MB
max-file: "2" # only 2 files created before rolling mechanism applies
driver: json-file
networks: # define networks connected to container 'data-analytics'
proxy-redirect: # Name of the network

##### INFLUXDB ######
influxdb:
image: influxdb:2.4-alpine # Define image to pull from docker hub if not already on your machine available
container_name: influxdb # Name of the influx-db container
restart: unless-stopped # always restarts (see overview page 12 Industrial Edge Developer Guide)
mem_limit: 1400m
environment: # Environment variables available at container run-time
- DOCKER_INFLUXDB_INIT_MODE=setup
- DOCKER_INFLUXDB_INIT_USERNAME=edge
- DOCKER_INFLUXDB_INIT_PASSWORD=edgeadmin
- DOCKER_INFLUXDB_INIT_ORG=siemens
- DOCKER_INFLUXDB_INIT_BUCKET=edgedb
- DOCKER_INFLUXDB_INIT_RETENTION=1w
- DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=testtoken
logging: # allow logging
options: # we use best practice here by limiting file size and using a rolling mechanism
max-size: "10m" # File size is 10MB
max-file: "2" # only 2 files created before rolling mechanism applies
driver: json-file
volumes: # mount volume from host
- db-backup:/var/lib/influxdb # mount named volume 'db-backup' to host's path to /var/lib/influxdb
ports: # expose and publish ports
- "3000:8086" # map containers port 8086 to host's port 3000
networks: # define networks connected to container 'influxdb'
proxy-redirect: # Name of the network

###### NETWORK CONFIG ######
networks: # Network interface configuration
proxy-redirect: # Reference 'proxy-redirect' as predefined network
external: # Note: Already preexisting on Industrial Edge Device
name: proxy-redirect
driver: bridge

###### VOLUMES ######
volumes: # Volumes for containers
db-backup:
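With the `DOCKER_INFLUXDB_INIT_*` values above and the 3000→8086 port mapping, a client on the host could write analytics results back to InfluxDB. A sketch assuming the third-party influxdb-client package; token, org, and bucket are taken from this compose file, while the measurement and field names are illustrative:

```python
def to_line_protocol(measurement: str, fields: dict, ts_ns: int) -> str:
    """Render one record in InfluxDB line protocol (no tags, for brevity)."""
    body = ",".join(f"{key}={value}" for key, value in sorted(fields.items()))
    return f"{measurement} {body} {ts_ns}"

def write_records(lines) -> None:
    """Write records to the InfluxDB container (not executed here).

    Requires the third-party package: pip install influxdb-client
    """
    from influxdb_client import InfluxDBClient
    from influxdb_client.client.write_api import SYNCHRONOUS

    # url: host port 3000 maps to the container's 8086 (see above);
    # token/org/bucket: DOCKER_INFLUXDB_INIT_* values from this file.
    client = InfluxDBClient(url="http://localhost:3000",
                            token="testtoken", org="siemens")
    try:
        client.write_api(write_options=SYNCHRONOUS).write(
            bucket="edgedb", record=lines)
    finally:
        client.close()

line = to_line_protocol("kpis", {"mean_result": 12.0}, 1_700_000_000_000_000_000)
```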