Managed Properties provide a way to use the Configuration Admin service to
manage user-configurable service properties. Conceptually it is like linking a class to a set of
dynamic runtime Settings: Configuration Admin provides change events for those settings.
The Managed Service Factory service provides a way to use the Configuration
Admin service to manage multiple copies of a user-configurable
service's properties. Conceptually it is like linking a class to a set of dynamic runtime Settings.
SolarNode supports the OSGi Blueprint Container Specification so plugins can declare their service
dependencies and register their services by way of an XML file deployed with the plugin. If you are
familiar with the Spring Framework's XML configuration, you will find Blueprint very familiar.
Refer to the Blueprint Container Specification for full details of the specification.
Imagine you are working on a plugin and have a com.example.Greeter interface you would like to
register as a service for other plugins to use, and an implementation of that service in
com.example.HelloGreeter that relies on the Placeholder Service.
Blueprint XML documents are added to a plugin's OSGI-INF/blueprint classpath location. A plugin
can provide any number of Blueprint XML documents there, but often a single file is sufficient and
a common convention in SolarNode is to name it module.xml.
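A minimal module.xml can be as small as the Blueprint root element; the namespace URI below is the standard OSGi Blueprint one, and the comment marks where the declarations described in the following sections would go:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- component (<bean>), <service>, and <reference> declarations go here -->
</blueprint>
```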
To make use of services registered by SolarNode plugins, you declare a reference to that service
so you may refer to it elsewhere within the Blueprint XML. For example, imagine you wanted to use
the Placeholder Service in your component. You would obtain a reference to it in your Blueprint XML.
Components in Blueprint are Java classes you would like instantiated when your plugin starts. They are
declared using a <bean> element in Blueprint XML. You can assign each component a unique identifier using
an id attribute, and then you can refer to that component in other components.
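For example, here is a sketch of two components where one refers to the other by its id; the com.example.GreetingJob class is a hypothetical placeholder, not a real SolarNode class:

```xml
<bean id="greeter" class="com.example.HelloGreeter"/>

<!-- another component can refer to "greeter" by its id -->
<bean id="greetingJob" class="com.example.GreetingJob">
    <argument ref="greeter"/>
</bean>
```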
If your component requires any constructor arguments, they can be specified with nested <argument>
elements in Blueprint. The <argument> value can be specified as a reference to another component
using a ref attribute whose value is the id of that component, or as a literal value using a
value attribute.
You can configure mutable class properties on a component with nested <property name=""> elements
in Blueprint. A mutable property is a Java setter method. For example an int property minimum
would be associated with a Java setter method public void setMinimum(int value).
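Continuing that example, a sketch of how such a property might be configured; the com.example.Limiter class is a hypothetical placeholder:

```xml
<bean id="limiter" class="com.example.Limiter">
    <!-- Blueprint will call setMinimum(10) on the component -->
    <property name="minimum" value="10"/>
</bean>
```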
Blueprint can invoke a method on your component when it has finished instantiating and configuring
the object (when the plugin starts), and another when it destroys the instance (when the plugin is
stopped). You simply provide the name of the method you would like Blueprint to call in the
init-method and destroy-method attributes of your component's <bean> element.
You can make any component available to other plugins by registering the component with a
<service> element that declares what interface(s) your component provides. Once registered, other
plugins can make use of your component, for example by declaring a <reference> to it in their own
Blueprint XML.
You can advertise any number of service interfaces that your component supports, by nesting an <interfaces>
element within the <service> element, in place of the interface attribute. For example:
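A sketch of what that might look like, assuming the Greeter example used earlier; the second interface name is a hypothetical placeholder:

```xml
<service ref="greeter">
    <interfaces>
        <value>com.example.Greeter</value>
        <value>com.example.Welcomer</value>
    </interfaces>
</service>
```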
For a registered service to be of any use to another plugin, the package the service is defined in
must be exported by the plugin hosting that package. That is
because the plugin wishing to add a reference to the service will need to import that package.
Plugins in SolarNode can be added to and removed from the platform at any time without restarting
the SolarNode process, because of the Life Cycle process OSGi manages. The life
cycle of a plugin consists of a set of states, and OSGi will transition a plugin's state over the
course of its deployment.
A plugin can opt in to receiving callbacks for the start/stop state transitions by providing an
org.osgi.framework.BundleActivator implementation and declaring that class in the
Bundle-Activator manifest attribute. This can be useful when a plugin needs to perform setup or
cleanup work as it starts or stops.
As SolarNode plugins are OSGi bundles, which are Java JAR files, every plugin automatically includes
a META-INF/MANIFEST.MF file as defined in the Java JAR File Specification. The
MANIFEST.MF file is where OSGi metadata is included, turning the JAR into an OSGi bundle (plugin).
Some OSGi version attributes allow version ranges to be declared, such as the Import-Package
attribute. A version range is a comma-delimited lower,upper specifier. Square brackets are used to
represent inclusive values and round brackets represent exclusive values. A value can be
A plugin must declare the Java packages it directly uses in an Import-Package attribute. This
attribute accepts a comma-delimited list of package specifications that take the basic form of:
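The basic form looks like this, where the angle-bracket parts are placeholders:

```
Import-Package: <package>;version="<version or range>"
```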
If you import a package in your plugin, any child packages that may exist are not imported as
well. You must import every individual package you need to use in your plugin.
For example, to use both net.solarnetwork.service and net.solarnetwork.service.support you would
declare both packages in the Import-Package attribute.
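Those two imports might be declared like this; the version ranges shown are illustrative, not authoritative:

```
Import-Package: net.solarnetwork.service;version="[1.0,2.0)",
 net.solarnetwork.service.support;version="[1.0,2.0)"
```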
A plugin can export any package it provides, making the resources within that package available to
other plugins to import and use. Declare exported packages with an Export-Package attribute. This
attribute takes a comma-delimited list of versioned package specifications. Note that version
ranges are not supported here: a specific version must be declared.
A Backup does not itself provide access to any of the resources associated with the backup.
Instead, the getBackupResources() method of BackupService returns them.
The Backup Manager supports exporting and importing specially formed .zip
archives that contain a complete Backup. These archives are a convenient way to transfer
settings from one node to another, and can be used to restore SolarNode on a new device.
The net.solarnetwork.node.backup.BackupResource API defines a unique item
within a Backup. A Backup Resource could be a file, a database table, or anything that
can be serialized to a stream of bytes. Backup Resources are both provided by, and restored with,
Backup Resource Providers so it is up to the Provider implementation to
know how to generate and then restore the Resources it manages.
The net.solarnetwork.node.backup.BackupService API defines the bulk of the
SolarNode backup system. Each implementation is identified by a unique key, typically the
fully-qualified Java class name of the implementation.
SolarNode provides the net.solarnetwork.node.backup.FileSystemBackupService default Backup Service
implementation that saves Backup Archives to the node's own file system.
The net.solarnetwork.node.backup.s3 plugin provides the
net.solarnetwork.node.backup.s3.S3BackupService Backup Service implementation that saves all
Backup data to AWS S3.
A plugin can publish a net.solarnetwork.service.CloseableService and SolarNode will invoke the
closeService() method on it when that service is destroyed. This can be useful in some situations,
to make sure resources are freed when a service is no longer needed.
The DatumDataSourcePollManagedJob class is a Job
Service implementation that can be used to let users schedule the
generation of datum from a Datum Data Source. Typically this is configured
The DatumDataSource API defines the primary way for plugins to generate datum instances
from devices or services integrated with SolarNode, through a request-based API. The MultiDatumDataSource API
is closely related, and allows a plugin to generate multiple datum when requested.
SolarNode has a DatumQueue service that acts as a central facility for processing
all NodeDatum captured by all data source plugins deployed in the SolarNode runtime.
The queue can be configured with various filters that can augment, modify, or discard the datum.
Plugins can also register observers on the DatumQueue that are notified of each datum that gets
processed. The addConsumer() and removeConsumer() methods allow you to register/deregister
observers:
Any plugin simply needs to register a ManagedJob service for the Job Scheduler to
automatically schedule and execute the job. The schedule is provided by the getSchedule()
method, which can return a cron expression or a plain number representing a millisecond period.
The net.solarnetwork.node.job.SimpleManagedJob class implements ManagedJob and can be used in
most situations. It delegates the actual work to a net.solarnetwork.node.job.JobService API,
discussed in the next section.
Let's imagine you have a com.example.Job class that you would like to allow users to schedule. Your
class would implement the JobService interface, and then you would provide a localized messages
properties file and configure the service using OSGi Blueprint.
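As a sketch, the Blueprint wiring might look roughly like the following; the exact SimpleManagedJob constructor arguments and property names are assumptions here, so consult the actual SolarNode API:

```xml
<service interface="net.solarnetwork.node.job.ManagedJob">
    <bean class="net.solarnetwork.node.job.SimpleManagedJob">
        <argument>
            <bean class="com.example.Job"/>
        </argument>
        <!-- a cron expression; a plain millisecond period also works -->
        <property name="schedule" value="0 * * * * ?"/>
    </bean>
</service>
```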
The Placeholder Service API provides components a way to resolve variables in
strings, known as placeholders, whose values are managed outside the component itself. For example
a datum data source plugin could use the Placeholder Service to support resolving placeholders in a
configurable source ID.
Placeholder Service implementation that resolves
both dynamic placeholders from the Settings Database (using the setting namespace
placeholder), and static placeholders from a configurable file or directory location.
Call the resolvePlaceholders(s, parameters) method to resolve all placeholders on the String s.
The parameters argument can be used to provide additional placeholder values, or you can just
pass null to rely solely on the placeholders available in the service already.
Here is an imaginary class that is constructed with an optional PlaceholderService, and then when
the go() method is called uses that to resolve placeholders in the string {building}/temp and
return the result:
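Since the original listing is not shown here, the following is a self-contained sketch of such a class; the one-method PlaceholderService interface below is a stand-in for the real net.solarnetwork.service API, and the class names are hypothetical:

```java
import java.util.Map;

// Stand-in for the real SolarNode PlaceholderService, reduced to the one
// method used in this example.
interface PlaceholderService {
    String resolvePlaceholders(String s, Map<String, ?> parameters);
}

// Imaginary component: constructed with an optional PlaceholderService,
// resolves placeholders in "{building}/temp" when go() is called.
class TopicGenerator {
    private final PlaceholderService placeholderService; // may be null

    TopicGenerator(PlaceholderService placeholderService) {
        this.placeholderService = placeholderService;
    }

    String go() {
        String topic = "{building}/temp";
        if (placeholderService == null) {
            return topic; // no service available: return the raw string
        }
        return placeholderService.resolvePlaceholders(topic, null);
    }
}

public class TopicGeneratorExample {
    public static void main(String[] args) {
        // toy resolver that only knows the {building} placeholder
        PlaceholderService service =
                (s, params) -> s.replace("{building}", "warehouse-1");
        System.out.println(new TopicGenerator(service).go());
        System.out.println(new TopicGenerator(null).go());
    }
}
```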
To use the Placeholder Service in your component, add either an Optional Service
or explicit reference to your plugin's Blueprint XML file like this
(depending on what your plugin requires):
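For example, a required reference might be declared like this; the Blueprint specification also supports availability="optional" on a <reference> for services that may come and go:

```xml
<!-- required (mandatory) reference -->
<reference id="placeholderService"
    interface="net.solarnetwork.service.PlaceholderService"/>

<!-- optional reference: the plugin starts even if the service is absent -->
<reference id="optionalPlaceholderService" availability="optional"
    interface="net.solarnetwork.service.PlaceholderService"/>
```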
The SolarNode runtime provides a local SQL database that is used to hold application settings, data
sampled from devices, or anything really. Some data is designed to live only in this local store
(such as settings) while other data eventually gets pushed up into the SolarNet cloud. This document
describes the most common aspects of the local database.
A standard JDBC stack is available and normal SQL queries are used to access the database.
The Hikari JDBC connection pool provides a javax.sql.DataSource for direct JDBC
access. The pool is configured by factory configuration files in the
This thread pool is configured as a fixed-size pool with the number of threads set to the number of
CPU cores detected at runtime, plus one. For example on a Raspberry Pi 4 there are 4 CPU cores so
the thread pool would be configured with 5 threads.
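The sizing rule described above can be expressed in plain Java; this is a generic illustration of the rule, not SolarNode's actual startup code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class TaskPoolSizing {

    // CPU core count plus one, as described above
    static int taskPoolSize() {
        return Runtime.getRuntime().availableProcessors() + 1;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(taskPoolSize());
        // on a 4-core device like a Raspberry Pi 4 this prints 5
        System.out.println(pool.getCorePoolSize());
        pool.shutdown();
    }
}
```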
SolarNode provides a way for plugin components to describe their user-configurable properties,
called settings, to the platform. SolarNode provides a web-based UI that makes it easy for users
to configure those components using a web browser. For example, here is a screen shot of the
SolarNode UI showing a form for the settings of a Database Backup component:
The mechanism for components to describe themselves in this way is called the Settings API.
Classes that wish to participate in this system publish metadata about their configurable properties
through the Settings Provider API, and then SolarNode generates a UI form based on
that metadata. Each form field in the previous example image is a Setting Specifier.
The process is similar to the built-in Settings app on iOS: iOS applications can publish
configurable property definitions and the Settings app displays a UI that allows users to modify
those properties.
Sometimes you might like to expose a simple string setting but internally treat the string as a more
complex type. For example a Map could be configured using a simple delimited string like key1 =
val1, key2 = val2. For situations like this you can publish a proxy setting that manages the
complex type through its simple string form.
Imagine a component that publishes a File setting. A typical implementation of
that component would look like this (this example omits some methods for brevity):
The TextFieldSettingSpecifier defines a simple string-based
configurable property and is the most common setting type. The setting defines a key that maps to
a setter method on its associated component class. In the SolarNode UI a text field is rendered as
an HTML form text input, like this:
The net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier class provides the standard implementation of this API.
The BasicTextFieldSettingSpecifier can also be used for "secure" text fields where the field's
content is obscured from view. In the SolarNode UI a secure text field is rendered as an HTML
password form input like this:
A standard secure text field setting is created by passing a third true argument, like this:
// or without any default value
new BasicTextFieldSettingSpecifier("myProperty", null, true);
The TitleSettingSpecifier defines a simple read-only string-based
configurable property. The setting defines a key that maps to a setter method on its associated
component class. In the SolarNode UI the default value is rendered as plain text, like this:
The net.solarnetwork.settings.support.BasicTitleSettingSpecifier class provides the standard
implementation of this API. A standard title setting is created like this:
new BasicTitleSettingSpecifier("status", "Status is good.", true);
The TitleSettingSpecifier supports HTML markup. In the SolarNode UI the
default value is rendered directly into HTML, like this:
// pass `true` as the 4th argument to enable HTML markup in the status value
new BasicTitleSettingSpecifier("status", "Status is <b>good</b>.", true, true);
The TextAreaSettingSpecifier defines a simple string-based
configurable property for a larger text value, loaded as an external file using the
SettingResourceHandler API. In the SolarNode UI a text area is rendered
as an HTML form text area with an associated button to upload the content, like this:
The net.solarnetwork.settings.support.BasicTextAreaSettingSpecifier class provides the standard implementation of this API.
// or without any default value
new BasicTextAreaSettingSpecifier("myProperty", null);
The BasicTextAreaSettingSpecifier can also be used for "direct" text areas where the field's
content is not uploaded as an external file. In the SolarNode UI a direct text area is rendered as
an HTML form text area, like this:
A standard direct text area setting is created by passing a third true argument, like this:
// or without any default value
new BasicTextAreaSettingSpecifier("myProperty", null, true);
The ToggleSettingSpecifier defines a boolean configurable property. In
the SolarNode UI a toggle setting is rendered as an HTML form button, like this:
The net.solarnetwork.settings.support.BasicToggleSettingSpecifier class provides the standard implementation
of this API. A standard toggle setting is created like this:
The SliderSettingSpecifier defines a number-based configuration property
with minimum and maximum values enforced, and a step limit. In the SolarNode UI a
slider is rendered as an HTML widget, like this:
The net.solarnetwork.settings.support.BasicSliderSettingSpecifier class provides the standard implementation of this API.
// default value 5.0, range between 0-11 in 0.5 increments
new BasicSliderSettingSpecifier("volume", 5.0, 0.0, 11.0, 0.5);
The RadioGroupSettingSpecifier defines a configurable property that
accepts a single value from a fixed set of possible values. In the SolarNode UI a radio group is
rendered as a set of HTML radio input form fields, like this:
The net.solarnetwork.settings.support.BasicRadioGroupSettingSpecifier class provides the standard implementation of this API.
The MultiValueSettingSpecifier defines a configurable property that
accepts a single value from a fixed set of possible values. In the SolarNode UI a multi-value
setting is rendered as an HTML select form field, like this:
The net.solarnetwork.settings.support.BasicMultiValueSettingSpecifier class provides the standard implementation of this API.
The FileSettingSpecifier defines a file-based resource property, loaded as
an external file using the SettingResourceHandler API. In the SolarNode UI a
file setting is rendered as an HTML file input, like this:
The net.solarnetwork.node.settings.support.BasicFileSettingSpecifier class provides the standard implementation of this API.
A Dynamic List setting allows the user to manage a list of homogeneous items, adding or subtracting items as desired.
The items can be literals like strings, or arbitrary objects that define their own settings. In the SolarNode UI a
dynamic list setting is rendered as a pair of HTML buttons to remove and add items, like this:
A Dynamic List is often backed by a Java Collection or array in the associated component. In addition
SettingUtils.dynamicListSettingSpecifier() method simplifies the creation of a
GroupSettingSpecifier that represents a dynamic list (the examples in the
following sections demonstrate this).
A complex Dynamic List is a dynamic list of arbitrary object values. The main difference in terms
of the necessary settings structure required, compared to a Simple Dynamic List, is that a
group-of-groups is used.
This handbook provides guides and reference documentation about SolarNode, the distributed computing part of SolarNetwork.
SolarNode is the Swiss Army knife for IoT monitoring and control. It is deployed on inexpensive computers in homes, buildings, vehicles, and even EV chargers, connected to any number of sensors, meters, building automation systems, and more. There are several SolarNode icons in the image below. Can you spot them all?
You can enable Java remote debugging for SolarNode on a node device for SolarNode plugin development or troubleshooting by modifying the SolarNode service environment. Once enabled, you can use SSH port forwarding to enable Java remote debugging in your Java IDE of choice.
To enable Java remote debugging, copy the /etc/solarnode/env.conf.example file to /etc/solarnode/env.conf. The example already includes this support, using port 9142 for the debugging port. Then restart the solarnode service:
Creating a custom SolarNode environment with debugging support
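The listing itself is not shown above; based on the paths given in the text, the commands presumably resemble the following (the use of sudo and systemctl is an assumption about the node's OS setup):

```shell
# copy the example environment file into place
sudo cp /etc/solarnode/env.conf.example /etc/solarnode/env.conf

# restart the SolarNode service so it picks up the new environment
sudo systemctl restart solarnode
```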
Then you can use ssh from your development machine to forward a local port to the node's 9142 port, and then have your favorite IDE establish a remote debugging connection on your local port.
For example, on a Linux or macOS machine you could forward port 8000 to a node's port 9142 like this:
Creating a port-forwarding SSH connection from a development machine to SolarNode
$ ssh -L8000:localhost:9142 solar@solarnode
Once that ssh connection is established, your IDE can be used to connect to localhost:8000 for a remote Java debugging session.
The SolarNode platform has been designed to be highly modular and dynamic, by using a plugin-based architecture. The plugin system SolarNode uses is based on the OSGi specification, where plugins are implemented as OSGi bundles. SolarNode can be thought of as a collection of OSGi bundles that, when combined and deployed together in an OSGi framework like Eclipse Equinox, form the complete SolarNode platform.
To summarize: everything in SolarNode is a plugin!
OSGi bundles and Eclipse plug-ins
Each OSGi bundle in SolarNode comes configured as an Eclipse IDE (or simply Eclipse) plug-in project. Eclipse refers to OSGi bundles as "plug-ins" and its OSGi development tools are collectively known as the Plug-in Development Environment, or PDE for short. We use the terms bundle, plug-in, and plugin somewhat interchangeably in the SolarNode project. Although Eclipse is not actually required for SolarNode development, it is very convenient.
Practically speaking a plugin, which is an OSGi bundle, is simply a Java JAR file that includes the Java code implementing your plugin and some OSGi metadata in its Manifest. For example, here are the contents of the net.solarnetwork.common.jdt plugin JAR:
Central to the plugin architecture SolarNode uses is the concept of a service. In SolarNode a service is defined by a Java interface. A plugin can advertise a service to the SolarNode runtime. Plugins can lookup a service in the SolarNode runtime and then invoke the methods defined on it.
The advertising/lookup framework SolarNode uses is provided by OSGi. OSGi provides several ways to manage services. In SolarNode the most common is to use Blueprint XML documents to both publish services (advertise) and acquire references to services (lookup).
The Gemini Blueprint implementation provides some useful extensions that SolarNode makes frequent use of. To use the extensions you need to declare the Gemini Blueprint Compendium namespace in your Blueprint XML file, like this:
This example declares the Gemini Blueprint Compendium XML namespace prefix osgix and a related Spring Beans namespace prefix beans. You will see those used throughout SolarNode.
Managed Properties provide a way to use the Configuration Admin service to manage user-configurable service properties. Conceptually it is like linking a class to a set of dynamic runtime Settings: Configuration Admin provides change event and persistence APIs for the settings, and the Managed Properties applies those settings to the linked service.
Imagine you have a service class MyService with a configurable property level. We can make that property a managed, persistable setting by adding a <osgix:managed-properties> element to our Blueprint XML, like this:
MyService class · MyService localization · Blueprint XML
```java
package com.example;

import java.util.Collections;
import java.util.List;
import java.util.Map;

import net.solarnetwork.node.service.support.BaseIdentifiable;
import net.solarnetwork.settings.SettingSpecifier;
import net.solarnetwork.settings.SettingSpecifierProvider;
import net.solarnetwork.settings.SettingsChangeObserver;
import net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;

/**
 * My super-duper service.
 *
 * @author matt
 * @version 1.0
 */
public class MyService extends BaseIdentifiable
		implements SettingsChangeObserver, SettingSpecifierProvider {

	private int level;

	@Override
	public String getSettingUid() {
		return "com.example.MyService"; // (1)!
	}

	@Override
	public List<SettingSpecifier> getSettingSpecifiers() {
		return Collections.singletonList(
				new BasicTextFieldSettingSpecifier("level", String.valueOf(0)));
	}

	@Override
	public void configurationChanged(Map<String, Object> properties) {
		// the settings have changed; do something
	}

	public int getLevel() {
		return level;
	}

	public void setLevel(int level) {
		this.level = level;
	}

}
```
The setting UID will be the Configuration Admin PID
```properties
title = Super-duper Service
desc = This service does it all.

level.key = Level
level.desc = This one goes to 11.
```
You nest the <osgix:managed-properties> element within the actual service <bean> element you want to apply the managed settings to.
Note how the persistent-id attribute value matches the getSettingUid() value in MyService.java.
The autowire-on-update attribute toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set it to false and provide an update-method if you want to handle changes yourself.
The update-method attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.
When this plugin is deployed in SolarNode, the component will appear on the main Settings page and offer a configurable Level setting, like this:
Managed Service Factory
The Managed Service Factory service provides a way to use the Configuration Admin service to manage multiple copies of a user-configurable service's properties. Conceptually it is like linking a class to a set of dynamic runtime Settings, but you can create as many independent copies as you like. Configuration Admin provides change event and persistence APIs for the settings, and the Managed Service Factory applies those settings to each linked service instance.
Imagine you have a service class ManagedService with a configurable property level. We can make that property a factory of managed, persistable settings by adding a <osgix:managed-service-factory> element to our Blueprint XML, like this:
ManagedService class · ManagedService localization · Blueprint XML
```java
package com.example;

import java.util.Collections;
import java.util.List;
import java.util.Map;

import net.solarnetwork.node.service.support.BaseIdentifiable;
import net.solarnetwork.settings.SettingSpecifier;
import net.solarnetwork.settings.SettingSpecifierProvider;
import net.solarnetwork.settings.SettingsChangeObserver;
import net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;

/**
 * My super-duper managed service.
 *
 * @author matt
 * @version 1.0
 */
public class ManagedService extends BaseIdentifiable
		implements SettingsChangeObserver, SettingSpecifierProvider {

	private int level;

	@Override
	public String getSettingUid() {
		return "com.example.ManagedService"; // (1)!
	}

	@Override
	public List<SettingSpecifier> getSettingSpecifiers() {
		return Collections.singletonList(
				new BasicTextFieldSettingSpecifier("level", String.valueOf(0)));
	}

	@Override
	public void configurationChanged(Map<String, Object> properties) {
		// the settings have changed; do something
	}

	public int getLevel() {
		return level;
	}

	public void setLevel(int level) {
		this.level = level;
	}

}
```
The setting UID will be the Configuration Admin factory PID
```properties
title = Super-duper Managed Service
desc = This managed service does it all.

level.key = Level
level.desc = This one goes to 11.
```
The SettingSpecifierProviderFactory service is what makes the managed service factory appear as a component in the SolarNode Settings UI.
The factoryUid defines the Configuration Admin factory PID and the Settings UID.
You add an <osgix:managed-service-factory> element in your Blueprint XML, with a nested "template" <bean> within it. The template bean will be instantiated for each service instance created by the Managed Service Factory.
Note how the factory-pid attribute value matches the getSettingUid() value in ManagedService.java and the factoryUid declared in #2.
The autowire-on-update attribute toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set it to false and provide an update-method if you want to handle changes yourself.
The update-method attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.
When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page like this:
After clicking on the Manage button next to this component, the Settings UI allows you to create any number of instances of the component, each with their own setting values. Here is a screen shot showing two instances having been created:
SolarNode supports the OSGi Blueprint Container Specification so plugins can declare their service dependencies and register their services by way of an XML file deployed with the plugin. If you are familiar with the Spring Framework's XML configuration, you will find Blueprint very similar. SolarNode uses the Eclipse Gemini implementation of the Blueprint specification, which is directly derived from Spring Framework.
Note
This guide will not document the full Blueprint XML syntax. Rather, it will attempt to showcase the most common parts used in SolarNode. Refer to the Blueprint Container Specification for full details of the specification.
Imagine you are working on a plugin and have a com.example.Greeter interface you would like to register as a service for other plugins to use, and an implementation of that service in com.example.HelloGreeter that relies on the Placeholder Service provided by SolarNode:
Greeter service · HelloGreeter implementation
```java
package com.example;

public interface Greeter {

	/**
	 * Greet something with a given name.
	 *
	 * @param name the name to greet
	 * @return the greeting
	 */
	String greet(String name);

}
```
```java
package com.example;

import net.solarnetwork.node.service.PlaceholderService;

public class HelloGreeter implements Greeter {

	private final PlaceholderService placeholderService;

	public HelloGreeter(PlaceholderService placeholderService) {
		super();
		this.placeholderService = placeholderService;
	}

	@Override
	public String greet(String name) {
		return placeholderService.resolvePlaceholders(
				String.format("Hello %s, from {myName}.", name),
				null);
	}

}
```
Assuming the PlaceholderService will resolve {myName} to Office Node, we would expect the greet() method to run like this:
```java
Greeter greeter = resolveGreeterService();
String result = greeter.greet("Joe");
// result is "Hello Joe, from Office Node."
```
In the plugin we then need to:
Obtain a net.solarnetwork.node.service.PlaceholderService to pass to the HelloGreeter(PlaceholderService) constructor
Register the HelloGreeter component as a com.example.Greeter service in the SolarNode platform
Here is an example Blueprint XML document that does both:
Blueprint XML example
```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
		http://www.osgi.org/xmlns/blueprint/v1.0.0
		https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

	<!-- Declare a reference (lookup) to the PlaceholderService -->
	<reference id="placeholderService"
		interface="net.solarnetwork.node.service.PlaceholderService"/>

	<service interface="com.example.Greeter">
		<bean class="com.example.HelloGreeter">
			<argument ref="placeholderService"/>
		</bean>
	</service>

</blueprint>
```
Blueprint XML Resources
Blueprint XML documents are added to a plugin's OSGI-INF/blueprint classpath location. A plugin can provide any number of Blueprint XML documents there, but often a single file is sufficient and a common convention in SolarNode is to name it module.xml.
To make use of services registered by SolarNode plugins, you declare a reference to that service so you may refer to it elsewhere within the Blueprint XML. For example, imagine you wanted to use the Placeholder Service in your component. You would obtain a reference to that like this:
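For example, a reference to the Placeholder Service could be declared like this (the id value is your choice):

```xml
<reference id="placeholderService"
	interface="net.solarnetwork.node.service.PlaceholderService"/>
```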
The id attribute allows you to refer to this service elsewhere in your Blueprint XML, while interface declares the fully-qualified Java interface of the service you want to use.
Components in Blueprint are Java classes you would like instantiated when your plugin starts. They are declared using a <bean> element in Blueprint XML. You can assign each component a unique identifier using an id attribute, and then you can refer to that component in other components.
Imagine an example component class com.example.MyComponent:
```java
package com.example;

import net.solarnetwork.node.service.PlaceholderService;

public class MyComponent {

	private final PlaceholderService placeholderService;
	private int minimum;

	public MyComponent(PlaceholderService placeholderService) {
		super();
		this.placeholderService = placeholderService;
	}

	public String go() {
		return PlaceholderService.resolvePlaceholders(placeholderService,
				"{building}/temp", null);
	}

	public int getMinimum() {
		return minimum;
	}

	public void setMinimum(int minimum) {
		this.minimum = minimum;
	}

}
```
Here is how that component could be declared in Blueprint:
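A sketch of such a declaration might look like the following (the minimum value of 10 is purely illustrative):

```xml
<bean id="myComponent" class="com.example.MyComponent">
	<!-- constructor argument, referencing a PlaceholderService reference -->
	<argument ref="placeholderService"/>
	<!-- mutable property, applied via setMinimum(int) -->
	<property name="minimum" value="10"/>
</bean>
```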
If your component requires any constructor arguments, they can be specified with nested <argument> elements in Blueprint. The <argument> value can be specified as a reference to another component using a ref attribute whose value is the id of that component, or as a literal value using a value attribute.
You can configure mutable class properties on a component with nested <property name="..."> elements in Blueprint. A mutable property is a Java setter method. For example an int property minimum would be associated with a Java setter method public void setMinimum(int value).
The <property> value can be specified as a reference to another component using a ref attribute whose value is the id of that component, or as a literal value using a value attribute.
Blueprint can invoke a method on your component when it has finished instantiating and configuring the object (when the plugin starts), and another when it destroys the instance (when the plugin is stopped). You simply provide the name of the method you would like Blueprint to call in the init-method and destroy-method attributes of the <bean> element. For example:
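A sketch, assuming MyComponent defines hypothetical startup() and shutdown() methods:

```xml
<bean id="myComponent" class="com.example.MyComponent"
	init-method="startup" destroy-method="shutdown"/>
```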
You can make any component available to other plugins by registering the component with a <service> element that declares what interface(s) your component provides. Once registered, other plugins can make use of your component, for example by declaring a <reference> to your service interface in their Blueprint XML.
Note
You can only register Java interfaces as services, not classes.
For example, imagine a com.example.Startable interface like this:
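The original listing is not shown here; a minimal sketch of what such an interface could look like (the method name is an assumption for this example):

```java
// Hypothetical sketch of the com.example.Startable interface
// (package declaration omitted so the snippet stands alone)
interface Startable {

	/**
	 * Start this component.
	 *
	 * @return true if successfully started
	 */
	boolean start();

}
```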
We can register MyComponent as a Startable service using a <service> element like this in Blueprint:
Direct service component · Indirect service component
```xml
<service interface="com.example.Startable">
	<!-- The service implementation is nested directly within -->
	<bean class="com.example.MyComponent"/>
</service>
```
```xml
<!-- The service implementation is referenced indirectly... -->
<service ref="myComponent" interface="com.example.Startable"/>

<!-- ... to a bean with a matching id attribute -->
<bean id="myComponent" class="com.example.MyComponent"/>
```
Multiple Service Interfaces
You can advertise any number of service interfaces that your component supports, by nesting an <interfaces> element within the <service> element, in place of the interface attribute. For example:
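A sketch, assuming MyComponent also implements a hypothetical com.example.Stoppable interface:

```xml
<service ref="myComponent">
	<interfaces>
		<value>com.example.Startable</value>
		<value>com.example.Stoppable</value>
	</interfaces>
</service>
```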
Export service packages
For a registered service to be of any use to another plugin, the package the service is defined in must be exported by the plugin hosting that package. That is because the plugin wishing to add a reference to the service will need to import the package in order to use it.
For example, the plugin that hosts the com.example.service.MyService service would need a manifest file that includes an Export-Package attribute similar to:
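For example (the version shown is illustrative):

```
Export-Package: com.example.service;version="1.0.0"
```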
Plugins in SolarNode can be added to and removed from the platform at any time without restarting the SolarNode process, because of the Life Cycle process OSGi manages. The life cycle of a plugin consists of a set of states and OSGi will transition a plugin's state over the course of the plugin's life.
The available plugin states are:
| State | Description |
| ----- | ----------- |
| INSTALLED | The plugin has been successfully added to the OSGi framework. |
| RESOLVED | All package dependencies that the bundle needs are available. This state indicates that the plugin is either ready to be started or has stopped. |
| STARTING | The plugin is being started by the OSGi framework, but it has not finished starting yet. |
| ACTIVE | The plugin has been successfully started and is running. |
| STOPPING | The plugin is being stopped by the OSGi framework, but it has not finished stopping yet. |
| UNINSTALLED | The plugin has been removed by the OSGi framework. It cannot change to another state. |
The possible changes in state can be visualized in the following state-change diagram:
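As an illustration only (this enum and its method are invented for this example, not OSGi API), the allowed transitions could be modeled like this:

```java
import java.util.EnumSet;
import java.util.Set;

/** Illustrative model of the OSGi bundle life-cycle states (not OSGi API). */
enum PluginState {
	INSTALLED, RESOLVED, STARTING, ACTIVE, STOPPING, UNINSTALLED;

	/** The states this state may transition to, per the life-cycle diagram. */
	Set<PluginState> nextStates() {
		switch (this) {
			case INSTALLED:   return EnumSet.of(RESOLVED, UNINSTALLED);
			case RESOLVED:    return EnumSet.of(STARTING, INSTALLED, UNINSTALLED);
			case STARTING:    return EnumSet.of(ACTIVE);
			case ACTIVE:      return EnumSet.of(STOPPING);
			case STOPPING:    return EnumSet.of(RESOLVED);
			case UNINSTALLED: return EnumSet.noneOf(PluginState.class);
		}
		throw new IllegalStateException();
	}
}
```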
A plugin can opt in to receiving callbacks for the start/stop state transitions by providing an org.osgi.framework.BundleActivator implementation and declaring that class in the Bundle-Activator manifest attribute. This can be useful when a plugin needs to initialize some resources when the plugin is started, and then release those resources when the plugin is stopped.
BundleActivator API · BundleActivator implementation example · Manifest declaration example
```java
public interface BundleActivator {

	/**
	 * Called when this bundle is started so the Framework can perform the
	 * bundle-specific activities necessary to start this bundle.
	 *
	 * @param context The execution context of the bundle being started.
	 */
	public void start(BundleContext context) throws Exception;

	/**
	 * Called when this bundle is stopped so the Framework can perform the
	 * bundle-specific activities necessary to stop the bundle.
	 *
	 * @param context The execution context of the bundle being stopped.
	 */
	public void stop(BundleContext context) throws Exception;

}
```
As SolarNode plugins are OSGi bundles, which are Java JAR files, every plugin automatically includes a META-INF/MANIFEST.MF file as defined in the Java JAR File Specification. The MANIFEST.MF file is where OSGi metadata is included, turning the JAR into an OSGi bundle (plugin).
In OSGi, plugins are always versioned and Java packages may be versioned. Versions follow Semantic Versioning rules, generally using this syntax:
```
major.minor.patch
```
In the manifest example you can see the plugin version 3.0.0 declared in the Bundle-Version attribute:
```
Bundle-Version: 3.0.0
```
The example also declares (exports) a net.solarnetwork.common.jdt package for other plugins to import (use) as version 2.0.0, in the Export-Package attribute:
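That attribute looks like:

```
Export-Package: net.solarnetwork.common.jdt;version="2.0.0"
```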
The example also uses (imports) a versioned package net.solarnetwork.service using a version range greater than or equal to 1.0 and less than 2.0 and an unversioned package org.eclipse.jdt.core.compiler, in the Import-Package attribute:
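That attribute looks like:

```
Import-Package: net.solarnetwork.service;version="[1.0,2.0)",
 org.eclipse.jdt.core.compiler
```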
Some OSGi version attributes allow version ranges to be declared, such as the Import-Package attribute. A version range is a comma-delimited lower,upper specifier. Square brackets are used to represent inclusive values and round brackets represent exclusive values. A value can be omitted to represent an unbounded value. Here are some examples:
| Range | Logic | Description |
| ----- | ----- | ----------- |
| [1.0,2.0) | 1.0.0 ≤ x < 2.0.0 | Greater than or equal to 1.0.0 and less than 2.0.0 |
| (1,3) | 1.0.0 < x < 3.0.0 | Greater than 1.0.0 and less than 3.0.0 |
| [1.3.2,) | 1.3.2 ≤ x | Greater than or equal to 1.3.2 |
| 1.3.2 | 1.3.2 ≤ x | Greater than or equal to 1.3.2 (shorthand notation) |
Implied unbounded range
An inclusive lower, unbounded upper range can be specified using a shorthand notation of just the lower bound, like 1.3.2.
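To make the range semantics concrete, here is a minimal sketch of range matching (an invented helper for illustration, not the OSGi framework's parser; it ignores version qualifiers):

```java
/** Minimal sketch of OSGi-style version range matching (illustration only). */
final class VersionRange {

	/** Compare two "major.minor.patch" strings numerically. */
	static int compare(String a, String b) {
		String[] x = a.split("\\."), y = b.split("\\.");
		for (int i = 0; i < 3; i++) {
			int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
			int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
			if (xi != yi) {
				return Integer.compare(xi, yi);
			}
		}
		return 0;
	}

	/** Does version v fall in range, e.g. "[1.0,2.0)" or shorthand "1.3.2"? */
	static boolean inRange(String v, String range) {
		if (!range.startsWith("[") && !range.startsWith("(")) {
			// shorthand: inclusive lower bound, unbounded upper
			return compare(v, range) >= 0;
		}
		boolean lowInc = range.startsWith("[");
		boolean highInc = range.endsWith("]");
		String[] parts = range.substring(1, range.length() - 1).split(",");
		String low = parts[0];
		String high = parts.length > 1 ? parts[1] : "";
		int lc = compare(v, low);
		if (lowInc ? lc < 0 : lc <= 0) {
			return false;
		}
		if (high.isEmpty()) {
			return true; // unbounded upper, e.g. "[1.3.2,)"
		}
		int hc = compare(v, high);
		return highInc ? hc <= 0 : hc < 0;
	}

}
```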
Each plugin must provide the following attributes:
| Attribute | Example | Description |
| --------- | ------- | ----------- |
| Bundle-ManifestVersion | 2 | declares the OSGi bundle manifest version; always 2 |
| Bundle-Name | Awesome Data Source | a concise human-readable name for the plugin |
| Bundle-SymbolicName | com.example.awesome | a machine-readable, universally unique identifier for the plugin |
| Bundle-Version | 1.0.0 | the plugin version |
| Bundle-RequiredExecutionEnvironment | JavaSE-1.8 | a required OSGi execution environment |

Recommended attributes
Each plugin is recommended to provide the following attributes:
| Attribute | Example | Description |
| --------- | ------- | ----------- |
| Bundle-Description | An awesome data source that collects awesome data. | a longer human-readable description of the plugin |
| Bundle-Vendor | ACME Corp | the name of the entity or organisation that authored the plugin |

Common attributes
Other common manifest attributes are:
| Attribute | Example | Description |
| --------- | ------- | ----------- |
| Bundle-Activator | com.example.awesome.Activator | a fully-qualified Java class name that implements the org.osgi.framework.BundleActivator interface, to handle plugin lifecycle events; see Activator for more information |
| Export-Package | net.solarnetwork.common.jdt;version="2.0.0" | a package export list |
| Import-Package | net.solarnetwork.service;version="[1.0,2.0)" | a package dependency list |

Package dependencies
A plugin must declare the Java packages it directly uses in an Import-Package attribute. This attribute accepts a comma-delimited list of package specifications that take the basic form of:
```
PACKAGE;version="VERSION"
```
For example here is how the net.solarnetwork.service package, versioned between 1.0 and 2.0, would be declared:
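That declaration looks like:

```
Import-Package: net.solarnetwork.service;version="[1.0,2.0)"
```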
Direct package use means your plugin has code that imports a class from a given package. Classes in an imported package may import other packages indirectly; you do not need to import those packages as well. For example if you have code like this:
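For example (a hypothetical class that directly uses the Identifiable API from that package):

```java
import net.solarnetwork.service.Identifiable;

public class MyComponent {

	// directly uses a class from the net.solarnetwork.service package
	public String describe(Identifiable service) {
		return service.getUid();
	}

}
```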
Then you will need to import the net.solarnetwork.service package.
Note
The SolarNode platform automatically imports core Java packages like java.* so you do not need to declare those.
Also note that in some scenarios a package used by a class in an imported package becomes a direct dependency. For example when you extend a class from an imported package and that class imports other packages. Those other packages may become direct dependencies that you also need to import.
If you import a package in your plugin, any child packages that may exist are not imported as well. You must import every individual package you need to use in your plugin.
For example to use both net.solarnetwork.service and net.solarnetwork.service.support you would have an Import-Package attribute like this:
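For example (the version ranges shown are illustrative):

```
Import-Package: net.solarnetwork.service;version="[1.0,2.0)",
 net.solarnetwork.service.support;version="[1.0,2.0)"
```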
A plugin can export any package it provides, making the resources within that package available to other plugins to import and use. Declare exported packages with an Export-Package attribute. This attribute takes a comma-delimited list of versioned package specifications. Note that version ranges are not supported: you must declare the exact version of the package you are exporting. For example:
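```
Export-Package: com.example.service;version="1.0.0"
```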
Exported packages should not be confused with services. Exported packages give other plugins access to the classes and any other resources within those packages, but do not provide services to the platform. You can use Blueprint to register services. Keep in mind that any service a plugin registers must exist within an exported package to be of any use.
The net.solarnetwork.node.backup.BackupManager API provides SolarNode with a modular backup system composed of Backup Services that provide storage for backup data and Backup Resource Providers that contribute data to be backed up and support restoring backed up data.
The Backup Manager coordinates the creation and restoration of backups, delegating most of its functionality to the active Backup Service. The active Backup Service can be controlled through configuration.
The Backup Manager also supports exporting and importing Backup Archives, which are just .zip archives using a defined folder structure to preserve all backup resources within a single backup.
This design of the SolarNode backup system makes it easy for SolarNode plugins to contribute resources to backups, without needing to know where or how the backup data is ultimately stored.
What goes in a Backup?
In SolarNode a Backup will contain all the critical settings that are unique to that node, such as:
The Backup Manager can be configured under the net.solarnetwork.node.backup.DefaultBackupManager configuration namespace:
| Key | Default | Description |
| --- | ------- | ----------- |
| backupRestoreDelaySeconds | 15 | A number of seconds to delay the attempt of restoring a backup, when a backup has been previously marked for restoration. This delay gives the platform time to boot up and register the backup resource providers and other services required to perform the restore. |
| preferredBackupServiceKey | net.solarnetwork.node.backup.FileSystemBackupService | The key of the preferred (active) Backup Service to use. |

Backup
The net.solarnetwork.node.backup.Backup API defines a unique backup, created by a Backup Service. Backups are uniquely identified with a unique key assigned by the Backup Service that creates them.
A Backup does not itself provide access to any of the resources associated with the backup. Instead, the getBackupResources() method of BackupService returns them.
The Backup Manager supports exporting and importing specially formed .zip archives that contain a complete Backup. These archives are a convenient way to transfer settings from one node to another, and can be used to restore SolarNode on a new device.
The net.solarnetwork.node.backup.BackupResource API defines a unique item within a Backup. A Backup Resource could be a file, a database table, or anything that can be serialized to a stream of bytes. Backup Resources are both provided by, and restored with, Backup Resource Providers so it is up to the Provider implementation to know how to generate and then restore the Resources it manages.
The net.solarnetwork.node.backup.BackupResourceProvider API defines a service that can both generate and restore Backup Resources. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
When a Backup is created, all Backup Resource Provider services registered in SolarNode will be asked to contribute their Backup Resources, using the getBackupResources() method.
When a Backup is restored, Backup Resources will be passed to their associated Provider with the restoreBackupResource(BackupResource) method.
The net.solarnetwork.node.backup.BackupService API defines the bulk of the SolarNode backup system. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
To create a Backup, use the performBackup(Iterable<BackupResource>) method, passing in the collection of Backup Resources to include.
To list the available Backups, use the getAvailableBackups(Backup) method.
To view a single Backup, use the backupForKey(String) method.
To list the resources in a Backup, use the getBackupResources(Backup) method.
SolarNode provides the net.solarnetwork.node.backup.FileSystemBackupService default Backup Service implementation that saves Backup Archives to the node's own file system.
The net.solarnetwork.node.backup.s3 plugin provides the net.solarnetwork.node.backup.s3.S3BackupService Backup Service implementation that saves all Backup data to AWS S3.
A plugin can publish a net.solarnetwork.service.CloseableService and SolarNode will invoke the closeService() method on it when that service is destroyed. This can be useful in some situations, to make sure resources are freed when a service is no longer needed.
Blueprint does provide the destroy-method stop hook that can be used in many situations, however Blueprint does not allow this in all cases. For example a <bean> nested within a <service> element does not allow a destroy-method:
```xml
<service interface="com.example.MyService">
	<!-- destroy-method not allowed here: -->
	<bean class="com.example.MyComponent"/>
</service>
```
If MyComponent also implemented CloseableService then we can achieve the desired stop hook like this:
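One sketch of that registration, publishing the component under both interfaces:

```xml
<service>
	<interfaces>
		<value>com.example.MyService</value>
		<value>net.solarnetwork.service.CloseableService</value>
	</interfaces>
	<bean class="com.example.MyComponent"/>
</service>
```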
Note that the above example CloseableService is not strictly needed, as the same effect could be achieved by un-nesting the <bean> from the <service> element, like this:
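For example (assuming closeService() is the method to invoke on destruction):

```xml
<bean id="myComponent" class="com.example.MyComponent"
	destroy-method="closeService"/>

<service ref="myComponent" interface="com.example.MyService"/>
```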
There are situations where un-nesting is not possible, which is where CloseableService can be helpful.
Datum Data Source Poll Job
The DatumDataSourcePollManagedJob class is a Job Service implementation that can be used to let users schedule the generation of datum from a Datum Data Source. Typically this is configured as a Managed Service Factory so users can configure any number of job instances, each with their own settings.
Here is a typical example of a DatumDataSourcePollManagedJob, in a fictional MyDatumDataSource:
MyDatumDataSource.java · MyDatumDataSource.properties Localization · Blueprint XML
```properties
title = Super-duper Datum Data Source
desc = This managed datum data source does it all.

schedule.key = Schedule
schedule.desc = The schedule to execute the job at. \
	Can be either a number representing a frequency in <b>milliseconds</b> \
	or a <a href="{0}">cron expression</a>, for example <code>0 * * * * *</code>.

sourceId.key = Source ID
sourceId.desc = The source ID to use.

level.key = Level
level.desc = This one goes to 11.
```
The factoryUid is the same value as the getSettingUid() value in MyDatumDataSource.java
Hiding down here is our actual data source!
Adding a service provider configuration is optional, but registers our data source as an OSGi service, in addition to the ManagedJob that the Managed Service Factory registers.
When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page and then the component settings UI will look like this:
Datum Data Source
The DatumDataSource API defines the primary way for plugins to generate datum instances from devices or services integrated with SolarNode, through a request-based API. The MultiDatumDataSource API is closely related, and allows a plugin to generate multiple datum when requested.
DatumDataSourceMultiDatumDataSource
```java
package net.solarnetwork.node.service;

import net.solarnetwork.node.domain.datum.NodeDatum;
import net.solarnetwork.service.Identifiable;

/**
 * API for collecting {@link NodeDatum} objects from some device.
 */
public interface DatumDataSource extends Identifiable, DeviceInfoProvider {

	/**
	 * Get the class supported by this DataSource.
	 *
	 * @return class
	 */
	Class<? extends NodeDatum> getDatumType();

	/**
	 * Read the current value from the data source, returning as an unpersisted
	 * {@link NodeDatum} object.
	 *
	 * @return Datum
	 */
	NodeDatum readCurrentDatum();

}
```
```java
package net.solarnetwork.node.service;

import java.util.Collection;
import net.solarnetwork.node.domain.datum.NodeDatum;
import net.solarnetwork.service.Identifiable;

/**
 * API for collecting multiple {@link NodeDatum} objects from some device.
 */
public interface MultiDatumDataSource extends Identifiable, DeviceInfoProvider {

	/**
	 * Get the class supported by this DataSource.
	 *
	 * @return class
	 */
	Class<? extends NodeDatum> getMultiDatumType();

	/**
	 * Read multiple values from the data source, returning as a collection of
	 * unpersisted {@link NodeDatum} objects.
	 *
	 * @return Datum
	 */
	Collection<NodeDatum> readMultipleDatum();

}
```
The Datum Data Source Poll Job provides a way to let users schedule the polling for datum from a data source.
SolarNode has a DatumQueue service that acts as a central facility for processing all NodeDatum captured by all data source plugins deployed in the SolarNode runtime. The queue can be configured with various filters that can augment, modify, or discard the datum. The queue buffers the datum for a short amount of time and then processes them sequentially in order of time, oldest to newest.
Datum data sources that use the Datum Data Source Poll Job are polled for datum on a recurring schedule and those datum are then posted to and stored in SolarNetwork. Data sources can also offer datum directly to the DatumQueue if they emit datum based on external events. When offering datum directly, the datum can be tagged as transient and they will then still be processed by the queue but will not be posted/stored in SolarNetwork.
```java
/**
 * Offer a new datum to the queue, optionally persisting.
 *
 * @param datum
 *        the datum to offer
 * @param persist
 *        {@literal true} to persist, or {@literal false} to only pass to
 *        consumers
 * @return {@literal true} if the datum was accepted
 */
boolean offer(NodeDatum datum, boolean persist);
```
Plugins can also register observers on the DatumQueue that are notified of each datum that gets processed. The addConsumer() and removeConsumer() methods allow you to register/deregister observers:
```java
/**
 * Register a consumer to receive processed datum.
 *
 * @param consumer
 *        the consumer to register
 */
void addConsumer(Consumer<NodeDatum> consumer);

/**
 * De-register a previously registered consumer.
 *
 * @param consumer
 *        the consumer to remove
 */
void removeConsumer(Consumer<NodeDatum> consumer);
```
Each observer will receive all datum, including transient datum. An example plugin that makes use of this feature is the SolarFlux Upload Service, which posts a copy of each datum to a MQTT server.
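The offer/consumer pattern described above can be sketched in a self-contained way. This is an illustrative stand-in, not the actual `DatumQueue` implementation: a plain `String` replaces `NodeDatum`, and the `SketchDatumQueue` class name is hypothetical.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal sketch of the DatumQueue offer/consumer pattern, with String
// standing in for NodeDatum to keep the example self-contained.
public class SketchDatumQueue {

	private final List<Consumer<String>> consumers = new CopyOnWriteArrayList<>();
	private final List<String> persisted = new CopyOnWriteArrayList<>();

	/** Offer a datum; consumers always see it, but only persist=true is stored. */
	public boolean offer(String datum, boolean persist) {
		if ( persist ) {
			persisted.add(datum);
		}
		for ( Consumer<String> c : consumers ) {
			c.accept(datum); // observers receive transient datum too
		}
		return true;
	}

	public void addConsumer(Consumer<String> consumer) {
		consumers.add(consumer);
	}

	public void removeConsumer(Consumer<String> consumer) {
		consumers.remove(consumer);
	}

	public List<String> persisted() {
		return persisted;
	}
}
```

Note how a transient datum (offered with `persist` set to `false`) still reaches every registered consumer, mirroring how the SolarFlux Upload Service can observe datum that are never posted to SolarNetwork.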
Here is a screen shot of the datum queue settings available in the SolarNode UI:
SolarNode provides a ManagedJobScheduler service that can automatically execute jobs exported by plugins that have user-defined schedules.
The Job Scheduler uses the Task Scheduler
The Job Scheduler service uses the Task Scheduler internally, which means the number of jobs that can execute simultaneously will be limited by its thread pool configuration.
Any plugin simply needs to register a ManagedJob service for the Job Scheduler to automatically schedule and execute the job. The schedule is provided by the getSchedule() method, which can return a cron expression or a plain number representing a millisecond period.
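The two schedule forms can be distinguished with a simple parse, as this sketch shows. The `ScheduleSketch` class and `periodMs()` method names are illustrative, not part of the SolarNode API:

```java
// Sketch of how a schedule setting can be interpreted: a plain number is a
// millisecond period, anything else is treated as a cron expression.
public class ScheduleSketch {

	/** Return the period in milliseconds, or -1 if the schedule is a cron expression. */
	public static long periodMs(String schedule) {
		try {
			return Long.parseLong(schedule.trim());
		} catch ( NumberFormatException e ) {
			return -1; // treat as a cron expression, e.g. "0 * * * * *"
		}
	}
}
```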
The net.solarnetwork.node.job.SimpleManagedJob class implements ManagedJob and can be used in most situations. It delegates the actual work to a net.solarnetwork.node.job.JobService API, discussed in the next section.
Let's imagine you have a com.example.Job class that you would like to allow users to schedule. Your class would implement the JobService interface, and then you would provide a localized messages properties file and configure the service using OSGi Blueprint.
```java
package com.example;

import java.util.Collections;
import java.util.List;
import net.solarnetwork.node.job.JobService;
import net.solarnetwork.node.service.support.BaseIdentifiable;
import net.solarnetwork.settings.SettingSpecifier;

/**
 * My super-duper job.
 */
public class Job extends BaseIdentifiable implements JobService {

	@Override
	public String getSettingUid() {
		return "com.example.job"; // (1)!
	}

	@Override
	public List<SettingSpecifier> getSettingSpecifiers() {
		return Collections.emptyList(); // (2)!
	}

	@Override
	public void executeJobService() throws Exception {
		// do great stuff here!
	}
}
```
The setting UID will be configured in the Blueprint XML as well.
The SimpleManagedJob class we'll configure in Blueprint XML will automatically add a schedule setting to configure the job schedule.
```properties
title = Super-duper Job
desc = This job does it all.

schedule.key = Schedule
schedule.desc = The schedule to execute the job at. \
	Can be either a number representing a frequency in <b>milliseconds</b> \
	or a <a href="{0}">cron expression</a>, for example <code>0 * * * * *</code>.
```
The Placeholder Service API provides components a way to resolve variables in strings, known as placeholders, whose values are managed outside the component itself. For example a datum data source plugin could use the Placeholder Service to support resolving placeholders in a configurable Source ID property.
SolarNode provides a Placeholder Service implementation that resolves both dynamic placeholders from the Settings Database (using the setting namespace placeholder), and static placeholders from a configurable file or directory location.
Call the resolvePlaceholders(s, parameters) method to resolve all placeholders on the String s. The parameters argument can be used to provide additional placeholder values, or you can simply pass null to rely solely on the placeholders available in the service already.
Here is an imaginary class that is constructed with an optional PlaceholderService, and then when the go() method is called uses that to resolve placeholders in the string {building}/temp and return the result:
```java
package com.example;

import net.solarnetwork.node.service.PlaceholderService;
import net.solarnetwork.service.OptionalService;

public class MyComponent {

	private final OptionalService<PlaceholderService> placeholderService;

	public MyComponent(OptionalService<PlaceholderService> placeholderService) {
		super();
		this.placeholderService = placeholderService;
	}

	public String go() {
		return PlaceholderService.resolvePlaceholders(placeholderService,
				"{building}/temp", null);
	}
}
```
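To make the placeholder behavior concrete, here is a self-contained sketch of `{name}`-style token substitution against a parameter map. It is illustrative only and is not the actual `PlaceholderService` implementation; unknown placeholders are left as-is in this sketch:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative stand-in for placeholder resolution: replaces {name}
// tokens with values from a parameter map.
public class PlaceholderSketch {

	private static final Pattern TOKEN = Pattern.compile("\\{(\\w+)\\}");

	public static String resolve(String s, Map<String, String> parameters) {
		Matcher m = TOKEN.matcher(s);
		StringBuilder buf = new StringBuilder();
		while ( m.find() ) {
			String val = parameters.get(m.group(1));
			// leave the placeholder untouched when no value is available
			m.appendReplacement(buf,
					Matcher.quoteReplacement(val != null ? val : m.group()));
		}
		m.appendTail(buf);
		return buf.toString();
	}
}
```

With a `building` parameter of `office`, resolving `{building}/temp` would produce `office/temp`.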
To use the Placeholder Service in your component, add either an Optional Service or explicit reference to your plugin's Blueprint XML file like this (depending on what your plugin requires):
The Placeholder Service supports the following configuration properties in the net.solarnetwork.node.core namespace:
| Property | Default | Description |
| --- | --- | --- |
| placeholders.dir | ${CONF_DIR}/placeholders.d | Path to a single properties file or to a directory of properties files to load as static placeholder parameter values when SolarNode starts up. |

Settings Database
The SolarNode runtime provides a local SQL database that is used to hold application settings, data sampled from devices, or anything really. Some data is designed to live only in this local store (such as settings) while other data eventually gets pushed up into the SolarNet cloud. This document describes the most common aspects of the local database.
The database is provided by either the H2 or Apache Derby embedded SQL database engine.
Note
In SolarNodeOS the solarnode-app-db-h2 and solarnode-app-db-derby packages provide the H2 and Derby database implementations. Most modern SolarNode deployments use H2.
Typically the database is configured to run entirely within RAM on devices that support it, and the RAM copy is periodically synced to non-volatile media so if the device restarts the persisted copy of the database can be loaded back into RAM. This pattern works well because:
Non-volatile media access can be slow (e.g. flash memory)
Non-volatile media can wear out over time from many writes (e.g. flash memory)
Aside from settings, which change infrequently, most data stays locally only a short time before getting pushed into the SolarNet cloud.
A standard JDBC stack is available and normal SQL queries are used to access the database. The Hikari JDBC connection pool provides a javax.sql.DataSource for direct JDBC access. The pool is configured by factory configuration files in the net.solarnetwork.jdbc.pool.hikari namespace. See the net.solarnetwork.jdbc.pool.hikari-solarnode.cfg as an example.
To make use of the DataSource from a plugin using OSGi Blueprint you can declare a reference like this:
To support asynchronous task execution, SolarNode makes several thread-pool based services available to plugins:
A java.util.concurrent.Executor service for standard Runnable task execution
A Spring TaskExecutor service for Runnable task execution
A Spring AsyncTaskExecutor service for both Runnable and Callable task execution
A Spring AsyncListenableTaskExecutor service for both Runnable and Callable task execution that supports the org.springframework.util.concurrent.ListenableFuture API
Need to schedule tasks?
See the Task Scheduler page for information on scheduling simple tasks, or the Job Scheduler page for information on scheduling managed jobs.
To make use of any of these services from a plugin using OSGi Blueprint you can declare a reference to them like this:
Thread pool configuration
This thread pool is configured as a fixed-size pool with the number of threads set to the number of CPU cores detected at runtime, plus one. For example on a Raspberry Pi 4 there are 4 CPU cores so the thread pool would be configured with 5 threads.
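The sizing rule described above can be expressed directly; this sketch (with an illustrative class name) simply derives the pool size from the detected core count:

```java
// The fixed-size pool rule described above: CPU core count plus one.
// On a 4-core Raspberry Pi 4 this yields 5 threads.
public class PoolSizeSketch {

	public static int poolSize() {
		return Runtime.getRuntime().availableProcessors() + 1;
	}
}
```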
The Task Scheduler supports the following configuration properties in the net.solarnetwork.node.core namespace:
| Property | Default | Description |
| --- | --- | --- |
| jobScheduler.poolSize | 10 | The number of threads to maintain in the job scheduler, and thus the maximum number of jobs that can run simultaneously. Must be set to 1 or higher. |
| scheduler.startupDelay | 180 | A delay in seconds after creating the job scheduler to start triggering jobs. This can be useful to give the application time to completely initialize before starting to run jobs. |
For example, to change the thread pool size to 20 and shorten the startup delay to 30 seconds, create an /etc/solarnode/services/net.solarnetwork.node.core.cfg file with the following content:
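```
jobScheduler.poolSize = 20
scheduler.startupDelay = 30
```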
SolarNode provides a way for plugin components to describe their user-configurable properties, called settings, to the platform. SolarNode provides a web-based GUI that makes it easy for users to configure those components using a web browser. For example, here is a screen shot of the SolarNode GUI showing a form for the settings of a Database Backup component:
The mechanism for components to describe themselves in this way is called the Settings API. Classes that wish to participate in this system publish metadata about their configurable properties through the Settings Provider API, and then SolarNode generates a GUI form based on that metadata. Each form field in the previous example image is a Setting Specifier.
The process is similar to the built-in Settings app on iOS: iOS applications can publish configurable property definitions and the Settings app displays a GUI that allows users to modify those properties.
The net.solarnetwork.settings.SettingSpecifierProvider interface defines the way a class can declare itself as a user-configurable component. The main elements of this API are:
```java
public interface SettingSpecifierProvider {

	/**
	 * Get a unique, application-wide setting ID.
	 *
	 * @return unique ID
	 */
	String getSettingUid();

	/**
	 * Get a non-localized display name.
	 *
	 * @return non-localized display name
	 */
	String getDisplayName();

	/**
	 * Get a list of {@link SettingSpecifier} instances.
	 *
	 * @return list of {@link SettingSpecifier}
	 */
	List<SettingSpecifier> getSettingSpecifiers();

}
```
The getSettingUid() method defines a unique ID for the configurable component. By convention the class or package name of the component (or a derivative of it) is used as the ID.
The getSettingSpecifiers() method returns a list of all the configurable properties of the component, as a list of Setting Specifier instances.
```java
private String username;

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>(1);

	// expose a "username" setting with a default value of "admin"
	results.add(new BasicTextFieldSettingSpecifier("username", "admin"));

	return results;
}

// settings are updated at runtime via standard setter methods
public void setUsername(String username) {
	this.username = username;
}
```
Setting values are treated as strings within the Settings API, but the methods associated with settings can accept any primitive or standard number type like int or Integer as well.
BigDecimal setting example
```java
private BigDecimal num;

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>(1);

	results.add(new BasicTextFieldSettingSpecifier("num", null));

	return results;
}

// settings will be coerced from strings into basic types automatically
public void setNum(BigDecimal num) {
	this.num = num;
}
```
Sometimes you might like to expose a simple string setting but internally treat the string as a more complex type. For example a Map could be configured using a simple delimited string like key1 = val1, key2 = val2. For situations like this you can publish a proxy setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
Delimited string to Map setting example
```java
private Map<String, String> map;

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>(1);

	// expose a "mapping" proxy setting for the map field
	results.add(new BasicTextFieldSettingSpecifier("mapping", null));

	return results;
}

public void setMapping(String mapping) {
	this.map = StringUtils.commaDelimitedStringToMap(mapping);
}
```
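To show what such a proxy-setting decoder does, here is a self-contained sketch of parsing a `key1 = val1, key2 = val2` style string into a Map. It is similar in spirit to SolarNetwork's `StringUtils.commaDelimitedStringToMap()` but is not the actual implementation, and the class name is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative decoder for a "key1 = val1, key2 = val2" proxy setting.
public class DelimitedMapSketch {

	public static Map<String, String> decode(String mapping) {
		Map<String, String> result = new LinkedHashMap<>();
		if ( mapping == null || mapping.trim().isEmpty() ) {
			return result;
		}
		for ( String pair : mapping.split(",") ) {
			String[] kv = pair.split("=", 2);
			if ( kv.length == 2 ) {
				result.put(kv[0].trim(), kv[1].trim());
			}
		}
		return result;
	}
}
```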
The net.solarnetwork.node.settings.SettingResourceHandler API defines a way for a component to import and export files uploaded to SolarNode from external sources.
A component could support importing a file using the File setting. This could be used to configure the component from a file in a format like CSV, JSON, or XML. Similarly, a component could support exporting a file, generating a configuration file in another format like CSV, JSON, or XML from its current settings. For example, the Modbus Device Datum Source does exactly these things: importing and exporting a custom CSV file to make configuring the component easier.
The main part of the SettingResourceHandler API for importing files looks like this:
```java
public interface SettingResourceHandler {

	/**
	 * Get a unique, application-wide setting ID.
	 *
	 * <p>
	 * This ID must be unique across all setting resource handlers registered
	 * within the system. Generally the implementation will also be a
	 * {@link net.solarnetwork.settings.SettingSpecifierProvider} for the same
	 * ID.
	 * </p>
	 *
	 * @return unique ID
	 */
	String getSettingUid();

	/**
	 * Apply settings for a specific key from a resource.
	 *
	 * @param settingKey
	 *        the setting key, generally a
	 *        {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}
	 *        value
	 * @param resources
	 *        the resources with the settings to apply
	 * @return any setting values that should be persisted as a result of
	 *         applying the given resources (never {@literal null})
	 * @throws IOException
	 *         if any IO error occurs
	 */
	SettingsUpdates applySettingResources(String settingKey, Iterable<Resource> resources)
			throws IOException;

}
```
The getSettingUid() method overlaps with the Settings Provider API, and as the comments note it is typical for a Settings Provider that publishes settings like File or Text Area to also implement SettingResourceHandler.
The settingKey passed to the applySettingResources() method identifies the resource(s) being uploaded, as a single Setting Resource Handler might support multiple resources. For example a Settings Provider might publish multiple File settings, or File and Text Area settings. The settingKey is used to differentiate between each one.
Imagine a component that publishes a File setting. A typical implementation of that component would look like this (this example omits some methods for brevity):
```java
public class MyComponent implements SettingSpecifierProvider,
		SettingResourceHandler {

	private static final Logger log
			= LoggerFactory.getLogger(MyComponent.class);

	/** The resource key to identify the File setting resource. */
	public static final String RESOURCE_KEY_DOCUMENT = "document";

	@Override
	public String getSettingUid() {
		return "com.example.mycomponent";
	}

	@Override
	public List<SettingSpecifier> getSettingSpecifiers() {
		List<SettingSpecifier> results = new ArrayList<>();

		// publish a File setting tied to the RESOURCE_KEY_DOCUMENT key,
		// allowing only text files to be accepted
		results.add(new BasicFileSettingSpecifier(RESOURCE_KEY_DOCUMENT, null,
				new LinkedHashSet<>(asList(".txt", "text/*")), false));

		return results;
	}

	@Override
	public SettingsUpdates applySettingResources(String settingKey,
			Iterable<Resource> resources) throws IOException {
		if ( resources == null ) {
			return null;
		}
		if ( RESOURCE_KEY_DOCUMENT.equals(settingKey) ) {
			for ( Resource r : resources ) {
				// here we would do something useful with the resource... like
				// read into a string and log it
				String s = FileCopyUtils.copyToString(new InputStreamReader(
						r.getInputStream(), StandardCharsets.UTF_8));

				log.info("Got {} resource content: {}", settingKey, s);

				break; // only accept one file
			}
		}
		return null;
	}

}
```
The part of the Setting Resource Handler API that supports exporting setting resources looks like this:
```java
/**
 * Get a list of supported setting keys for the
 * {@link #currentSettingResources(String)} method.
 *
 * @return the set of supported keys
 */
default Collection<String> supportedCurrentResourceSettingKeys() {
	return Collections.emptyList();
}

/**
 * Get the current setting resources for a specific key.
 *
 * @param settingKey
 *        the setting key, generally a
 *        {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}
 *        value
 * @return the resources, never {@literal null}
 */
Iterable<Resource> currentSettingResources(String settingKey);
```
The supportedCurrentResourceSettingKeys() method returns a set of resource keys the component supports for exporting. The currentSettingResources() method returns the resources to export for a given key.
The SolarNode GUI shows a form menu with all the available resources for all components that support the SettingResourceHandler API, and lets the user download them:
The net.solarnetwork.settings.SettingSpecifier API defines metadata for a single configurable property in the Settings API. The API looks like this:
```java
public interface SettingSpecifier {

	/**
	 * A unique identifier for the type of setting specifier this represents.
	 *
	 * <p>
	 * Generally this will be a fully-qualified interface name.
	 * </p>
	 *
	 * @return the type
	 */
	String getType();

	/**
	 * Localizable text to display with the setting's content.
	 *
	 * @return the title
	 */
	String getTitle();

}
```
This interface is very simple, and extended by more specialized interfaces that form more useful setting types.
Note
A SettingSpecifier instance is often referred to simply as a setting.
Here is a view of the class hierarchy that builds off of this interface:
Note
The SettingSpecifier API defines metadata about a configurable property, but not methods to view or change that property's value. The Settings Service provides methods for managing setting values.
The TextFieldSettingSpecifier defines a simple string-based configurable property and is the most common setting type. The setting defines a key that maps to a setter method on its associated component class. In the SolarNode GUI a text field is rendered as an HTML form text input, like this:
The net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier class provides the standard implementation of this API. A standard text field setting is created like this:
```java
new BasicTextFieldSettingSpecifier("myProperty", "DEFAULT_VALUE");

// or without any default value
new BasicTextFieldSettingSpecifier("myProperty", null);
```
Tip
Setting values are generally treated as strings within the Settings API, however other basic data types such as integers and numbers can be used as well. You can also publish a \"proxy\" setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
For example a Map<String, String> setting could be published as a text field setting that en/decodes the Map into a delimited string value, for example name=Test, color=red.
Secure Text Field
The BasicTextFieldSettingSpecifier can also be used for \"secure\" text fields where the field's content is obscured from view. In the SolarNode GUI a secure text field is rendered as an HTML password form input like this:
A standard secure text field setting is created by passing a third true argument, like this:
```java
new BasicTextFieldSettingSpecifier("myProperty", "DEFAULT_VALUE", true);

// or without any default value
new BasicTextFieldSettingSpecifier("myProperty", null, true);
```
The TitleSettingSpecifier defines a simple read-only string-based configurable property. The setting defines a key that maps to a setter method on its associated component class. In the SolarNode GUI the default value is rendered as plain text, like this:
The net.solarnetwork.settings.support.BasicTitleSettingSpecifier class provides the standard implementation of this API. A standard title setting is created like this:
```java
new BasicTitleSettingSpecifier("status", "Status is good.", true);
```
The TitleSettingSpecifier supports HTML markup. In the SolarNode GUI the default value is rendered directly into HTML, like this:
```java
// pass `true` as the 4th argument to enable HTML markup in the status value
new BasicTitleSettingSpecifier("status", "Status is <b>good</b>.", true, true);
```
The TextAreaSettingSpecifier defines a simple string-based configurable property for a larger text value, loaded as an external file using the SettingResourceHandler API. In the SolarNode GUI a text area is rendered as an HTML form text area with an associated button to upload the content, like this:
The net.solarnetwork.settings.support.BasicTextAreaSettingSpecifier class provides the standard implementation of this API. A standard text area setting is created like this:
```java
new BasicTextAreaSettingSpecifier("myProperty", "DEFAULT_VALUE");

// or without any default value
new BasicTextAreaSettingSpecifier("myProperty", null);
```
Direct Text Area
The BasicTextAreaSettingSpecifier can also be used for \"direct\" text areas where the field's content is not uploaded as an external file. In the SolarNode GUI a direct text area is rendered as an HTML form text area, like this:
A standard direct text area setting is created by passing a third true argument, like this:
```java
new BasicTextAreaSettingSpecifier("myProperty", "DEFAULT_VALUE", true);

// or without any default value
new BasicTextAreaSettingSpecifier("myProperty", null, true);
```
The ToggleSettingSpecifier defines a boolean configurable property. In the SolarNode GUI a toggle setting is rendered as an HTML form button, like this:
The net.solarnetwork.settings.support.BasicToggleSettingSpecifier class provides the standard implementation of this API. A standard toggle setting is created like this:
The SliderSettingSpecifier defines a number-based configuration property with minimum and maximum values enforced, and a step limit. In the SolarNode GUI a slider is rendered as an HTML widget, like this:
The net.solarnetwork.settings.support.BasicSliderSettingSpecifier class provides the standard implementation of this API. A standard Slider setting is created like this:
```java
// no default value, range between 0-11 in 0.5 increments
new BasicSliderSettingSpecifier("volume", null, 0.0, 11.0, 0.5);

// default value 5.0, range between 0-11 in 0.5 increments
new BasicSliderSettingSpecifier("volume", 5.0, 0.0, 11.0, 0.5);
```
The RadioGroupSettingSpecifier defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode GUI a radio group is rendered as a set of HTML radio input form fields, like this:
The net.solarnetwork.settings.support.BasicRadioGroupSettingSpecifier class provides the standard implementation of this API. A standard RadioGroup setting is created like this:
```java
String[] vals = new String[] { "a", "b", "c" };
String[] labels = new String[] { "One", "Two", "Three" };
Map<String, String> radioValues = new LinkedHashMap<>(3);
for ( int i = 0; i < vals.length; i++ ) {
	radioValues.put(vals[i], labels[i]);
}
BasicRadioGroupSettingSpecifier radio =
		new BasicRadioGroupSettingSpecifier("option", vals[0]);
radio.setValueTitles(radioValues);
```
The MultiValueSettingSpecifier defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode GUI a multi-value setting is rendered as an HTML select form field, like this:
The net.solarnetwork.settings.support.BasicMultiValueSettingSpecifier class provides the standard implementation of this API. A standard MultiValue setting is created like this:
```java
String[] vals = new String[] { "a", "b", "c" };
String[] labels = new String[] { "Option 1", "Option 2", "Option 3" };
Map<String, String> menuValues = new LinkedHashMap<>(3);
for ( int i = 0; i < vals.length; i++ ) {
	menuValues.put(vals[i], labels[i]);
}
BasicMultiValueSettingSpecifier menu = new BasicMultiValueSettingSpecifier("option",
		vals[0]);
menu.setValueTitles(menuValues);
```
The FileSettingSpecifier defines a file-based resource property, loaded as an external file using the SettingResourceHandler API. In the SolarNode GUI a file setting is rendered as an HTML file input, like this:
The net.solarnetwork.node.settings.support.BasicFileSettingSpecifier class provides the standard implementation of this API. A standard file setting is created like this:
```java
// a single file only, no default content
new BasicFileSettingSpecifier("document", null,
		new LinkedHashSet<>(Arrays.asList(".txt", "text/*")), false);

// multiple files allowed, no default content
new BasicFileSettingSpecifier("document-list", null,
		new LinkedHashSet<>(Arrays.asList(".txt", "text/*")), true);
```
A Dynamic List setting allows the user to manage a list of homogeneous items, adding or subtracting items as desired. The items can be literals like strings, or arbitrary objects that define their own settings. In the SolarNode GUI a dynamic list setting is rendered as a pair of HTML buttons to remove and add items, like this:
A Dynamic List is often backed by a Java Collection or array in the associated component. In addition a special size-adjusting accessor method is required, named after the setter method with Count appended. SolarNode will use this accessor to request a specific size for the dynamic list.
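An array-backed variant of the size-adjusting accessor pattern can be sketched like this. The `NamesComponent` class name is illustrative; the point is the `setNamesCount()` method SolarNode calls to resize the list:

```java
import java.util.Arrays;

// Sketch of array-backed dynamic list accessors. SolarNode calls the
// Count setter to grow or shrink the list to the size the user chose.
public class NamesComponent {

	private String[] names = new String[0];

	public String[] getNames() {
		return names;
	}

	public void setNames(String[] names) {
		this.names = (names != null ? names : new String[0]);
	}

	public int getNamesCount() {
		return names.length;
	}

	public void setNamesCount(int count) {
		if ( count < 0 ) {
			count = 0;
		}
		names = Arrays.copyOf(names, count); // preserves existing values
	}
}
```

A List-backed component follows the same shape, resizing the collection by adding or removing elements in the Count setter.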
The SettingUtils.dynamicListSettingSpecifier() method simplifies the creation of a GroupSettingSpecifier that represents a dynamic list (the examples in the following sections demonstrate this).
A simple Dynamic List is a dynamic list of string or number values.
```java
private String[] names = new String[0];

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>();

	// turn a list of strings into a Group of TextField settings
	GroupSettingSpecifier namesList = SettingUtils.dynamicListSettingSpecifier(
			"names", asList(names), (String value, int index, String key) ->
					singletonList(new BasicTextFieldSettingSpecifier(key, null)));
	results.add(namesList);

	return results;
}
```
A complex Dynamic List is a dynamic list of arbitrary object values. The main difference in terms of the necessary settings structure required, compared to a Simple Dynamic List, is that a group-of-groups is used.
Complex data class:

```java
public class Person {

	private String firstName;
	private String lastName;

	// generate list of settings for a Person, nested under some prefix
	public List<SettingSpecifier> settings(String prefix) {
		List<SettingSpecifier> results = new ArrayList<>(2);
		results.add(new BasicTextFieldSettingSpecifier(prefix + "firstName", null));
		results.add(new BasicTextFieldSettingSpecifier(prefix + "lastName", null));
		return results;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}
}
```

Dynamic List setting:

```java
private Person[] people = new Person[0];

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>();

	// turn a list of People into a Group of Group settings
	GroupSettingSpecifier peopleList = SettingUtils.dynamicListSettingSpecifier(
			"people", asList(people), (Person value, int index, String key) ->
					singletonList(new BasicGroupSettingSpecifier(
							value.settings(key + ".")))); 
	results.add(peopleList);

	return results;
}
```
Some SolarNode components can be configured from properties files. This type of configuration is meant to be changed just once, when a SolarNode is first deployed, to alter some default configuration value.
Not to be confused with Settings
This type of configuration differs from the Settings managed through the Setup App's Settings page. Configuration properties might be created by system administrators when building a custom SolarNodeOS image for their needs, while Settings are meant to be managed by end users.
Configuration properties files are read from the /etc/solarnode/services directory and named like NAMESPACE.cfg, where NAMESPACE represents a configuration namespace.
Configuration location
The /etc/solarnode/services location is the default location in SolarNodeOS. It might be another location in other SolarNode deployments.
Imagine a component uses the configuration namespace com.example.service and supports a configurable property named max-threads that accepts an integer value you would like to configure as 4. You would create a com.example.service.cfg file like:
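```
max-threads = 4
```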
In SolarNetwork a datum is the fundamental time-stamped data structure collected by SolarNodes and stored in SolarNet. It is a collection of properties associated with a specific information source at a specific time.
Example plain language description of a datum
the temperature and humidity collected from my weather station at 1 Jan 2023 11:00 UTC
In this example datum description, we have all the components of a datum:
| Datum component | Description |
| --- | --- |
| node | the (implied) node that collected the data |
| properties | temperature and humidity |
| source | my weather station |
| time | 1 Jan 2023 11:00 UTC |
A datum stream is the collection of datum from a single node for a single source over time.
A datum object is modeled as a flexible structure with the following core elements:
| Element | Type | Description |
| --- | --- | --- |
| nodeId | number | A unique ID assigned to nodes by SolarNetwork. |
| sourceId | string | A node-unique identifier that defines a single stream of data from a specific source, up to 64 characters long. Certain characters are not allowed, see below. |
| created | date | A time stamp of when the datum was collected, or the date the datum is associated with. |
| samples | datum samples | The collected properties. |
A datum is uniquely identified by the three combined properties (nodeId, sourceId, created).
Source IDs are user-defined strings used to distinguish between different information sources within a single node. For example, a node might collect data from an energy meter on source ID Meter and a solar inverter on Solar. SolarNetwork does not place any restrictions on source ID values, other than a 64-character limit. However, there are some conventions used within SolarNetwork that are useful to follow, especially for larger deployments of nodes with many source IDs:
Keep IDs short: for example Meter1 is better than Schneider ION6200 Meter - Main Building.
Use a path-like structure to encode a logical hierarchy, in least specific to most specific order. For example /S1/B1/M1 could imply the first meter in the first building on the first site.
The + and # characters should not be used. This is actually a constraint in the MQTT protocol used in parts of SolarNetwork, where the MQTT topic name includes the source ID. These characters are MQTT topic filter wildcards, and cannot be used in topic names.
Avoid using wildcard special characters.
The path-like structure becomes useful in places where wildcard patterns are used, like security policies or datum queries. It is generally worthwhile spending some time planning on a source ID taxonomy to use when starting a new project with SolarNetwork.
The properties included in a datum object are known as datum samples. The samples are modeled as a collection of named properties, for example the temperature and humidity properties in the earlier example datum could be represented like this:
Example representation of datum samples from a weather station source
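The example itself might look something like this (the property names come from the earlier description; the values are hypothetical):

```json
{
  "temperature" : 21.5,
  "humidity"    : 48
}
```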
The datum samples are actually further organized into three classifications:
| Classification | Key | Description |
|---|---|---|
| instantaneous | `i` | a single reading, observation, or measurement that does not accumulate over time |
| accumulating | `a` | a reading that accumulates over time, like a meter or odometer |
| status | `s` | non-numeric data, like status codes or error messages |
These classifications help SolarNetwork understand how to aggregate the datum samples over time. When SolarNode uploads a datum to SolarNetwork, the sample will include the classification of each property. The previous example would thus more accurately be represented like this:
Example representation of datum samples with classifications
watts is an instantaneous measurement of power that does not accumulate
wattHours is an accumulating measurement of the accrual of energy over time
mode is a status message that is not a number
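Putting the classification keys from the table together with these properties, the classified sample object would look something like this (the values are hypothetical):

```json
{
  "i" : { "watts" : 1008.0 },
  "a" : { "wattHours" : 2087 },
  "s" : { "mode" : "auto" }
}
```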
Note
Sometimes these classifications will be hidden from you. For example SolarNetwork hides them when returning datum data from some SolarNetwork API methods. You might come across them in some SolarNode plugins that allow configuring dynamic sample properties to collect, when SolarNode does not implicitly know which classification to use. Some SolarNetwork APIs do return or require fully classified sample objects; the documentation for those services will make that clear.
## Expressions

Many SolarNode components support a general "expressions" framework that can be used to calculate values using a scripting language. SolarNode comes with the Spel scripting language by default, so this guide describes that language.
A common use case for expressions is to derive datum property values out of the raw property values captured from a device. In the SolarNode Setup App a typical datum data source component might present a configurable list of expression settings like this:
In this example, each time the data source captures a datum from the device it is communicating with it will add a new watts property by multiplying the captured amps and volts property values. In essence the expression is like this code:
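In essence the expression is like the following code (a Python sketch; the `amps` and `volts` sample values are illustrative):

```python
# A sketch of what the expression "amps * volts" computes each time
# a datum is captured from the device.
datum = {"amps": 4.2, "volts": 240.0}
datum["watts"] = datum["amps"] * datum["volts"]
print(datum["watts"])  # ≈ 1008.0
```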
Many SolarNode expressions are evaluated in the context of a datum, typically one captured from a device SolarNode is collecting data from. In this context, the expression supports accessing datum properties directly as expression variables, and some helpful functions are provided.
All datum properties with simple names can be referred to directly as variables. Here simple just means a name that is also a legal variable name. The property classifications do not matter in this context: the expression will look for properties in all classifications.
A datum expression will also provide the following variables:
| Property | Type | Description |
|---|---|---|
| `datum` | `Datum` | A `Datum` object, in case you need direct access to the functions provided there. |
| `meta` | `DatumMetadataOperations` | Get datum metadata for the current source ID. |
| `parameters` | `Map<String,Object>` | Simple map-based access to all parameters passed to the expression. The available parameters depend on the context of the expression evaluation, but often include things like placeholder values or parameters generated by previously evaluated expressions. These values are also available directly as variables. |
| `props` | `Map<String,Object>` | Simple map-based access to all properties in the datum. As datum properties are also available directly as variables, this is rarely needed, but can be helpful for accessing dynamically calculated property names or properties with names that are not legal variable names. |
| `sourceId` | `String` | The source ID of the current datum. |

### Functions
Some functions are provided to help with datum-related expressions.
The following functions help with bitwise integer manipulation operations:
| Function | Arguments | Result | Description |
|---|---|---|---|
| `and(n1,n2)` | `Number`, `Number` | `Number` | Bitwise and, i.e. `(n1 & n2)` |
| `andNot(n1,n2)` | `Number`, `Number` | `Number` | Bitwise and-not, i.e. `(n1 & ~n2)` |
| `narrow(n,s)` | `Number`, `Number` | `Number` | Return `n` as a reduced-size but equivalent number of a minimum power-of-two byte size `s` |
| `narrow8(n)` | `Number` | `Number` | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 8 bits |
| `narrow16(n)` | `Number` | `Number` | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 16 bits |
| `narrow32(n)` | `Number` | `Number` | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 32 bits |
| `narrow64(n)` | `Number` | `Number` | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 64 bits |
| `not(n)` | `Number` | `Number` | Bitwise not, i.e. `(~n)` |
| `or(n1,n2)` | `Number`, `Number` | `Number` | Bitwise or, i.e. `(n1 \| n2)` |
| `shiftLeft(n,c)` | `Number`, `Number` | `Number` | Bitwise shift left, i.e. `(n << c)` |
| `shiftRight(n,c)` | `Number`, `Number` | `Number` | Bitwise shift right, i.e. `(n >> c)` |
| `testBit(n,i)` | `Number`, `Number` | `boolean` | Test if bit `i` is set in integer `n`, i.e. `((n & (1 << i)) != 0)` |
| `xor(n1,n2)` | `Number`, `Number` | `Number` | Bitwise xor, i.e. `(n1 ^ n2)` |
Tip
All number arguments will be converted to BigInteger values for the bitwise operations, and BigInteger values are returned.
The following functions deal with datum streams. The `latest()` and `offset()` functions give you access to recently captured datum from any SolarNode source, so you can refer to any datum stream being generated in SolarNode. They return another datum expression root object, which means you have access to all the variables and functions documented on this page on them as well.

| Function | Arguments | Result | Description |
|---|---|---|---|
| `hasLatest(source)` | `String` | `boolean` | Returns `true` if a datum with source ID `source` is available via the `latest(source)` function. |
| `hasLatestMatching(pattern)` | `String` | `boolean` | Returns `true` if `latestMatching(pattern)` returns a non-empty collection. |
| `hasLatestOtherMatching(pattern)` | `String` | `boolean` | Returns `true` if `latestOthersMatching(pattern)` returns a non-empty collection. |
| `hasMeta()` | | `boolean` | Returns `true` if metadata for the current source ID is available. |
| `hasMeta(source)` | `String` | `boolean` | Returns `true` if `meta(source)` would return a non-null value. |
| `hasOffset(offset)` | `int` | `boolean` | Returns `true` if a datum is available via the `offset(offset)` function. |
| `hasOffset(source,offset)` | `String`, `int` | `boolean` | Returns `true` if a datum with source ID `source` is available via the `offset(source,offset)` function. |
| `latest(source)` | `String` | `DatumExpressionRoot` | Provides access to the latest available datum matching the given source ID, or `null` if not available. This is a shortcut for calling `offset(source,0)`. |
| `latestMatching(pattern)` | `String` | `Collection<DatumExpressionRoot>` | Return a collection of the latest available datum matching a given source ID wildcard pattern. |
| `latestOthersMatching(pattern)` | `String` | `Collection<DatumExpressionRoot>` | Return a collection of the latest available datum matching a given source ID wildcard pattern, excluding the current datum if its source ID happens to match the pattern. |
| `meta(source)` | `String` | `DatumMetadataOperations` | Get datum metadata for a specific source ID. |
| `metaMatching(pattern)` | `String` | `Collection<DatumMetadataOperations>` | Find datum metadata for sources matching a given source ID wildcard pattern. |
| `offset(offset)` | `int` | `DatumExpressionRoot` | Provides access to a datum from the same stream as the current datum, offset by `offset` in time, or `null` if not available. Offset 1 means the datum just before this datum, and so on. |
| `offset(source,offset)` | `String`, `int` | `DatumExpressionRoot` | Provides access to an offset from the latest available datum matching the given source ID, or `null` if not available. Offset 0 represents the "latest" datum, 1 the one before that, and so on. SolarNode only maintains a limited history for each source, so do not rely on more than a few datum being available via this method. This history is also cleared when SolarNode restarts. |
| `selfAndLatestMatching(pattern)` | `String` | `Collection<DatumExpressionRoot>` | Return a collection of the latest available datum matching a given source ID wildcard pattern, including the current datum. The current datum will always be the first datum returned. |

### Math functions
Expressions support basic math operators like + for addition and * for multiplication. The following functions help with other math operations:
| Function | Arguments | Result | Description |
|---|---|---|---|
| `avg(collection)` | `Collection<Number>` | `Number` | Calculate the average (mean) of a collection of numbers. Useful when combined with the `group(pattern)` function. |
| `ceil(n)` | `Number` | `Number` | Round a number larger, to the nearest integer. |
| `ceil(n,significance)` | `Number`, `Number` | `Number` | Round a number larger, to the nearest integer multiple of `significance`. |
| `down(n)` | `Number` | `Number` | Round numbers towards zero, to the nearest integer. |
| `down(n,significance)` | `Number`, `Number` | `Number` | Round numbers towards zero, to the nearest integer multiple of `significance`. |
| `floor(n)` | `Number` | `Number` | Round a number smaller, to the nearest integer. |
| `floor(n,significance)` | `Number`, `Number` | `Number` | Round a number smaller, to the nearest integer multiple of `significance`. |
| `max(collection)` | `Collection<Number>` | `Number` | Return the largest value from a set of numbers. |
| `max(n1,n2)` | `Number`, `Number` | `Number` | Return the larger of two numbers. |
| `min(collection)` | `Collection<Number>` | `Number` | Return the smallest value from a set of numbers. |
| `min(n1,n2)` | `Number`, `Number` | `Number` | Return the smaller of two numbers. |
| `mround(n,significance)` | `Number`, `Number` | `Number` | Round a number to the nearest integer multiple of `significance`. |
| `round(n)` | `Number` | `Number` | Round a number to the nearest integer. |
| `round(n,digits)` | `Number`, `Number` | `Number` | Round a number to the nearest number with `digits` decimal digits. |
| `roundDown(n,digits)` | `Number`, `Number` | `Number` | Round a number towards zero to the nearest number with `digits` decimal digits. |
| `roundUp(n,digits)` | `Number`, `Number` | `Number` | Round a number away from zero to the nearest number with `digits` decimal digits. |
| `sum(collection)` | `Collection<Number>` | `Number` | Calculate the sum of a collection of numbers. Useful when combined with the `group(pattern)` function. |
| `up(n)` | `Number` | `Number` | Round numbers away from zero, to the nearest integer. |
| `up(n,significance)` | `Number`, `Number` | `Number` | Round numbers away from zero, to the nearest integer multiple of `significance`. |
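The `mround(n,significance)` and `up(n)` behavior, for example, can be sketched like this (assumed semantics; the real functions operate on `BigDecimal` values):

```python
import math

def mround(n: float, significance: float) -> float:
    # round to the nearest integer multiple of significance
    return round(n / significance) * significance

def up(n: float) -> int:
    # round away from zero to the nearest integer
    return math.ceil(n) if n >= 0 else math.floor(n)

print(mround(7.3, 0.5))   # 7.5
print(up(2.1), up(-2.1))  # 3 -3
```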
### Node metadata functions

All the Datum Metadata functions like `metadataAtPath(path)` can be invoked directly, operating on the node's own metadata instead of a datum stream's metadata.
The following functions deal with general SolarNode operations:
| Function | Arguments | Result | Description |
|---|---|---|---|
| `isOpMode(mode)` | `String` | `boolean` | Returns `true` if the `mode` operational mode is active. |

### Property functions
The following functions help with expression properties (variables):
| Function | Arguments | Result | Description |
|---|---|---|---|
| `has(name)` | `String` | `boolean` | Returns `true` if a property named `name` is defined. Can be used to prevent expression errors on datum property variables that are missing. |
| `group(pattern)` | `String` | `Collection<Number>` | Creates a collection out of numbered properties whose name matches the given regular expression `pattern`. |
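A sketch of how `group(pattern)` can feed the aggregate functions (assumed semantics; whether the pattern must match the whole name is an assumption here):

```python
import re

def group(props: dict, pattern: str) -> list:
    # collect numeric property values whose name matches the pattern
    regex = re.compile(pattern)
    return [v for k, v in props.items()
            if regex.fullmatch(k) and isinstance(v, (int, float))]

props = {"amps1": 1.0, "amps2": 2.5, "state": "Ok"}
print(sum(group(props, r"amps\d+")))  # 3.5
```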
### Expression examples

Let's assume a captured datum like this, expressed as JSON:
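Based on the example results that follow, the captured datum would look something like this (a reconstruction):

```json
{
  "i" : { "amps" : 4.2, "volts" : 240.0 },
  "s" : { "state" : "Ok" }
}
```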
Then here are some example Spel expressions and the results they would produce:
| Expression | Result | Comment |
|---|---|---|
| `state` | `Ok` | Returns the `state` status property directly. |
| `datum.s['state']` | `Ok` | Returns the `state` status property explicitly. |
| `props['state']` | `Ok` | Same result as `datum.s['state']` but using the short-cut `props` accessor. |
| `amps * volts` | `1008.0` | Returns the result of multiplying the `amps` and `volts` properties together: 4.2 × 240.0 = 1008.0. |

### Datum stream history
Building on the previous example datum, let's assume an earlier datum for the same source ID had been collected with these properties (the classifications have been omitted for brevity):
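The earlier datum would have looked something like this (the `amps` value comes from the results below; the `volts` value is assumed):

```json
{ "amps" : 3.1, "volts" : 240.0 }
```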
Then here are some example expressions and the results they would produce given the original datum example:
| Expression | Result | Comment |
|---|---|---|
| `hasOffset(1)` | `true` | Returns `true` because of the earlier datum that is available. |
| `hasOffset(2)` | `false` | Returns `false` because only one earlier datum is available. |
| `amps - offset(1).amps` | `1.1` | Computes the difference between the current and previous `amps` properties: 4.2 − 3.1 = 1.1. |

### Other datum stream history
Other datum stream histories collected by SolarNode can also be accessed via the offset(source,offset) function. Let's assume SolarNode is collecting a datum stream for the source ID solar, and had amassed the following history, in newest-to-oldest order:
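Such a history might look like this (the latest entry's values appear in the results below; the older entry is hypothetical):

```json
[
  { "amps" : 6.0, "volts" : 240.0 },
  { "amps" : 5.5, "volts" : 240.0 }
]
```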
Then here are some example expressions and the results they would produce given the original datum example:
| Expression | Result | Comment |
|---|---|---|
| `hasLatest('solar')` | `true` | Returns `true` because a datum for source `solar` is available. |
| `hasOffset('solar',2)` | `false` | Returns `false` because only one earlier datum from the latest with source `solar` is available. |
| `(latest('solar').amps * latest('solar').volts) - (amps * volts)` | `432.0` | Computes the difference in power between the latest `solar` datum and the current datum: (6.0 × 240.0) − (4.2 × 240.0) = 432.0. |
If we add another datum stream for the source ID solar1 like this:
[\n{\"amps\" : 1.0, \"volts\" : 240.0 }\n]\n
If we also add another datum stream for the source ID solar2 like this:
[\n{\"amps\" : 3.0, \"volts\" : 240.0 }\n]\n
Then here are some example expressions and the results they would produce given the previous datum examples:
| Expression | Result | Comment |
|---|---|---|
| `sum(latestMatching('solar*').?[amps>1].![amps * volts])` | `2160` | Returns the sum of the power of the latest `solar` and `solar2` datum. The `solar1` power is omitted because its `amps` property is not greater than 1, so we end up with (6 × 240) + (3 × 240) = 2160. |
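The Spel selection (`.?[...]`) and projection (`.![...]`) operators in that last example behave roughly like this Python sketch, using the same example datum values:

```python
# Latest datum from the example streams: solar, solar1, solar2
latest = [
    {"amps": 6.0, "volts": 240.0},  # solar
    {"amps": 1.0, "volts": 240.0},  # solar1
    {"amps": 3.0, "volts": 240.0},  # solar2
]
# .?[amps>1] selects datum with amps > 1; .![amps * volts] projects power
total = sum(d["amps"] * d["volts"] for d in latest if d["amps"] > 1)
print(total)  # 2160.0
```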
### Datum metadata

Some functions return `DatumMetadataOperations` objects. These objects provide metadata for things like a specific source ID on SolarNode.
The properties available on datum metadata objects are:
| Property | Type | Description |
|---|---|---|
| `empty` | `boolean` | Is `true` if the metadata does not contain any values. |
| `info` | `Map<String,Object>` | Simple map-based access to the general metadata (e.g. the keys of the `m` metadata map). |
| `infoKeys` | `Set<String>` | The set of general metadata keys available (e.g. the keys of the `m` metadata map). |
| `propertyInfoKeys` | `Set<String>` | The set of property metadata keys available (e.g. the keys of the `pm` metadata map). |
| `tags` | `Set<String>` | A set of tags associated with the metadata. |

### Datum metadata general info functions
The following functions available on datum metadata objects support access to the general metadata (e.g. the m metadata map):
| Function | Arguments | Result | Description |
|---|---|---|---|
| `getInfo(key)` | `String` | `Object` | Get the general metadata value for a specific key. |
| `getInfoNumber(key)` | `String` | `Number` | Get a general metadata value for a specific key as a `Number`. Other more specific number value functions are also available, such as `getInfoInteger(key)` or `getInfoBigDecimal(key)`. |
| `getInfoString(key)` | `String` | `String` | Get a general metadata value for a specific key as a `String`. |
| `hasInfo(key)` | `String` | `boolean` | Returns `true` if a non-null general metadata value exists for the given key. |

### Datum metadata property info functions
The following functions available on datum metadata objects support access to the property metadata (e.g. the pm metadata map):
| Function | Arguments | Result | Description |
|---|---|---|---|
| `getPropertyInfo(prop)` | `String` | `Map<String,Object>` | Get the property metadata for a specific property. |
| `getInfoNumber(prop,key)` | `String`, `String` | `Number` | Get a property metadata value for a specific property and key as a `Number`. Other more specific number value functions are also available, such as `getInfoInteger(prop,key)` or `getInfoBigDecimal(prop,key)`. |
| `getInfoString(prop,key)` | `String`, `String` | `String` | Get a property metadata value for a specific property and key as a `String`. |
| `hasInfo(prop,key)` | `String`, `String` | `boolean` | Returns `true` if a non-null property metadata value exists for the given property and key. |

### Datum metadata global functions
The following functions available on datum metadata objects support access to both general and property metadata:
| Function | Arguments | Result | Description |
|---|---|---|---|
| `differsFrom(metadata)` | `DatumMetadataOperations` | `boolean` | Returns `true` if the given metadata has any different values than the receiver. |
| `hasTag(tag)` | `String` | `boolean` | Returns `true` if the given tag is available. |
| `metadataAtPath(path)` | `String` | `Object` | Get the metadata value at a metadata key path. |
| `hasMetadataAtPath(path)` | `String` | `boolean` | Returns `true` if `metadataAtPath(path)` would return a non-null value. |

## Getting Started
This section describes how to get SolarNode running on a device. You will need to configure your device as a SolarNode and associate your SolarNode with SolarNetwork.
Tip
You might find it helpful to read through this entire guide before jumping in. There are screen shots and tips provided to help you along the way.
### Get your device ready to use
SolarNode can run on a variety of devices. To get started using SolarNode, you must download the appropriate SolarNodeOS image for your device. SolarNodeOS is a complete operating system tailor made for SolarNode. Choose the SolarNodeOS image for the device you want to run SolarNode on and then copy that image to your device media (typically an SD card).
### Choose your device
The Raspberry Pi is the best supported option for general SolarNode deployments. Models 3 or later, Compute Module 3 or later, and Zero 2 W or later are supported. Use a tool like Etcher or Raspberry Pi Imager to copy the image to an SD card (minimum size is 2 GB, 4 GB recommended).
Download SolarNodeOS for Raspberry Pi
The Orange Pi models Zero and Zero Plus are supported. Use a tool like Etcher to copy the image to an SD card (minimum size is 1 GB, 4 GB recommended).
Download SolarNodeOS for Orange Pi
Looking for SolarNodeOS for a device not listed here? Reach out to us through email or Slack to see if we can help!
### Configure your network
SolarNode needs a network connection. If your device has an ethernet port, that is the most reliable way to get started: just plug in your ethernet cable and off you go!
If you want to use WiFi, or would like more detailed information about SolarNode's networking options, see the Networking sections.
### Power it on
Insert your SD card (or other device media) into your device, and power it on. While it starts up, proceed with the next steps.
### Associate your SolarNode with SolarNetwork
Every SolarNode must be associated (registered) with a SolarNetwork account. To associate a SolarNode, you must:
Log into SolarNetwork
Generate an invitation for a new SolarNode
Accept the invitation on SolarNode
### Log into SolarNetwork
If you do not already have a SolarNetwork account, register for one and then log in.
### Generate a SolarNode invitation
Click on the My Nodes link. You will see an Invite New SolarNode button, like this:
Click the Invite New SolarNode button, then fill in and submit the form that appears and select your time zone by clicking on the world map:
The generated SolarNode invitation will appear next.
Select and copy the entire invitation. You will need to paste that into the SolarNode setup screen in the next section.
### Accept the invitation on SolarNode
Open the SolarNode Setup app in your browser. The URL to use might be http://solarnode/ or it might be an IP address like http://192.168.1.123. See the Networking section for more information. You will be greeted with an invitation acceptance form into which you can paste the invitation you generated in SolarNetwork. The acceptance process goes through the following steps:
Submit the invitation in the acceptance form
Preview the invitation details
Confirm the invitation
First you submit the invitation in the acceptance form.
Next you preview the invitation details.
Note
The expected SolarNetwork Service value shown in this step will be in.solarnetwork.net.
Finally, confirm the invitation. This step contacts SolarNetwork and completes the association process.
Warning
Ensure you provide a Certificate Password on this step, so SolarNetwork can generate a security certificate for your SolarNode.
When these steps are completed, SolarNetwork will have assigned your SolarNode a unique identifier known as your Node ID. A random SolarNode login password will also have been generated; you are given the opportunity to easily change it if you prefer.
## Logging

Logging in SolarNode is configured in the `/etc/solarnode/log4j2.xml` file, which is in the log4j configuration format. The default configuration in SolarNodeOS sets the overall verbosity to INFO and logs to a temporary storage area `/run/solarnode/log/solarnode.log`.
Log messages have the following general properties:
| Component | Example | Description |
|---|---|---|
| Timestamp | `2022-03-15 09:05:37,029` | The date/time the message was generated. Note the format of the timestamp depends on the logging configuration; the SolarNode default is shown in this example. |
| Level | `INFO` | The severity/verbosity of the message (as determined by the developer). This is an enumeration, from least to most severe: `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. The level of a given logger allows messages with that level or higher to be logged, while lower levels are skipped. The default SolarNode configuration sets the overall level to `INFO`, so `TRACE` and `DEBUG` messages are not logged. |
| Logger | `ModbusDatumDataSource` | A category or namespace associated with the message. Most commonly these equate to Java class names, but can be any value and is determined by the developer. Periods in the logger name act as a delimiter, forming a hierarchy that can be tuned to log at different levels. For example, given the default `INFO` level, configuring the `net.solarnetwork.node.io.modbus` logger to `DEBUG` would turn on debug-level logging for all loggers in the Modbus IO namespace. Note that the default SolarNode configuration logs just a fixed number of the last characters of the logger name. This can be changed in the configuration to log more (or all) of the name, as desired. |
| Message | `Error reading from device.` | The message itself, determined by the developer. |
| Exception | | Some messages include an exception stack trace, which shows the runtime call tree where the exception occurred. |

### Logger namespaces
The Logger component outlined in the previous section allows a lot of flexibility to configure what gets logged in SolarNode. Setting the level on a given namespace impacts that namespace as well as all namespaces beneath it, meaning all other loggers that share the same namespace prefix.
For example, imagine the following two loggers exist in SolarNode:
Given the default configuration sets the default level to INFO, we can turn on DEBUG logging for both of these by adding a <Logger> line like the following within the <Loggers> element:
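For example (a sketch using a standard log4j2 `<Logger>` element; the namespace comes from the example above):

```xml
<Logger name="net.solarnetwork.node.io.modbus" level="debug" />
```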
That turns on DEBUG for both loggers because they are both children of the net.solarnetwork.node.io.modbus namespace. We could turn on TRACE logging for one of them like this:
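A sketch of what that looks like:

```xml
<Logger name="net.solarnetwork.node.io.modbus.serial" level="trace" />
```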
That would also turn on TRACE for any other loggers in the net.solarnetwork.node.io.modbus.serial namespace. You can limit the configuration all the way down to a full logger name if you like, for example:
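A sketch using a full logger name (the class name here is hypothetical, for illustration only):

```xml
<!-- "SerialModbusNetwork" is a hypothetical logger/class name -->
<Logger name="net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork" level="trace" />
```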
The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file. See the Setup App / Settings / Logging page for more information.
The default SolarNode configuration automatically rotates log files based on size, and limits the number of historic log files kept, so that its associated storage space does not fill up. When a log file reaches the size limit, it is renamed to include a -i.log suffix, where i is an offset from the current log. The default configuration sets the maximum log size to 1 MB and limits the number of historic files to 3.
You can also adjust how much history is saved by tweaking the <SizeBasedTriggeringPolicy> and <DefaultRolloverStrategy> configuration. For example to change to a limit of 9 historic files of at most 5 MB each, the configuration would look like this:
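A sketch of the relevant elements (their placement within the appender follows the standard log4j2 layout):

```xml
<Policies>
  <SizeBasedTriggeringPolicy size="5 MB" />
</Policies>
<DefaultRolloverStrategy max="9" />
```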
By default SolarNode logs to temporary (RAM) storage that is discarded when the node reboots. The configuration can be changed so that logs are written directly to persistent storage if you would like to have the logs persisted across reboots, or would like to preserve more log history than can be stored in the temporary storage area.
To make this change, update the <RollingFile> element's fileName and/or filePattern attributes to point to a persistent filesystem. SolarNode already has write permission to the /var/lib/solarnode/var directory, so an easy location to use is /var/lib/solarnode/var/log, like this:
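A sketch of the changed attributes (other appender settings left as in the default configuration):

```xml
<RollingFile name="File"
    fileName="/var/lib/solarnode/var/log/solarnode.log"
    filePattern="/var/lib/solarnode/var/log/solarnode-%i.log">
  <!-- layout and rollover policies as in the default configuration -->
</RollingFile>
```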
This configuration can add a lot of stress to the node's storage medium, and may shorten its useful life. Consumer-grade SD cards in particular can fail quickly if SolarNode is writing a lot of information, such as verbose logging. Use this configuration with caution.
### Logging example: split across multiple files
Sometimes it can be useful to turn on verbose logging for some area of SolarNode, but have those messages go to a different file so they don't clog up the main solarnode.log file. This can be done by configuring additional appender configurations.
The following example logging configuration creates the following log files:
/var/log/solarnode/solarnode.log - the main log
/var/log/solarnode/filter.log - filter logging
/var/log/solarnode/mqtt-solarin.log - MQTT wire logging to SolarIn
/var/log/solarnode/mqtt-solarflux.log - MQTT wire logging to SolarFlux
First you must create the /var/log/solarnode directory and give SolarNode permission to write there:
```sh
sudo mkdir /var/log/solarnode
sudo chgrp solar /var/log/solarnode
sudo chmod g+w /var/log/solarnode
```
Then edit the /etc/solarnode/log4j2.xml file to hold the following (adjust according to your needs):
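A sketch of what such a configuration might look like (the appender names and logger namespaces come from the list below; the layout patterns, sizes, and levels are illustrative assumptions):

```xml
<Configuration status="warn">
  <Appenders>
    <RollingFile name="File" fileName="/var/log/solarnode/solarnode.log"
        filePattern="/var/log/solarnode/solarnode-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %-5p %40.40c; %m%n" />
      <Policies>
        <SizeBasedTriggeringPolicy size="1 MB" />
      </Policies>
      <DefaultRolloverStrategy max="3" />
    </RollingFile>
    <RollingFile name="Filter" fileName="/var/log/solarnode/filter.log"
        filePattern="/var/log/solarnode/filter-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %-5p %40.40c; %m%n" />
      <Policies>
        <SizeBasedTriggeringPolicy size="1 MB" />
      </Policies>
      <DefaultRolloverStrategy max="3" />
    </RollingFile>
    <RollingFile name="MQTT" fileName="/var/log/solarnode/mqtt-solarin.log"
        filePattern="/var/log/solarnode/mqtt-solarin-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %m%n" />
      <Policies>
        <SizeBasedTriggeringPolicy size="1 MB" />
      </Policies>
      <DefaultRolloverStrategy max="3" />
    </RollingFile>
    <RollingFile name="Flux" fileName="/var/log/solarnode/mqtt-solarflux.log"
        filePattern="/var/log/solarnode/mqtt-solarflux-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %m%n" />
      <Policies>
        <SizeBasedTriggeringPolicy size="1 MB" />
      </Policies>
      <DefaultRolloverStrategy max="3" />
    </RollingFile>
  </Appenders>
  <Loggers>
    <Logger name="net.solarnetwork.node.datum.filter" level="trace" additivity="false">
      <AppenderRef ref="Filter" />
    </Logger>
    <Logger name="net.solarnetwork.mqtt.queue" level="trace" additivity="false">
      <AppenderRef ref="MQTT" />
    </Logger>
    <Logger name="net.solarnetwork.mqtt.influx" level="trace" additivity="false">
      <AppenderRef ref="Flux" />
    </Logger>
    <Root level="info">
      <AppenderRef ref="File" />
    </Root>
  </Loggers>
</Configuration>
```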
The File appender is the "main" application log where most logs should go.
The Filter appender is where we want net.solarnetwork.node.datum.filter messages to go.
The MQTT appender is where we want net.solarnetwork.mqtt.queue messages to go.
The Flux appender is where we want net.solarnetwork.mqtt.influx messages to go.
Here we include additivity="false" and add the <AppenderRef> element that references the specific appender name we want the log messages to go to. The additivity="false" attribute means the log messages will only go to the Filter appender, instead of also going to the root-level File appender.
The root-level appender is the "default" destination for log messages, unless overridden by a specific appender like we did for the Filter, MQTT, and Flux appenders above.
The various <AppenderRef> elements configure the appender name to write the messages to.
The various additivity="false" attributes disable appender additivity, which means a log message will only be written to one appender, instead of being written to all configured appenders in the hierarchy (for example the root-level appender).
The immediateFlush="false" attribute turns on buffered logging, which means log messages are buffered in RAM before being flushed to disk. This is more forgiving to the disk, at the expense of a delay before the messages appear.
MQTT wire logging means the raw MQTT packets sent and received over MQTT connections will be logged in an easy-to-read but very verbose format. For MQTT wire logging to be enabled, it must be activated with a special configuration file. Create the /etc/solarnode/services/net.solarnetwork.common.mqtt.netty.cfg file with this content:
MQTT wire logs use a namespace prefix net.solarnetwork.mqtt. followed by the connection's host name or IP address and port. For example SolarIn messages would use net.solarnetwork.mqtt.queue.solarnetwork.net:8883 and SolarFlux messages would use net.solarnetwork.mqtt.influx.solarnetwork.net:8884.
## Networking

SolarNode will attempt to automatically configure networking access from a local DHCP server. For many deployments the local network router is the DHCP server. SolarNode will identify itself with the name solarnode, so in many cases you can reach the SolarNode setup app at http://solarnode/.
To find what network address SolarNode is using, you have a few options:
### Consult your network router
Your local network router is very likely to have a record of SolarNode's network connection. Log into the router's management UI and look for a device named solarnode.
### Connect a keyboard and screen
If your SolarNode supports connecting a keyboard and screen, you can log into the SolarNode command line console and run ip -br addr to print out a brief summary of the current networking configuration:
```
$ ip -br addr

lo      UNKNOWN  127.0.0.1/8 ::1/128
eth0    UP       192.168.0.254/24 fe80::e65f:1ff:fed1:893c/64
wlan0   DOWN
```
In the previous output, SolarNode has an ethernet device eth0 with a network address 192.168.0.254 and a WiFi device wlan0 that is not connected. You could reach that SolarNode at http://192.168.0.254/.
Tip
You can get more details by running ip addr (without the -br argument).
If your device will use WiFi for network access, you will need to configure the network name and credentials to use. You can do that by creating a wpa_supplicant.conf file on the SolarNodeOS media (typically an SD card). For Raspberry Pi media, you can insert the SD card into your computer and the appropriate drive will be mounted for you.
Once mounted use your favorite text editor to create a wpa_supplicant.conf file with content like this:
```
country=nz
network={
	ssid="wifi network name here"
	psk="wifi password here"
}
```
Change the country=nz to match your own country code.
SolarNode supports a concept called operational modes. Modes are simple names like quiet and hyper that can be either active or inactive. Any number of modes can be active at a given time. In theory both quiet and hyper could be active simultaneously. Modes can be named anything you like.
Modes can be used by SolarNode components to alter their behavior dynamically. For example a data source component might stop collecting data from a set of data sources if the quiet mode is active, or start collecting data at an increased frequency if hyper is active. Some components might require specific names, which are described in their documentation. Components that allow configuring a required operational mode setting can also invert the requirement by adding a ! prefix to the mode name, for example !hyper can be thought of as \"when hyper is not active\". You can also specify exactly ! to match only when no mode is active.
Datum Filters also make use of operational modes, to toggle filters on and off dynamically.
Operational modes can be activated with an associated expiration date. The mode will remain active until the expiration date, at which time it will be automatically deactivated. A mode can always be manually deactivated before its associated expiration date.
The SolarUser Instruction API can be used to toggle operational modes on and off. The EnableOperationalModes instruction activates modes and DisableOperationalModes deactivates them.
SolarNode supports placeholders in some setting values, such as datum data source IDs. These allow you to define a set of parameters that can be consistently applied to many settings.
For example, imagine you manage many SolarNode devices across different buildings or sites. You'd like to follow a naming convention for your datum data source ID values that includes a code for the building the node is deployed in, along the lines of /BUILDING/DEVICE. You could define a placeholder building and then configure the source IDs like /{building}/device. On each node you'd define the building placeholder with a building-specific value, so at runtime the nodes would resolve actual source ID values with those names replacing the {building} placeholder, for example /OFFICE1/meter.
Placeholders are written using the form {name:default} where name is the placeholder name and default is an optional default value to apply if no placeholder value exists for the given name. If a default value is not needed, omit the colon so the placeholder becomes just {name}.
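A minimal Python sketch of this resolution logic (illustrative only, not SolarNode's actual implementation; unresolved placeholders without a default are left as-is here):

```python
import re

# matches {name} or {name:default}
_PLACEHOLDER = re.compile(r"\{([^}:]+)(?::([^}]*))?\}")

def resolve_placeholders(value: str, params: dict) -> str:
    # replace each placeholder with its parameter value, falling back
    # to the default when the name is not defined
    def repl(m):
        name, default = m.group(1), m.group(2)
        if name in params:
            return str(params[name])
        return default if default is not None else m.group(0)
    return _PLACEHOLDER.sub(repl, value)

params = {"building": "OFFICE1", "room": "BREAK"}
print(resolve_placeholders("/{building}/{floor:1}/{room}", params))
# /OFFICE1/1/BREAK
```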
For example, imagine a set of placeholder values like
| Name | Value |
|------|-------|
| building | OFFICE1 |
| room | BREAK |
Here are some example settings with placeholders with what they would resolve to:
| Input | Resolved value |
|-------|----------------|
| /{building}/meter | /OFFICE1/meter |
| /{building}/{room}/temp | /OFFICE1/BREAK/temp |
| /{building}/{floor:1}/{room} | /OFFICE1/1/BREAK |
"},{"location":"users/placeholders/#static-placeholder-configuration","title":"Static placeholder configuration","text":"
SolarNode will look for placeholder values defined in properties files stored in the conf/placeholders.d directory by default. In SolarNodeOS this is the /etc/solarnode/placeholders.d directory.
Warning
These files are only loaded once, when SolarNode starts up. If you make changes to any of them then SolarNode must be restarted.
The properties file names must have a .properties extension and follow Java properties file syntax. Put simply, each file contains lines like
name = value\n
where name is the placeholder name and value is its associated value. The example set of placeholder values shown previously could be defined in a /etc/solarnode/placeholders.d/mynode.properties file with this content:
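Using the example placeholder values shown previously, that file would contain:

```properties
building = OFFICE1
room = BREAK
```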
SolarNode also supports storing placeholder values as Settings using the key placeholder. The SolarUser /instruction/add API can be used with the UpdateSetting topic to modify the placeholder values as needed. The type value is the placeholder name and the value the placeholder value. Placeholders defined this way have priority over any similarly-named placeholders defined statically. Changes take effect as soon as SolarNode receives and processes the instruction.
Warning
Once a placeholder value is set via the UpdateSetting instruction, the same value defined as a static placeholder will be overridden and changes to the static value will be ignored.
For example, to set the floor placeholder to 2 on node 123, you could make a POST request to /solaruser/api/v1/sec/instr/add/UpdateSetting with the following JSON body:
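A sketch of such a request body, assuming the key/type/value instruction parameters described above (field names are an assumption based on that description):

```json
{
  "nodeId": 123,
  "params": {
    "key": "placeholder",
    "type": "floor",
    "value": "2"
  }
}
```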
SolarSSH is SolarNetwork's method of connecting to SolarNode devices over the internet even when those devices are not directly reachable due to network firewalls or routing rules. It uses the Secure Shell Protocol (SSH) to ensure your connection is private and secure.
SolarSSH does not maintain permanently open SSH connections to SolarNode devices. Instead, connections are established on demand, when you need them. This allows you to connect to a SolarNode when you need to perform maintenance, without requiring SolarNode to maintain an always-open SSH connection to SolarSSH.
In order to use SolarSSH, you will need a User Security Token to use for authentication.
You can use SolarSSH right in your browser to connect to any of your nodes.
The SolarSSH browser app
"},{"location":"users/remote-access/#choose-your-node-id","title":"Choose your node ID","text":"
Click on the node ID in the page title to change what node you want to connect to.
Changing the SolarSSH node ID
Bookmark a SolarSSH page for your node ID
You can append a ?nodeId=X to the SolarSSH browser URL https://go.solarnetwork.net/solarssh/, where X is a node ID, to make the app start with that node ID directly. For example to start with node 123, you could bookmark the URL https://go.solarnetwork.net/solarssh/?nodeId=123.
"},{"location":"users/remote-access/#provide-your-credentials","title":"Provide your credentials","text":"
Fill in User Security Token credentials for authentication. The node ID you are connecting to must be owned by the same account as the security token.
Click the Connect button to initiate the SolarSSH connection process. You will be presented with a dialog form to provide your SolarNodeOS system account credentials. This is only necessary if you want to connect to the SolarNodeOS system command line. If you only need to access the SolarNode Setup App, you can click the Skip button to skip this step. Otherwise, click the Login button to log into the system command line.
SolarNodeOS system account credentials form
SolarSSH will then establish the connection to your node. If you provided SolarNodeOS system account credentials previously and clicked the Login button, you will end up with a system command prompt, like this:
Once connected, you can access the remote node's Setup App by clicking the Setup button in the top-right corner of the window. This will open a new browser tab for the Setup App.
Accessing the SolarNode Setup App through a SolarSSH web connection
SolarSSH also supports a \"direct\" connection mode that allows you to connect using standard ssh client applications. This is a more advanced (and flexible) way of connecting to your nodes: it provides full SSH integration, including port forwarding, scp, and sftp support, and even allows you to access other network services on the same network as the node.
Direct SolarSSH connections require using a SSH client that supports the SSH \"jump\" host feature. The \"jump\" server hosted by SolarNetwork Foundation is available at ssh.solarnetwork.net:9022.
The \"jump\" connection user is formed by combining a node ID with a user security token, separated by a : character. The general form of a SolarSSH direct connection \"jump\" host thus looks like this:
NODE:TOKEN@ssh.solarnetwork.net:9022\n
where NODE is a SolarNode ID and TOKEN is a SolarNetwork user security token.
The actual SolarNode user can be any OS user (typically solar) and the hostname can be anything. A good practice for the hostname is to use one derived from the SolarNode ID, e.g. solarnode-123.
Using OpenSSH a complete connection command to log in as a solar user looks like this, passing the \"jump\" host via a -J argument:
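For example, to connect to node 123 (with SN_TOKEN_HERE standing in for a real user security token; quoting the \"jump\" host avoids most shell-escaping issues):

```shell
ssh -J '123:SN_TOKEN_HERE@ssh.solarnetwork.net:9022' solar@solarnode-123
```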
SolarNetwork security tokens often contain characters that must be escaped with a \\ character for your shell to interpret them correctly. For example, a token like 9gPa9S;Ux1X3kK)YN6&g might need to have the ;)& characters escaped like 9gPa9S\\;Ux1X3kK\\)YN6\\&g.
You will be first prompted to enter a password, which must be the token secret. You might then be prompted for the SolarNode OS user's password. Here's an example screen shot:
Accessing the SolarNode system command line through a SolarSSH direct connection
If you find yourself using SolarSSH connections frequently, a handy bash or zsh shell function can make the connection process easier to remember. Here's an example that gives you a solarssh command that accepts a SolarNode ID argument, followed by any optional SSH arguments:
function solarssh () {\nlocal node_id=\"$1\"\nif [ -z \"$node_id\" ]; then\necho 'Must provide node ID, e.g. 123'\nelse\nshift\necho \"Enter SN token secret when first prompted for password. Enter node $node_id password second.\"\nssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \\\n-o LogLevel=ERROR -o NumberOfPasswordPrompts=1 \\\n-J \"$node_id\"':SN_TOKEN_HERE@ssh.solarnetwork.net:9022' \\\n\"$@\" solar@solarnode-$node_id\nfi\n}\n
Just replace SN_TOKEN_HERE with a user security token. After integrating this into your shell's configuration (e.g. ~/.bashrc or ~/.zshrc), you can connect to node 123 like:
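For example, passing the node ID as the first argument:

```shell
solarssh 123
```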
PuTTY is a popular tool for Windows that supports SolarSSH connections. To connect to a SolarNode using PuTTY, you must:
Configure a SSH connection proxy to ssh.solarnetwork.net:9022 using a username like NODE_ID:TOKEN_ID and the corresponding token secret as the password.
Optionally configure a tunnel to localhost:8080 to access the SolarNode Setup App
Configure the session to connect to solarnode-NODE_ID on port 22
Open the Connection > Proxy configuration category in PuTTY, and configure the following settings:
| Setting | Value |
|---------|-------|
| Proxy type | SSH to proxy and use port forwarding |
| Proxy hostname | ssh.solarnetwork.net |
| Port | 9022 |
| Username | The desired node ID, followed by a :, followed by a user security token ID, that is: NODE_ID:TOKEN_ID |
| Password | The user security token secret. |
To access the SolarNode Setup App, you can configure PuTTY to forward a port on your local machine to localhost:8080 on the node. Once the SSH connection is established, open a browser to http://localhost:PORT, where PORT is the forwarded local port. For example, if you used port 8888 then you would open http://localhost:8888 to access the SolarNode Setup App.
Open the Connection > SSH > Tunnels configuration category in PuTTY, and configure the following settings:
| Setting | Value |
|---------|-------|
| Source port | A free port on your machine, for example 8888. |
| Destination | localhost:8080 |
| Add | You must click the Add button to add this tunnel. You can then add other tunnels as needed. |
Finally under the Session configuration category in PuTTY, configure the Host Name and Port to connect to SolarNode. You can also provide a session name and click the Save button to save all the settings you have configured, making it easy to load them in the future.
| Setting | Value |
|---------|-------|
| Host Name | Does not actually matter, but a name like solarnode-NODE_ID is helpful, where NODE_ID is the ID of the node you are connecting to. |
| Port | 22 |
Configuring PuTTY session settings
"},{"location":"users/remote-access/#putty-open-connection","title":"PuTTY open connection","text":"
On the Session configuration category, click the Open button to establish the SolarSSH connection. You might be prompted to confirm the identity of the ssh.solarnetwork.net server first. Click the Accept button if this is the case.
PuTTY host verification alert
PuTTY will connect to SolarSSH and after a short while prompt you for the SolarNodeOS user you would like to connect to SolarNode with. Typically you would use the solar account, so you would type solar followed by Enter. You will then be prompted for that account's password; type that in and press Enter again. You will then be presented with the SolarNodeOS shell prompt.
PuTTY node login
Assuming you configured a SSH tunnel on port 8888 to localhost:8080, you can now open http://localhost:8888 to access the SolarNode Setup App.
Once connected to SolarSSH, access the SolarNode Setup App in your browser.
Some SolarNode features require SolarNetwork Security Tokens to use as authentication credentials for SolarNetwork services. Security Tokens are managed on the Security Tokens page in SolarNetwork.
User Security Tokens allow access to web services that perform functions directly on your behalf, for example issue an instruction to your SolarNode.
Click the \"+\" button in the User Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new User Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
A newly generated security token \u2014 make sure to save the token in a safe place
Data Security Tokens allow access to web services that query the data collected by your SolarNodes.
Click the \"+\" button in the Data Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new Data Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
Security tokens can be configured with a Security Policy that restricts the types of functions or data the token has permission to access.
| Policy | Description |
|--------|-------------|
| API Paths | Restrict the token to specific API methods. |
| Expiry | Make the token invalid after a specific date. |
| Minimum Aggregation | Restrict the data aggregation level allowed. |
| Node IDs | Restrict to specific node IDs. |
| Refresh Allowed | Allow applications given a signing key to refresh that key until the token expires. |
| Source IDs | Restrict to specific datum source IDs. |
| Node Metadata | Restrict to specific node metadata. |
| User Metadata | Restrict to specific user metadata. |
"},{"location":"users/security-tokens/#api-paths","title":"API Paths","text":"
The API Paths policy restricts the token to specific SolarNet API methods, based on their URL path. If this policy is not included then all API methods are allowed.
The Minimum Aggregation policy restricts the token to a minimum data aggregation level. If this policy is not included, or if the minimum level is set to None, data at any aggregation level is allowed.
The Node IDs policy restricts the token to specific node IDs. If this policy is not included, then the token has access to all node IDs in your SolarNetwork account.
The Node Metadata policy restricts the token to specific portions of node-level metadata. If this policy is not included then all node metadata is allowed.
The Refresh Allowed policy lets applications that were given a signing key, rather than the token's private password, refresh that key for as long as the token has not expired.
The Source IDs policy restricts the token to specific datum source IDs. If this policy is not included, then the token has access to all source IDs in your SolarNetwork account.
The User Metadata policy restricts the token to specific portions of account-level metadata. If this policy is not included then all user metadata is allowed.
SolarNode plugins support configurable properties, called settings. The SolarNode setup app allows you to manage settings through simple web forms.
Settings can also be exported and imported in a CSV format, and can be applied when SolarNode starts up with Auto Settings CSV files. Here is an example of a settings form in the SolarNode setup app:
There are 3 settings represented in that screen shot:
Schedule
Destination
Temporary Destination
Tip
Nearly every form field you can edit in the SolarNode setup app represents a setting for a component in SolarNode.
In the SolarNode setup app the settings can be imported and exported from the main Settings screen in the Settings Backup & Restore section:
Settings files are CSV (comma separated values) files, easily exported from spreadsheet applications like Microsoft Excel or Google Sheets. The CSV must include a header row, which is skipped. All other rows will be processed as settings.
The Settings CSV format is quite general, and contains the following columns:
| # | Name | Description |
|---|------|-------------|
| 1 | key | A unique identifier for the service the setting applies to. |
| 2 | type | A unique identifier for the setting within the service specified by key, typically using standard property syntax. |
| 3 | value | The setting value. |
| 4 | flags | An integer bitmask of flags associated with the setting. See the flags section for more info. |
| 5 | modified | The date the setting was last modified, in yyyy-MM-dd HH:mm:ss format. |
Determining the key and type values required for a given component requires consulting the documentation of the plugin that provides that component. You can also get a pretty good picture of what the values are by exporting the settings after configuring a component in SolarNode. Typically the key value will mirror a plugin's Java package name, and type follows a JavaScript-like property accessor syntax representing a configurable property on the component.
The type setting value usually defines a component property using a JavaScript-like syntax with these rules:
| Expression | Example | Description |
|------------|---------|-------------|
| Property | name | a property named name |
| Nested property | name.subname | a nested property subname on a parent property name |
| List property | name[0] | the first element of an indexed list property named name |
| Map property | name['key'] | the key element of the map property name |
These rules can be combined into complex expressions, for example propIncludes[0].name or delegate.connectionFactory.propertyFilters['UID'].
Each setting has a set of flags that can be associated with it. The following table outlines the bit offset for each flag along with a description:
| # | Name | Description |
|---|------|-------------|
| 0 | Ignore modification date | If this flag is set then changes to the associated setting will not trigger a new auto backup. |
| 1 | Volatile | If this flag is set then changes to the associated setting will not trigger an internal \"setting changed\" event to be broadcast. |
Note these are bit offsets, so the decimal value to ignore modification date is 1, to mark as volatile is 2, and for both is 3.
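The bit-offset arithmetic can be sketched as:

```python
# flag values derived from the bit offsets in the flags table
IGNORE_MODIFICATION_DATE = 1 << 0  # decimal 1
VOLATILE = 1 << 1                  # decimal 2

# combining both flags with bitwise OR yields decimal 3
both = IGNORE_MODIFICATION_DATE | VOLATILE
print(both)  # 3
```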
Many plugins provide component factories which allow you to configure any number of instances of that component. Each component instance is assigned a unique identifier when it is created. In the SolarNode setup app, the component instance identifiers appear throughout the UI:
For example, the Modbus I/O plugin allows you to configure any number of Modbus connection components, each with their own specific settings. That is an example of a component factory. The settings CSV will include a special row to indicate that such a factory component should be activated, using a unique identifier, and then all the settings associated with that factory instance will have that unique identifier appended to their key values.
Going back to that example CSV, this is the row that activates a Modbus I/O component instance with an identifier of 1:
The syntax for the key column is simply the service identifier followed by .FACTORY. The type and value columns are both set to the same unique identifier; in this example that identifier is 1. For all settings specific to a factory component, the key column will be the service identifier followed by .IDENTIFIER, where IDENTIFIER is the unique instance identifier.
Here is an example that shows two factory instances, Lighting and HVAC, each with a different serialParams.portName setting value configured:
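A hypothetical CSV snippet showing that shape (the service identifier net.solarnetwork.node.io.serial, the port values, and the modified dates are made up for illustration; the row structure follows the factory syntax described above):

```csv
key,type,value,flags,modified
net.solarnetwork.node.io.serial.FACTORY,Lighting,Lighting,0,2023-01-01 00:00:00
net.solarnetwork.node.io.serial.FACTORY,HVAC,HVAC,0,2023-01-01 00:00:00
net.solarnetwork.node.io.serial.Lighting,serialParams.portName,/dev/ttyUSB0,0,2023-01-01 00:00:00
net.solarnetwork.node.io.serial.HVAC,serialParams.portName,/dev/ttyUSB1,0,2023-01-01 00:00:00
```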
SolarNode settings can also be configured through Auto Settings, applied when SolarNode starts up, by placing Settings CSV files in the /etc/solarnode/auto-settings.d directory. These settings are applied only if they don't already exist or the modified date in the settings file is newer than the date they were previously applied.
SolarFlux is the name of a real-time cloud-based service for datum using a publish/subscribe integration model. SolarNode supports publishing datum to SolarFlux and your own applications can subscribe to receive datum messages as they are published.
SolarFlux is based on MQTT. To integrate with SolarFlux you use a MQTT client application or library. See the SolarFlux Integration Guide for more information.
Each datum message is published by default as a CBOR encoded map (essentially a binary JSON object) to an MQTT topic based on the datum's source ID. The map keys are the datum property names. You can configure a Datum Encoder to encode datum into a different format by configuring a filter. For example, the Protobuf Datum Encoder supports encoding datum into Protobuf messages.
Messages are published with the MQTT retained flag set by default, which means the most recently published datum is saved by SolarFlux. When an application subscribes to a topic it will immediately receive any retained message for that topic. In this way, SolarFlux will provide a \"most recent\" snapshot of all datum across all nodes and sources.
Example SolarFlux datum message, expressed as JSON
The MQTT topic each datum is published to is derived from the node ID and datum source ID, according to this pattern:
node/N/datum/A/S\n
| Pattern Element | Description |
|-----------------|-------------|
| N | The node ID the datum was captured on |
| A | An aggregation key; will be 0 for the \"raw\" datum captured in SolarNode |
| S | The datum source ID; note that any leading / in the source ID is stripped from the topic |

Example MQTT topics
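A sketch of how a topic is derived from this pattern, for a hypothetical node 123 with source ID /power/meter:

```python
def datum_topic(node_id: int, source_id: str, agg: str = "0") -> str:
    # strip any leading "/" from the source ID, per the topic rules above;
    # agg is "0" for raw datum captured in SolarNode
    return f"node/{node_id}/datum/{agg}/{source_id.lstrip('/')}"

print(datum_topic(123, "/power/meter"))  # node/123/datum/0/power/meter
```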
"},{"location":"users/solarflux/#log-datum-stream","title":"Log datum stream","text":"
The EventAdmin Appender is supported, and log events are turned into a datum stream and published to SolarFlux. The log timestamps are used as the datum timestamps.
"},{"location":"users/solarflux/#log-datum-stream-topic-mapping","title":"Log datum stream topic mapping","text":"
The topic assigned to log events is log/ with the log name appended. Period characters (.) in the log name are replaced with slash characters (/). For example, a log name net.solarnetwork.node.datum.modbus.ModbusDatumDataSource will be turned into the topic log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource.
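This mapping can be sketched as:

```python
def log_topic(log_name: str) -> str:
    # prefix with "log/" and replace period separators with slashes
    return "log/" + log_name.replace(".", "/")

print(log_topic("net.solarnetwork.node.datum.modbus.ModbusDatumDataSource"))
# log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource
```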
"},{"location":"users/solarflux/#log-datum-stream-properties","title":"Log datum stream properties","text":"
The datum stream consists of the following properties:
| Property | Class | Type | Description |
|----------|-------|------|-------------|
| level | s | String | The log level name, e.g. TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. |
| priority | i | Integer | The log level priority (lower values have more priority), e.g. 600, 500, 400, 300, 200, or 100. |
| name | s | String | The log name. |
| msg | s | String | The log message. |
| exMsg | s | String | An exception message, if an exception was included. |
| exSt | s | String | A newline-delimited list of stack trace element values, if an exception was included. |
"},{"location":"users/solarflux/#settings","title":"Settings","text":"
The SolarFlux Upload Service ships with default settings that work out-of-the-box without any configuration. There are many settings you can change to better suit your needs, however.
Each component configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Host | The URI for the SolarFlux server to connect to. Normally this is influx.solarnetwork.net:8884. |
| Username | The MQTT username to use. Normally this is solarnode. |
| Password | The MQTT password to use. Normally this is not needed, as the node's certificate is used for authentication. |
| Exclude Properties | A regular expression to match property names on all datum sources to exclude from publishing. |
| Required Mode | If configured, an operational mode that must be active for any data to be published. |
| Maximum Republish | If offline message persistence has been configured, the maximum number of offline messages to publish in one go. See the offline persistence section for more information. |
| Reliability | The MQTT quality of service level to use. Normally the default of At most once is sufficient. |
| Version | The MQTT protocol version to use. Starting with version 5, MQTT topic aliases will be used if the server supports them, which can save a significant amount of network bandwidth when long source IDs are in use. |
| Retained | Toggle the MQTT retained message flag. When enabled the MQTT server will store the most recently published message on each topic so it is immediately available when clients connect. |
| Wire Logging | Toggle verbose logging on/off to support troubleshooting. The messages are logged to the net.solarnetwork.mqtt topic at DEBUG level. |
| Filters | Any number of datum filter configurations. |
For TLS-encrypted connections, SolarNode will make the node's own X.509 certificate available for client authentication.
Each component can define any number of filters, which are used to manipulate the datum published to SolarFlux, such as:
restrict the frequency at which individual datum sources are published
restrict which properties of datum are posted
encode the message into something other than CBOR
The filter settings can be very useful to constrain how much data is sent to SolarFlux, for example on nodes using mobile internet connections where the cost of posting data is high.
A filter can configure a Datum Encoder to encode the MQTT message with, if you want to use a format other than the default CBOR encoding. This can be combined with a Source ID pattern to encode specific sources with specific encoders. For example when using the Protobuf Datum Encoder a single Protobuf message type is supported per encoder. If you want to encode different datum sources into different Protobuf messages, you would configure one encoder per message type, and then one filter per source ID with the corresponding encoder.
Note
All filters are applied in the order they are defined, and then the first filter with a Datum Encoder configured that matches the filter's Source ID pattern will be used to encode the datum. If no Datum Encoder is configured, the default CBOR encoding will be used.
Each filter configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Source ID | A case-insensitive regular expression to match against datum source IDs. If defined, this filter will only be applied to datum with matching source ID values. If not defined, this filter will be applied to all datum. For example ^solar would match any source ID starting with solar. |
| Datum Filter | The Service Name of a Datum Filter component to apply before encoding and posting datum. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Datum Encoder | The Service Name of a Datum Encoder component to encode datum with. The encoder will be passed a java.util.Map object with all the datum properties. If not configured then CBOR will be used. |
| Limit Seconds | The minimum number of seconds between datum that match the configured Source ID pattern. If datum are produced faster than this rate, they will be filtered out. Set to 0 or leave empty for no limit. |
| Property Includes | A list of case-insensitive regular expressions to match against datum property names. If configured, only properties that match one of these expressions will be included in the filtered output. For example ^watt would match any property starting with watt. |
| Property Excludes | A list of case-insensitive regular expressions to match against datum property names. If configured, any property that matches one of these expressions will be excluded from the filtered output. For example ^temp would match any property starting with temp. Exclusions are applied after property inclusions. |
Warning
The datum sourceId and created properties will be affected by the property include/exclude filters! If you define any include filters, you might want to add an include rule for ^created$. You might like to have sourceId removed to conserve bandwidth, given that value is part of the MQTT topic the datum is posted on and thus redundant.
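A sketch of how the include/exclude semantics described above play out (illustrative only; property names and values are made up, and SolarNode's actual implementation differs):

```python
import re

def filter_properties(datum: dict, includes=None, excludes=None) -> dict:
    # includes/excludes are lists of case-insensitive regular expressions;
    # inclusions are applied first, then exclusions
    def matches(patterns, name):
        return any(re.search(p, name, re.IGNORECASE) for p in patterns)
    out = {}
    for name, value in datum.items():
        if includes and not matches(includes, name):
            continue
        if excludes and matches(excludes, name):
            continue
        out[name] = value
    return out

datum = {"created": 1660000000, "sourceId": "/power/1", "watts": 120, "temp": 21}
# note the include rule for ^created$, without which that property would be dropped
print(filter_properties(datum, includes=[r"^watt", r"^created$"]))
# {'created': 1660000000, 'watts': 120}
```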
By default if the connection to the SolarFlux server is down for any reason, all messages that would normally be published to the server will be discarded. This is suitable for most applications that rely on SolarFlux to view real-time status updates only, and SolarNode uploads datum to SolarNet for long-term persistence. For applications that rely on SolarFlux for more, it might be desirable to configure SolarNode to locally cache SolarFlux messages when the connection is down, and then publish those cached messages when the connection is restored. This can be accomplished by deploying the MQTT Persistence plugin.
When that plugin is available, all messages processed by this service will be saved locally when the MQTT connection is down, and then posted once the MQTT connection comes back up. Note the following points to consider:
The cached messages will be posted with the MQTT retained flag set to false.
The cached messages will be posted in an unspecified order.
The cached messages may be posted more than once, regardless of the configured Reliability setting.
Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. Datum Filters vary wildly in the functionality they provide; here are some examples of the things they can do:
Throttle the rate at which datum are saved to SolarNet
Remove unwanted properties from datum
Split a datum so some properties are moved to another datum stream
Join the properties of multiple datum streams into a single datum
Inject properties from external services
Derive new properties from dynamic expressions
Datum Filters do not create datum
It is helpful to remember that Datum Filters do not create datum, they only manipulate datum created elsewhere, typically by datum data sources.
There are four main places where datum filters can be applied:
On the Datum Queue, immediately after each datum is captured
As a Global Datum Filter, just before uploading to SolarNet
On the Global Datum Filter Chain, just before uploading to SolarNet
As a SolarFlux Datum Filter, just before uploading to SolarFlux
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum in turn, with each filter's result passed to the next filter until all filters have been applied.
Conceptual diagram of the Datum Queue, processing datum along with filters manipulating them
At the end of processing, the datum is either
uploaded to SolarNet immediately, or
saved locally, to be uploaded at some point in the future
Most of the time datum are uploaded to SolarNet immediately after processing. If the network is down, or SolarNode is configured to only upload datum in batches, then datum are saved locally in SolarNode, and a periodic job will attempt to upload them later on, in batches.
See the Setup App Datum Queue section for information on how to configure the Datum Queue.
When to configure filters on the Datum Queue, as opposed to other places?
The Datum Queue is a great place to configure filters that must be processed at most once per datum, and do not depend on what time the datum is uploaded to SolarNet.
"},{"location":"users/datum-filters/#global-datum-filters","title":"Global Datum Filters","text":"
Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Note
Some filters support both Global and User based filter configuration, and often you can achieve the same overall result in multiple ways. Global filters are convenient for the subset of filters that support Global configuration, but for complex filtering often it can be easier to configure all filters as User filters, using the Global Datum Filter Chain as needed.
## Global Datum Filter Chain
The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
## SolarFlux Datum Filters

## Datum Filter Chain
The Datum Filter Chain is a User Datum Filter that you configure with a list, or chain, of other User Datum Filters. When the Filter Chain executes, it executes each of the configured Datum Filters, in the order defined. This filter can be used like any other Datum Filter, allowing multiple filters to be applied in a defined order.
A Filter Chain acts like an ordered group of Datum Filters
Tip
Some services support configuring only a single Datum Filter setting. You can use a Filter Chain to apply multiple filters in those services.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Available Filters | A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to include that filter in the chain. |
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Datum Filters | The list of Service Name values of User Datum Filter components to apply to datum. |

## Control Updater Datum Filter
The Control Updater Datum Filter provides a way to update controls with the result of an expression, optionally populating the expression result as a datum property.
This filter is provided by the Standard Datum Filters plugin.
The screen shot shows a filter that would toggle the /power/switch/1 control on/off based on the frequency property in the /power/1 datum stream: on when the frequency is 50 or higher, off otherwise.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Control Configurations | A list of control expression configurations. |
Each control configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Control ID | The ID of the control to update with the expression result. |
| Property | The optional datum property to store the expression result in. |
| Property Type | The datum property type to use. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

### Expressions
See the Expressions guide for general expressions reference. The root object is a DatumExpressionRoot that lets you treat all datum properties, and filter parameters, as expression variables directly.
## Downsample Datum Filter
The Downsample Datum Filter provides a way to down-sample higher-frequency datum samples into lower-frequency (averaged) datum samples. The filter will collect a configurable number of samples and then generate a down-sampled sample where an average of each collected instantaneous property is included. In addition minimum and maximum values of each averaged property are added.
This filter is provided by the Standard Datum Filters plugin.
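The averaging behaviour can be illustrated with a short sketch. The property names are hypothetical, and the `_min`/`_max` suffixes correspond to the default Min/Max Property Templates described below:

```python
from statistics import mean

def downsample(samples, decimal_scale=3):
    """Average a batch of instantaneous samples into one output sample,
    adding minimum and maximum values for each averaged property."""
    out = {}
    for name in samples[0]:
        values = [s[name] for s in samples]
        out[name] = round(mean(values), decimal_scale)  # averaged value
        out[f"{name}_min"] = min(values)                # %s_min template
        out[f"{name}_max"] = max(values)                # %s_max template
    return out
```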
### Settings

| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Sample Count | The number of samples to average over. |
| Decimal Scale | A maximum number of digits after the decimal point to round to. Set to `0` to round to whole numbers. |
| Property Excludes | A list of property names to exclude. |
| Min Property Template | A string format to use for computed minimum property values. Use `%s` as the placeholder for the original property name, e.g. `%s_min`. |
| Max Property Template | A string format to use for computed maximum property values. Use `%s` as the placeholder for the original property name, e.g. `%s_max`. |

## Expression Datum Filter
The Expression Datum Filter provides a way to generate new properties by evaluating expressions against existing properties.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Expressions | A list of expression configurations that are evaluated to derive datum property values from other property values. |
Use the + and - buttons to add/remove expression configurations.
Each expression configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Property | The datum property to store the expression result in. |
| Property Type | The datum property type to use. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

### Expressions
See the SolarNode Expressions guide for general expressions reference. The root object is a DatumExpressionRoot that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
| Property | Type | Description |
|----------|------|-------------|
| `datum` | `Datum` | A `Datum` object, populated with data from all property and virtual meter configurations. |
| `props` | `Map<String,Object>` | Simple `Map` based access to the properties in `datum`, and transform parameters, to simplify expressions. |
The following methods are available:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `has(name)` | `String` | `boolean` | Returns `true` if a property named `name` is defined. |
| `hasLatest(source)` | `String` | `boolean` | Returns `true` if a datum with source ID `source` is available via the `latest(source)` function. |
| `latest(source)` | `String` | `DatumExpressionRoot` | The latest available datum matching the given source ID, or `null` if not available. |

### Expression examples
Assuming a datum sample with properties like the following:
| Property | Value |
|----------|-------|
| `current` | `7.6` |
| `voltage` | `240.1` |
| `status` | `Error` |
Then here are some example expressions and the results they would produce:
| Expression | Result | Comment |
|------------|--------|---------|
| `voltage * current` | `1824.76` | Simple multiplication of two properties. |
| `props['voltage'] * props['current']` | `1824.76` | Another way to write the previous expression. Can be useful if the property names contain non-alphanumeric characters, like spaces. |
| `has('frequency') ? 1 : null` | `null` | Uses the `?:` if/then/else operator to evaluate to `null` because the `frequency` property is not available. When an expression evaluates to `null` then no property will be added to the output samples. |
| `current > 7 or voltage > 245 ? 1 : null` | `1` | Uses comparison and logic operators to evaluate to `1` because `current` is greater than 7. |
| `voltage * current * (hasLatest('battery') ? 1.0 - latest('battery')['soc'] : 1)` | `364.952` | Assuming a battery datum with a `soc` property value of `0.8`, the expression resolves to `7.6 * 240.1 * (1.0 - 0.8)`. |

## Join Datum Filter
The Join Datum Filter provides a way to merge the properties of multiple datum streams into a new derived datum stream.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Output Source ID | The source ID of the merged datum stream. Placeholders are allowed. |
| Coalesce Threshold | When `2` or more, wait until datum from this many different source IDs have been encountered before generating an output datum. Once a coalesced datum has been generated the tracking of input sources resets and another datum will only be generated after the threshold is met again. If `1` or less, then generate output datum for all input datum. |
| Swallow Input | If enabled, then filter out input datum after merging. Otherwise leave the input datum as-is. |
| Source Property Mappings | A list of source IDs with associated property name templates to rename the properties with. Each template must contain a `{p}` parameter which will be replaced by the property names merged from datum encountered with the associated source ID. For example `{p}_s1` would map an input property `watts` to `watts_s1`. |
Use the + and - buttons to add/remove source property mapping configurations.
Each source property mapping configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Source ID | A source ID pattern to apply the associated Mapping to. Any capture groups (parts of the pattern between `()` groups) are provided to the Mapping template. |
| Mapping | A property name template with a `{p}` parameter for an input property name to be mapped to a merged (output) property name. Pattern capture groups from Source ID are available starting with `{1}`. For example `{p}_s1` would map an input property `watts` to `watts_s1`. |
Unmapped properties are copied
If a matching source property mapping does not exist for an input datum source ID then the property names of that datum are used as-is.
The Source ID pattern can define capture groups that will be provided to the Mapping template as numbered parameters, starting with {1}. For example, assuming an input datum property watts, then:
| Datum Source ID | Source ID Pattern | Mapping | Result |
|-----------------|-------------------|---------|--------|
| `/power/main` | `/power/` | `{p}_main` | `watts_main` |
| `/power/1` | `/power/(\d+)$` | `{p}_s{1}` | `watts_s1` |
| `/power/2` | `/power/(\d+)$` | `{p}_s{1}` | `watts_s2` |
| `/solar/1` | `/(\w+)/(\d+)$` | `{p}_{1}{2}` | `watts_solar1` |
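The mapping logic above can be sketched with Python regular expressions. This is an illustration of the template semantics, not SolarNode's actual implementation:

```python
import re

def map_property_name(source_id, prop_name, source_id_pattern, mapping):
    """Rename a merged property using a mapping template.

    {p} is replaced with the input property name, and {1}, {2}, ...
    with the capture groups matched from the source ID pattern.
    """
    match = re.match(source_id_pattern, source_id)
    if match is None:
        return prop_name  # unmapped properties are copied as-is
    result = mapping.replace("{p}", prop_name)
    for i, group in enumerate(match.groups(), start=1):
        result = result.replace("{%d}" % i, group)
    return result
```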
To help visualize property mapping with a more complete example, let's imagine we have some datum streams being collected and the most recent datum from each look like this:
`/meter/1`:

```json
{"watts": 3213}
```

`/meter/2`:

```json
{"watts": -842}
```

`/solar/1`:

```json
{"watts": 4055, "current": 16.89583}
```
Here are some examples of how some source mapping expressions could be defined, including how multiple mappings can be used at once:
| Source ID Patterns | Mappings | Result |
|--------------------|----------|--------|
| `/(\w+)/(\d+)` | `{1}_{p}{2}` | |
## Operational Mode Datum Filter
The Operational Mode Datum Filter provides a way to evaluate expressions to toggle operational modes. When an expression evaluates to true the associated operational mode is activated. When an expression evaluates to false the associated operational mode is deactivated.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Expressions | A list of expression configurations that are evaluated to toggle operational modes. |
Use the + and - buttons to add/remove expression configurations.
Each expression configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Mode | The operational mode to toggle. |
| Expire Seconds | If configured and greater than 0, the number of seconds after activating the operational mode to automatically deactivate it. If not configured or 0 then the operational mode will be deactivated when the expression evaluates to `false`. See below for more information. |
| Property | If configured, the datum property to store the expression result in. See below for more information. |
| Property Type | The datum property type to use if Property is configured. See below for more information. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

### Expire setting
When configured the expression will never deactivate the operational mode directly. When evaluating the given expression, if it evaluates to true the mode will be activated and configured to deactivate after this many seconds. If the operation mode was already active, the expiration will be extended by this many seconds.
This configuration can be thought of like a time out as used on motion-detecting lights: each time motion is detected the light is turned on (if not already on) and a timer set to turn the light off after so many seconds of no motion being detected.
Note that the operational modes service might actually deactivate the given mode a short time after the configured expiration.
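The expire behaviour can be sketched as a simple timer update. This is illustrative only; the times and mode bookkeeping are hypothetical:

```python
def next_expiration(expr_result, active_until, now, expire_seconds):
    """Return the time at which the mode should deactivate.

    A true expression activates the mode, or extends the expiration
    if it is already active. A false expression never deactivates the
    mode directly when an expire setting is configured.
    """
    if expr_result:
        return now + expire_seconds
    return active_until
```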
A property does not have to be populated. If you provide a Property name to populate, the value of the datum property depends on the configured property type:
| Type | Description |
|------|-------------|
| Instantaneous | The property value will be `1` or `0` based on `true` and `false` expression results. |
| Status | The property will be the expression result, so `true` or `false`. |
| Tag | A tag named as the configured property will be added if the expression is `true`, or removed if `false`. |

### Expressions
See the Expressions section for general expressions reference. The expression must evaluate to a boolean (true or false) result. When it evaluates to true the configured operational mode will be activated. When it evaluates to false the operational mode will be deactivated (unless an expire setting has been configured).
The root object is a datum samples expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
| Property | Type | Description |
|----------|------|-------------|
| `datum` | `GeneralNodeDatum` | A `GeneralNodeDatum` object, populated with data from all property and virtual meter configurations. |
| `props` | `Map<String,Object>` | Simple `Map` based access to the properties in `datum`, and transform parameters, to simplify expressions. |
The following methods are available:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `has(name)` | `String` | `boolean` | Returns `true` if a property named `name` is defined. |

### Expression examples
Assuming a datum sample with properties like the following:
| Property | Value |
|----------|-------|
| `current` | `7.6` |
| `voltage` | `240.1` |
| `status` | `Error` |
Then here are some example expressions and the results they would produce:
| Expression | Result | Comment |
|------------|--------|---------|
| `voltage * current > 1800` | `true` | Since `voltage * current` is `1824.76`, the expression is `true`. |
| `status != 'Error'` | `false` | Since `status` is `Error` the expression is `false`. |

## Parameter Expression Datum Filter
The Parameter Expression Datum Filter provides a way to generate filter parameters by evaluating expressions against existing properties. The generated parameters will be available to any further datum filters in the same filter chain.
Tip
Parameters are useful as temporary variables that you want to use during datum processing but do not want to include as datum properties that get posted to SolarNet.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Expressions | A list of expression configurations that are evaluated to derive parameter values from other property values. |
Use the + and - buttons to add/remove expression configurations.
Each expression configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Parameter | The filter parameter name to store the expression result in. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

### Expressions
See the Expressions section for general expressions reference. This filter supports Datum Expressions that lets you treat all datum properties, and filter parameters, as expression variables directly.
## Property Datum Filter
The Property Datum Filter provides a way to remove properties of datum. This can help if some component generates properties that you don't actually need to use.
For example you might have a plugin that collects data from an AC power meter that captures power, energy, quality, and other properties each time a sample is taken. If you are only interested in capturing the power and energy properties you could use this component to remove all the others.
This component can also throttle individual properties over time, so that individual properties are posted less frequently than the datum they belong to is sampled. For example a plugin for an AC power meter might collect datum once per minute, and you want to collect the energy properties of the datum every minute but the quality properties only once every 10 minutes.
The general idea for filtering properties is to configure rules that define which datum sources you want to filter, along with a list of properties to include and/or a list to exclude. All matching is done using regular expressions, which can help make your rules concise.
This filter is provided by the Standard Datum Filters plugin.
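The include/exclude rules can be sketched like this, assuming patterns are matched case-insensitively anywhere in the property name (an illustration, not the actual implementation):

```python
import re

def filter_properties(props, includes=(), excludes=()):
    """Apply include patterns first (keep only matching properties),
    then exclude patterns (remove matching properties)."""
    out = dict(props)
    if includes:
        out = {name: value for name, value in out.items()
               if any(re.search(p, name, re.IGNORECASE) for p in includes)}
    if excludes:
        out = {name: value for name, value in out.items()
               if not any(re.search(p, name, re.IGNORECASE) for p in excludes)}
    return out
```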
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Property Includes | A list of property names to include, removing all others. This is a list of case-insensitive patterns to match against datum property names. If any inclusion patterns are configured then only properties matching one of these patterns will be included in datum. Any property name that does not match one of these patterns will be removed. |
| Property Excludes | A list of property names to exclude. This is a list of case-insensitive patterns to match against datum property names. If any exclusion expressions are configured then any property that matches one of these expressions will be removed. Exclusion expressions are processed after inclusion expressions when both are configured. |
Use the + and - buttons to add/remove property include/exclude patterns.
Each property inclusion setting contains the following settings:
| Setting | Description |
|---------|-------------|
| Name | The property name pattern to include. |
| Limit Seconds | A throttle limit, in seconds, to apply to included properties. The minimum number of seconds to limit properties that match the configured property inclusion pattern. If properties are produced faster than this rate, they will be filtered out. Leave empty (or `0`) for no throttling. |

## Split Datum Filter
The Split Datum Filter provides a way to split the properties of a datum stream into multiple new derived datum streams.
This filter is provided by the Standard Datum Filters plugin.
In the example screen shot shown above, the /power/meter/1 datum stream is split into two datum streams: /meter/1/power and /meter/1/energy. Properties with names containing current, voltage, or power (case-insensitive) will be copied to /meter/1/power. Properties with names containing hour (case-insensitive) will be copied to /meter/1/energy.
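That example can be sketched in Python, with `re.search` standing in for the filter's property name matching (illustrative only):

```python
import re

def split_datum(props, property_source_mappings):
    """Copy each input property to every output source ID whose
    property name pattern matches the property name."""
    out = {}
    for pattern, source_id in property_source_mappings:
        matched = {name: value for name, value in props.items()
                   if re.search(pattern, name)}
        if matched:
            out.setdefault(source_id, {}).update(matched)
    return out

# Mappings from the example: (?i) enables case-insensitive matching
mappings = [
    (r"(?i)current|voltage|power", "/meter/1/power"),
    (r"(?i)hour", "/meter/1/energy"),
]
```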
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Swallow Input | If enabled, then discard input datum after splitting. Otherwise leave the input datum as is. |
| Property Source Mappings | A list of property name regular expressions with associated source IDs to copy matching properties to. |

### Property Source Mappings settings
Use the + and - buttons to add/remove Property Source Mapping configurations.
Each property source mapping configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Property | A property name case-sensitive regular expression to match on the input datum stream. You can enable case-insensitive matching by including a `(?i)` prefix. |
| Source ID | The destination source ID to copy the matching properties to. Supports placeholders. |
Tip
If multiple property name expressions match the same property name, that property will be copied to all the datum streams of the associated source IDs.
## Time-based Tariff Datum Filter
The Tariff Datum Filter provides a way to inject time-based tariff rates based on a flexible tariff schedule defined with various time constraints.
This filter is provided by the Tariff Filter plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Metadata Service | The Service Name of the Metadata Service to obtain the tariff schedule from. See below for more information. |
| Metadata Path | The metadata path that will resolve the tariff schedule from the configured Metadata Service. |
| Language | An IETF BCP 47 language tag to parse the tariff data with. If not configured then the default system language will be assumed. |
| First Match | If enabled, then apply only the first tariff that matches a given datum date. If disabled, then apply all tariffs that match. |
| Schedule Cache | The number of seconds to cache the tariff schedule obtained from the configured Metadata Service. |
| Tariff Evaluator | The Service Name of a Time-based Tariff Evaluator service to evaluate each tariff to determine if it should apply to a given datum. If not configured a default algorithm is used that matches all non-empty constraints in an inclusive manner, except for the time-of-day constraint which uses an exclusive upper bound. |

### Metadata Service
SolarNode provides a User Metadata Service component that this filter can use for the Metadata Service setting. This allows you to configure the tariff schedule as user metadata in SolarNetwork and then SolarNode will download the schedule and use it as needed.
You must configure a SolarNetwork security token to use the User Metadata Service. We recommend that you create a Data security token in SolarNetwork with a limited security policy that includes an API Path of just /users/meta and a User Metadata Path of something granular like /pm/tariffs/**. This will give SolarNode access to just the tariff metadata under the /pm/tariffs metadata path.
The SolarNetwork API Explorer can be used to add the necessary tariff schedule metadata to your account. For example:
The tariff schedule obtained from the configured Metadata Service uses a simple CSV-based format that can be easily exported from a spreadsheet. Each row represents a rule that includes:
a set of time constraints that must be satisfied for the rule to be applied
a list of tariff rates to be added to datum when the constraints are satisfied
Include a header row
A header row is required because the tariff rate names are defined there. The first 4 column names are ignored.
The schedule consists of 4 time constraint columns followed by one or more tariff rate columns. Each constraint is represented as a range, in the form start - end. Whitespace is allowed around the - character. If the start and end are the same, the range may be shortened to just start. A range can be left empty to represent all values. The time constraint columns are:
| Column | Constraint | Description |
|--------|------------|-------------|
| 1 | Month range | An inclusive month range. Months can be specified as numbers (1-12), abbreviations (Jan-Dec), or full names (January - December). When using text names case does not matter and they will be parsed using the Language setting. |
| 2 | Day range | An inclusive day-of-month range. Days are specified as numbers (1-31). |
| 3 | Weekday range | An inclusive day-of-week range. Weekdays can be specified as numbers (1-7) with Monday being 1 and Sunday being 7, abbreviations (Mon-Sun), or full names (Monday - Sunday). When using text names case does not matter and they will be parsed using the Language setting. |
| 4 | Time range | An inclusive - exclusive time-of-day range. The time can be specified as whole hour numbers (0-24) or HH:MM style (00:00 - 24:00). |
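A numeric-only sketch of the `start - end` range syntax (real schedules also accept month and weekday names, which this illustration omits):

```python
def parse_range(cell, default_start, default_end):
    """Parse a numeric "start - end" constraint cell.

    An empty cell means all values (the given defaults); a single
    value is shorthand for a range of one.
    """
    cell = cell.strip()
    if not cell:
        return (default_start, default_end)
    if "-" in cell:
        start, end = (part.strip() for part in cell.split("-", 1))
        return (int(start), int(end))
    return (int(cell), int(cell))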
Starting on column 5 of the tariff schedule are arbitrary rate values to add to datum when the corresponding constraints are satisfied. The name of the datum property is derived from the header row of the column, adapted according to the following rules:
change to lower case
replace any runs of non-alphanumeric or underscore characters with a single underscore
remove any leading/trailing underscores
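Those rules can be expressed as a short sketch:

```python
import re

def rate_property_name(header):
    """Derive a datum property name from a rate column header."""
    name = header.lower()                     # change to lower case
    name = re.sub(r"[^a-z0-9_]+", "_", name)  # collapse runs of other characters
    return name.strip("_")                    # trim leading/trailing underscores
```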
Here are some examples of the header name to the equivalent property name:
| Rate Header Name | Datum Property Name |
|------------------|---------------------|
| TOU | `tou` |
| Foo Bar | `foo_bar` |
| This Isn't A Great Name! | `this_isn_t_a_great_name` |

### Example schedule
Here's an example schedule with 4 rules and a single TOU rate (the * stands for all values):
| Rule | Month | Day | Weekday | Time | TOU |
|------|-------|-----|---------|------|-----|
| 1 | Jan-Dec | * | Mon-Fri | 0-8 | 10.48 |
| 2 | Jan-Dec | * | Mon-Fri | 8-24 | 11.00 |
| 3 | Jan-Dec | * | Sat-Sun | 0-8 | 9.19 |
| 4 | Jan-Dec | * | Sat-Sun | 8-24 | 11.21 |
# Throttle Datum Filter
The Throttle Datum Filter provides a way to throttle entire datum over time, so that they are posted to SolarNetwork less frequently than a plugin that collects the data produces them. This can be useful if you need a plugin to collect data at a high frequency for use internally by SolarNode but don't need to save such high resolution of data in SolarNetwork. For example, a plugin that monitors a device and responds quickly to changes in the data might be configured to sample data every second, but you only want to capture that data once per minute in SolarNetwork.
The general idea for filtering datum is to configure rules that define which datum sources you want to filter, along with a time limit to throttle matching datum by. Any datum matching the sources that are captured faster than the time limit will be filtered and not uploaded to SolarNetwork.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Limit Seconds | A throttle limit, in seconds, to apply to matching datum. The throttle limit is applied to datum by source ID. Before each datum is uploaded to SolarNetwork, the filter will check how long has elapsed since a datum with the same source ID was uploaded. If the elapsed time is less than the configured limit, the datum will not be uploaded. |
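The throttling rule can be sketched as follows. The class and method names are illustrative only, not the plugin's actual API:

```python
import time

class SourceThrottle:
    """Per-source throttle: a datum passes only if at least
    `limit_seconds` have elapsed since the last accepted datum
    with the same source ID."""

    def __init__(self, limit_seconds: float):
        self.limit = limit_seconds
        self.last_seen = {}  # source ID -> timestamp of last accepted datum

    def accept(self, source_id: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        last = self.last_seen.get(source_id)
        if last is not None and (now - last) < self.limit:
            return False  # filtered: too soon after the previous datum
        self.last_seen[source_id] = now
        return True
```

For example, with a 60 second limit a datum stream sampled every second would upload only one datum per minute per source ID.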
# Unchanged Property Filter

The Unchanged Property Filter provides a way to discard individual datum properties that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Datum Filter for a filter that can discard entire unchanging datum (at the source ID level).
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Default Unchanged Max Seconds | When greater than 0 then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). Use this setting to ensure a property is included occasionally, even if the property value has not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. |
| Property Configurations | A list of property settings. |

## Property Settings
Use the + and - buttons to add/remove Property configurations.
Each property source mapping configuration contains the following settings:
| Setting | Description |
|---------|-------------|
| Property | A regular expression pattern to match against datum property names. All matching properties will be filtered. |
| Unchanged Max Seconds | When greater than 0 then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). This can be used to override the filter-wide Default Unchanged Max Seconds setting, or left blank to use the default value. |

# Unchanged Datum Filter
The Unchanged Datum Filter provides a way to discard entire datum that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Property Filter for a filter that can discard individual unchanging properties within a datum stream.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Unchanged Max Seconds | When greater than 0 then the maximum number of seconds to refrain from publishing an unchanged datum within a single datum stream. Use this setting to ensure a datum is included occasionally, even if the datum properties have not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. |
| Property Pattern | A property name pattern that limits the properties monitored for changes. Only property names that match this expression will be considered when determining if a datum differs from the previous datum within the datum stream. |
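A minimal sketch of the discard logic, assuming properties are compared by simple equality. The names here are illustrative, not the plugin's API:

```python
class UnchangedFilter:
    """Discard a datum whose properties equal the previously published
    ones, unless more than max_seconds have elapsed since the last
    unfiltered datum in that stream."""

    def __init__(self, max_seconds: float):
        self.max_seconds = max_seconds
        self.state = {}  # source ID -> (timestamp, properties)

    def accept(self, source_id: str, props: dict, now: float) -> bool:
        prev = self.state.get(source_id)
        if prev is not None:
            prev_ts, prev_props = prev
            if props == prev_props and (now - prev_ts) <= self.max_seconds:
                return False  # unchanged and within the window: discard
        self.state[source_id] = (now, props)
        return True
```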
# Virtual Meter Datum Filter

The Virtual Meter Datum Filter provides a way to derive an accumulating "meter reading" value out of an instantaneous property value over time. For example, if you have an irradiance sensor that allows you to capture instantaneous W/m2 power values, you could configure a virtual meter to generate Wh/m2 energy values.
Each virtual meter works with a single input datum property, typically an instantaneous property. The derived accumulating datum property will be named after that property with the time unit suffix appended. For example, an instantaneous irradiance property using the Hours time unit would result in an accumulating irradianceHours property. The value is calculated as an average between the current and the previous instantaneous property values, multiplied by the amount of time that has elapsed between the two samples.
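The default reading calculation described above can be sketched as follows (an illustrative sketch; the function name and signature are hypothetical):

```python
def advance_reading(prev_reading: float, prev_value: float, curr_value: float,
                    elapsed_seconds: float, unit_seconds: float = 3600.0) -> float:
    """Advance a virtual meter reading: average of the previous and
    current instantaneous values, multiplied by the elapsed time
    expressed in the configured time unit (Hours by default)."""
    avg = (prev_value + curr_value) / 2.0
    return prev_reading + avg * (elapsed_seconds / unit_seconds)

# An irradiance of 100 W/m2 held for one hour adds 100 Wh/m2 to the reading
print(advance_reading(0.0, 100.0, 100.0, 3600))  # 100.0
```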
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---------|-------------|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Virtual Meters | Configure as many virtual meters as you like, using the + and - buttons to add/remove meter configurations. |

## Virtual Meter Settings
The Virtual Meter settings define a single virtual meter.
| Setting | Description |
|---------|-------------|
| Property | The name of the input datum property to derive the virtual meter values from. |
| Property Type | The type of the input datum property. Typically this will be Instantaneous but when combined with an expression an Accumulating property can be used. |
| Reading Property | The name of the output meter accumulating datum property to generate. Leave empty for a default name derived from Property and Time Unit. For example, an instantaneous irradiance property using the Hours time unit would result in an accumulating irradianceHours property. |
| Time Unit | The time unit to record meter readings as. This value affects the name of the virtual meter reading property if Reading Property is left blank: it will be appended to the end of Property Name. It also affects the virtual meter output reading values, as they will be calculated in this time unit. |
| Max Age | The maximum time allowed between samples where the meter reading can advance. In case the node is not collecting samples for a period of time, this setting prevents the plugin from calculating an unexpectedly large reading value jump. For example if a node was turned off for a day, the first sample it captures when turned back on would otherwise advance the reading as if the associated instantaneous property had been active over that entire time. With this restriction, the node will record the new sample date and value, but not advance the meter reading until another sample is captured within this time period. |
| Decimal Scale | A maximum number of digits after the decimal point to round to. Set to 0 to round to whole numbers. |
| Track Only On Change | When enabled, then only update the previous reading date if the new reading value differs from the previous one. |
| Rolling Average Count | A count of samples to average the property value from. When set to something greater than 1, then apply a rolling average of this many property samples and output that value as the instantaneous source property value. This has the effect of smoothing the instantaneous values to an average over the time period leading into each output sample. Defaults to 0 so no rolling average is applied. |
| Add Instantaneous Difference | When enabled, then include an output instantaneous property of the difference between the current and previous reading values. |
| Instantaneous Difference Property | The derived output instantaneous datum property name to use when Add Instantaneous Difference is enabled. By default this property will be derived from the Reading Property value with Diff appended. |
| Reading Value | You can reset the virtual meter reading value with this setting. Note this is an advanced operation. If you submit a value for this setting, the virtual meter reading will be reset to this value such that the next datum the reading is calculated for will use this as the current meter reading. This will impact the datum stream's reported aggregate values, so you should be very sure this is something you want to do. For example if the virtual meter was at 1000 and you reset it to 0, that will appear as a -1000 drop in whatever the reading is measuring. If this occurs you can create a Reset Datum auxiliary record to accommodate the reset value. |
| Expressions | Configure as many expressions as you like, using the + and - buttons to add/remove expression configurations. |

## Virtual Meter Expression Settings
A virtual meter can use expressions to customise how the output meter value reading value is calculated. See the Expressions section for more information.
| Setting | Description |
|---------|-------------|
| Property | The datum property to store the expression result in. This must match the Reading Property of a meter configuration. Keep in mind that if Reading Property is blank, the implied value is derived from Property and Time Unit. |
| Property Type | The datum property type to use. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

## Filter parameters
When the virtual meter filter is applied to a given datum, it will generate the following filter parameters, which will be available to other filters that are applied to the same datum after this filter.
| Parameter | Description |
|-----------|-------------|
| {inputPropertyName}_diff | The difference between the current input property value and the previous input property value. The {inputPropertyName} part of the parameter name will be replaced by the actual input property name. For example irradiance_diff. |
| {meterPropertyName}_diff | The difference between the current output meter property value and the previous output meter property value. The {meterPropertyName} part of the parameter name will be replaced by the actual output meter property name. For example irradianceHours_diff. |

## Expressions
Expressions can be configured to calculate the output meter datum property, instead of using the default averaging algorithm. If an expression configuration exists with a Property that matches a configured (or implied) meter configuration Reading Property, then the expression will be invoked to generate the new meter reading value. See the Expressions guide for general expression language reference.
Warning
It is important to remember that the expression must calculate the next meter reading value. Typically this means it will calculate some differential value based on the amount of time that has elapsed and add that to the previous meter reading value.
The root object is a virtual meter expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
| Property | Type | Description |
|----------|------|-------------|
| config | VirtualMeterConfig | A VirtualMeterConfig object for the virtual meter configuration the expression is evaluating for. |
| datum | GeneralNodeDatum | A Datum object, populated with data from all property and virtual meter configurations. |
| props | Map<String,Object> | Simple Map based access to the properties in datum, and transform parameters, to simplify expressions. |
| currDate | long | The current datum timestamp, as a millisecond epoch number. |
| prevDate | long | The previous datum timestamp, as a millisecond epoch number. |
| timeUnits | decimal | A decimal number of the difference between currDate and prevDate in the virtual meter configuration's Time Unit, rounded to at most 12 decimal digits. |
| currInput | decimal | The current input property value. |
| prevInput | decimal | The previous input property value. |
| inputDiff | decimal | The difference between the currInput and prevInput values. |
| prevReading | decimal | The previous output meter property value. |
The following methods are available:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| has(name) | String | boolean | Returns true if a property named name is defined. |
| timeUnits(scale) | int | decimal | Like the timeUnits property but rounded to a specific number of decimal digits. |

## Expression example: time of use tariff reading
Imagine you'd like to track a time-of-use cost associated with the energy readings captured by an energy meter. The Time-based Tariff Datum Filter could be used to add a tou property to each datum, and then a virtual meter expression can be used to calculate a cost reading property. The cost property will be an accumulating property like any meter reading, so when SolarNetwork aggregates its value over time you will see the effective cost over each aggregate time period.
Here is a screen shot of the settings used for this scenario (note how the Reading Property value matches the Expression Property value):
The important settings to note are:
| Setting | Notes |
|---------|-------|
| Virtual Meter - Property | The input datum property is set to wattHours because we want to track changes in this property over time. |
| Virtual Meter - Property Type | We use Accumulating here because that is the type of property wattHours is. |
| Virtual Meter - Reading Property | The output reading property name. This must match the Expression - Property setting. |
| Expression - Property | This must match the Virtual Meter - Reading Property we want to evaluate the expression for. |
| Expression - Property Type | Typically this should be Accumulating since we are generating a meter reading style property. |
| Expression - Expression | The expression to evaluate. This expression looks for the tou property and, when found, the meter reading is incremented by the difference between the current and previous input wattHours property values multiplied by tou. If tou is not available, then the previous meter reading value is returned (leaving the reading unchanged). |
Assuming a datum sample with properties like the following:
| Property | Value |
|----------|-------|
| tou | 11.00 |
| currDate | 1621380669005 |
| prevDate | 1621380609005 |
| timeUnits | 0.016666666667 |
| currInput | 6095574 |
| prevInput | 6095462 |
| inputDiff | 112 |
| prevReading | 1022.782 |
Then here are some example expressions and the results they would produce:
| Expression | Result | Comment |
|------------|--------|---------|
| inputDiff / 1000 | 0.112 | Convert the input Wh property difference to kWh. |
| inputDiff / 1000 * tou | 1.232 | Multiply the input kWh by the $/kWh tariff value to calculate the cost for the elapsed time period. |
| prevReading + (inputDiff / 1000 * tou) | 1024.014 | Add the additional cost to the previous meter reading value to reach the new meter value. |
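These results follow from simple arithmetic on the sample values, which can be checked directly:

```python
# Sample values from the table above (Wh input, $/kWh tariff)
tou = 11.00
input_diff = 112            # Wh difference between samples
prev_reading = 1022.782

kwh = input_diff / 1000     # 0.112 kWh
cost = kwh * tou            # 1.232
new_reading = prev_reading + cost
print(round(new_reading, 3))  # 1024.014
```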
# Setup App

The SolarNode Setup App allows you to manage SolarNode through a web browser.
To access the Setup App, you need to know the network address of your SolarNode. In many cases you can try accessing http://solarnode/. If that does not work, you need to find the network address SolarNode is using.
Here is an example screen shot of the SolarNode Setup App:
You must log in to SolarNode to access its functions. The login credentials will have been created when you first set up SolarNode and associated it with your SolarNetwork account. The default Username will be your SolarNetwork account email address, and the password will have been randomly generated and shown to you.
Tip
You can change your SolarNode username and password after logging in. Note these credentials are not related, or tied to, your SolarNetwork login credentials.
The profile menu in the top-right of the Setup App gives you access to change your password, change your username, log out, restart, and reset SolarNode.
Tip
Your SolarNode credentials are not related, or tied to, your SolarNetwork login credentials. Changing your SolarNode username or password does not change your SolarNetwork credentials.
Choosing the Change Password menu item will take you to a form for changing your password. Fill in your current password and then your new password, then click the Submit Password button.
The Change Password form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
Choosing the Change Username menu item will take you to a form for changing your SolarNode username. Fill in your desired new username and your current password, then click the Change Username button.
The Change Username form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
You can either restart or reboot SolarNode from the Restart SolarNode menu. A restart means the SolarNode app will restart, while a reboot means the entire SolarNodeOS device will shut down and boot up again (restarting SolarNode along the way).
You might need to restart SolarNode to pick up new plugins you've installed, and you might need to reboot SolarNode if you've attached new sensors or other devices that require operating system support.
You can perform a \"factory reset\" of SolarNode to remove all your custom settings, certificate, login credentials, and so on. You also have the option to preserve some SolarNodeOS settings like WiFi credentials if you like.
The Components section lists all the configurable multi-instance components available on your SolarNode. Multi-instance means you can configure any number of a given component, each with their own settings.
For example imagine you want to collect data from a power meter, solar inverter, and weather station, all of which use the Modbus protocol. To do that you would configure three instances of the Modbus Device component, one for each device.
Use the Manage button for any listed component to add or remove instances of that component.
An instance count badge appears next to any component with at least one instance configured.
The Backup & Restore section lets you manage SolarNode backups. Each backup contains a snapshot of the settings you have configured, the node's certificate, and custom plugins.
## File System Backup Service
The File System Backup Service is the default Backup Service provided by SolarNode. It saves the backup onto the node itself. In order to be able to restore your settings if the node is damaged or lost, you must download a copy of a backup using the Download button, and save the file to a safe place.
Warning
If you do not download a copy of a backup, you run the risk of losing your settings and node certificate, making it impossible to restore the node in the event of a catastrophic hardware failure.
The configurable settings of the File System Backup Service are:
| Setting | Description |
|---------|-------------|
| Backup Directory | The folder (on the node) where the backups will be saved. |
| Copies | The number of backup copies to keep, before deleting the oldest backup. |

## S3 Backup Service
The S3 Backup Service creates cloud-based backups in AWS S3 (or any compatible provider). You must configure the credentials and S3 location details to use before any backups can be created.
Note
The S3 Backup Service requires the S3 Backup Service Plugin.
The configurable settings of the S3 Backup Service are:
| Setting | Description |
|---------|-------------|
| AWS Token | The AWS access token to authenticate with. |
| AWS Secret | The AWS access token secret to authenticate with. |
| AWS Region | The name of the Amazon region to use, for example us-west-2. |
| S3 Bucket | The name of the S3 bucket to use. |
| S3 Path | An optional root path to use for all backup data (typically a folder location). |
| Storage Class | A supported storage class, such as STANDARD (the default), STANDARD_IA, INTELLIGENT_TIERING, REDUCED_REDUNDANCY, and so on. |
| Copies | The number of backup copies to keep, before deleting the oldest backup. |
| Cache Seconds | The amount of time to cache backup metadata such as the list of available backups, in seconds. |

## Settings Backup & Restore
The Settings Backup & Restore section provides a way to manage Settings Files and Settings Resources, both of which are backups for the configured settings in SolarNode.
Warning
Settings Files and Settings Resources do not include the node's certificate or custom plugins. See the Backup & Restore section for managing \"full\" backups that do include those items.
The Export button allows you to download a Settings File with the currently active configuration.
The Import button allows you to upload a previously-downloaded Settings File.
The Settings Resource menu and associated Export to file button allow you to download specialized settings files, offered by some components in SolarNode.
The Auto backups area will have a list of buttons, each of which will let you download a Settings File that SolarNode automatically created. Each button shows you the date the settings backup was created.
Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. See the general Datum Filters section for more information about how datum filters work and what they are used for.
## Global Datum Filters
Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Click the Manage button next to any Global Datum Filter component to create, update, and remove instances of that filter.
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, each filter's result passed to the next available filter until all filters have been applied.
The Datum Queue section of the Datum Filters page shows you some processing statistics and has a couple of settings you can change:
| Setting | Description |
|---------|-------------|
| Delay | The minimum amount of time to delay processing datum after they have been added to the queue, in milliseconds. A small amount of delay allows parallel datum collection to be processed more reliably in time-based order. The default is 200 ms and usually does not need to be changed. |
| Datum Filter | The Service Name of a Datum Filter component to process datum with. See below for more information. |
The Datum Filter setting allows you to configure a single Datum Filter to apply to every datum captured in SolarNode. Since you can only configure one filter, it is very common to configure a Datum Filter Chain, where you can then configure any number of other filters to apply.
## Global Datum Filter Chain
The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
| Setting | Description |
|---------|-------------|
| Active Global Filters | A read-only list of any created Global Datum Filter component Service Name values. These filters are automatically applied, without needing to explicitly reference them in the Datum Filters list. |
| Available User Filters | A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to activate that filter. |
| Datum Filters | The list of Service Name values of User Datum Filter components to apply to datum. |

## User Datum Filters
User Datum Filters are not applied automatically: they must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain. This differs from Global Datum Filters, which are automatically applied to datum just before being uploaded to SolarNet.
Click the Manage button next to any User Datum Filter component to create, update, and remove instances of that filter.
The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file.
Warning
When SolarNode restarts all changes made in the Logger UI will be lost and the logger configuration will revert to whatever is configured in the logging configuration file.
The Logging page lists all the configured logger levels and lets you add new loggers and edit the existing ones using a simple form.
The component management page is shown when you click the Manage button for a multi-instance component. Each component instance's settings are independent, allowing you to integrate with multiple copies of a device or service.
For example if you connected a Modbus power meter and a Modbus solar inverter to a node, you would create two Modbus Device component instances, and configure them with settings appropriate for each device.
The component management screen allows you to add, update, and remove component instances.
## Add new instance
Add new component instances by clicking the Add new X button in the top-right, where X is the name of the component you are managing. You will be given the opportunity to assign a unique identifier to the new component instance:
When creating a new component instance you can provide a short name to identify it with.
When you add more than one component instance, the identifiers appear as clickable buttons that allow you to switch between the setting forms for each component.
Component instance buttons let you switch between each component instance.
After making changes to any component instance's settings, click the Save All Changes button in the top-left.
Save All Changes works across all component instances
You can safely switch between and make changes on multiple component instance settings before clicking the Save All Changes button: your changes across all instances will be saved.
## Remove or reset instances
At the bottom of each component instance are buttons that let you delete or reset that component instance.
Buttons to delete or reset component instance.
The Delete button will remove that component instance from appearing, however the settings associated with that instance are preserved. If you re-add an instance with the same identifier then the previous settings will be restored. You can think of the Delete button as disabling the component, giving you the option to \"undo\" the deletion if you like.
The Restore button will reset the component to its factory defaults, removing any settings you have customized on that instance. The instance remains visible and you can re-configure the settings as needed.
## Remove all instances
The Remove all button in the top-right of the page allows you to remove all component instances, including any customized settings on those instances.
The SolarNode UI will show the list of active Operational Modes on the Settings > Operational Modes page. Click the + button to activate modes, and the - button to deactivate an active mode.
The main Settings page also shows a read-only view of the active modes:
SolarNode includes a Command Console page where troubleshooting commands from supporting plugins are displayed. The page shows a list of available command topics and lets you toggle the inclusion of each topic's commands at the bottom of the page.
The Modbus TCP Connection and Modbus Serial Connection components support publishing mbpoll commands under a modbus command topic. The mbpoll utility is included in SolarNodeOS; if not already installed you can install it by logging in to the SolarNodeOS shell and running the following command:
```sh
sudo apt install mbpoll
```
Modbus command logging must be enabled on each Modbus Connection component by toggling the CLI Publishing setting on.
Once CLI Publishing has been enabled, every Modbus request made on that connection will generate an equivalent mbpoll command, and those commands will be shown on the Command Console.
You can copy any logged command and paste that into a SolarNodeOS shell to execute the Modbus request and see the results.
SolarNode runs on SolarNodeOS, a Debian Linux-based operating system. If you are already familiar with Debian Linux, or one of the other Linux distributions built from Debian like Ubuntu Linux, you will find it pretty easy to get around in SolarNodeOS.
"},{"location":"users/sysadmin/#system-user-account","title":"System User Account","text":"
SolarNodeOS ships with a solar user account that you can use to log into the operating system. The default password is solar but may have been changed by a system administrator.
Warning
The solar user account is not related to the account you log into the SolarNode Setup App with.
"},{"location":"users/sysadmin/#change-system-user-account-password","title":"Change system user account password","text":"
To change the system user account's password, use the passwd command.
Changing the system user account password
$ passwd\nChanging password for solar.\nCurrent password:\nNew password:\nRetype new password:\npasswd: password updated successfully\n
Tip
Changing the solar user's password is highly recommended when you first deploy a node.
Some commands require administrative permission. The solar user can execute arbitrary commands with administrative permission by prefixing the command with sudo. For example the reboot command will reboot SolarNodeOS, but requires administrative permission.
Run a command as a system administrator
$ sudo reboot\n
The sudo command will prompt you for the solar user's password and then execute the given command as the administrator user root.
The solar user can also become the root administrator user by way of the su command:
Gain system administrative privileges with su
$ sudo su -\n
Once you have become the root user you no longer need to use the sudo command, as you already have administrative permissions.
"},{"location":"users/sysadmin/#network-access-with-ssh","title":"Network Access with SSH","text":"
SolarNodeOS comes with an SSH service active, which allows you to remotely connect and access the command line, using any SSH client.
"},{"location":"users/sysadmin/date-time/","title":"Date and Time","text":"
SolarNodeOS includes date and time management functions through the timedatectl command. Run timedatectl status to view information about the current date and time settings.
Viewing the current date and time settings
$ timedatectl status\n Local time: Fri 2023-05-26 03:41:42 BST\n Universal time: Fri 2023-05-26 02:41:42 UTC\n RTC time: n/a\n Time zone: Europe/London (BST, +0100)\nSystem clock synchronized: yes\n NTP service: active\n RTC in local TZ: no\n
"},{"location":"users/sysadmin/date-time/#changing-the-local-time-zone","title":"Changing the local time zone","text":"
SolarNodeOS uses the UTC time zone by default. If you would like to change this, use the timedatectl set-timezone command.
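For example (the zone name shown is just an illustration; substitute your own):

```shell
# change the local time zone to an example zone
sudo timedatectl set-timezone Pacific/Auckland
```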
You can list the available time zone names by running timedatectl list-timezones.
"},{"location":"users/sysadmin/date-time/#internet-time-synchronization","title":"Internet time synchronization","text":"
SolarNodeOS uses the systemd-timesyncd service to synchronize the node's clock with internet time servers. Normally no configuration is necessary. You can check the status of the network time synchronization with timedatectl like:
$ timedatectl status\n Local time: Fri 2023-05-26 03:41:42 BST\n Universal time: Fri 2023-05-26 02:41:42 UTC\n RTC time: n/a\n Time zone: Europe/London (BST, +0100)\nSystem clock synchronized: yes\n NTP service: active\n RTC in local TZ: no\n
Warning
For internet time synchronization to work, SolarNode needs to access Network Time Protocol (NTP) servers, using UDP over port 123.
"},{"location":"users/sysadmin/date-time/#network-time-server-configuration","title":"Network time server configuration","text":"
The NTP servers that SolarNodeOS uses are configured in the /etc/systemd/timesyncd.conf file. The default configuration uses a pool of Debian servers, which should be suitable for most nodes. If you would like to change the configuration, edit the timesyncd.conf file and change the NTP= line, for example
Configuring the NTP servers to use
[Time]\nNTP=my.ntp.example.com\n
"},{"location":"users/sysadmin/date-time/#setting-the-date-and-time","title":"Setting the date and time","text":"
In order to manually set the date and time, NTP time synchronization must be disabled with timedatectl set-ntp false. Then you can run timedatectl set-time to set the date and time:
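A sketch of those two steps; the date and time value shown is only an example:

```shell
# disable NTP synchronization first, then set the clock manually
sudo timedatectl set-ntp false
sudo timedatectl set-time "2023-05-26 14:30:00"
```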
SolarNodeOS uses the systemd-networkd service to manage network devices and their settings. A network device relates to a physical network hardware device or a software networking component, as recognized and named by the operating system. For example, the first available ethernet device is typically named eth0 and the first available WiFi device wlan0.
Network configuration is stored in .network files in the /etc/systemd/network directory. SolarNodeOS comes with default support for ethernet and WiFi network devices.
The default 10-eth.network file configures the default ethernet network eth0 to use DHCP to automatically obtain a network address, routing information, and DNS servers to use.
SolarNodeOS networks are configured to use DHCP by default. If you need to re-configure a network to use DHCP, change the configuration to look like this:
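A minimal DHCP configuration sketch, assuming the eth0 device name used by the default 10-eth.network file:

```ini
[Match]
# the network device this configuration applies to
Name=eth0

[Network]
# obtain address, routes, and DNS automatically
DHCP=yes
```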
If you need to use a static network address, instead of DHCP, edit the network configuration file (for example, the 10-eth.network file for the ethernet network), and change it to look like this:
Ethernet network with static address configuration
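A sketch of a static address configuration; the address values shown are examples that you would replace with values for your own network:

```ini
[Match]
Name=eth0

[Network]
DNS=192.168.1.1

[Address]
Address=192.168.1.100/24

[Route]
Gateway=192.168.1.1
```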
Use Name, DNS, Address, and Gateway values specific to your network. The same static configuration for a single address can also be specified in a slightly more condensed form, moving everything into the [Network] section:
Ethernet network with condensed single static address configuration
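A sketch of the condensed form, again using example address values:

```ini
[Match]
Name=eth0

[Network]
DNS=192.168.1.1
Address=192.168.1.100/24
Gateway=192.168.1.1
```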
The default 20-wlan.network file configures the default WiFi network wlan0 to use DHCP to automatically obtain a network address, routing information, and DNS servers to use. To configure the WiFi network SolarNode should connect to, run this command:
Configuring the SolarNode WiFi network
sudo dpkg-reconfigure sn-wifi\n
You will then be prompted to supply the following WiFi settings:
Country code, e.g. NZ
WiFi network name (SSID)
WiFi network password
Note about WiFi support
WiFi support is provided by the sn-wifi package, which may not be installed. See the Package Maintenance section for information about installing packages.
"},{"location":"users/sysadmin/networking/#wifi-auto-access-point-mode","title":"WiFi Auto Access Point mode","text":"
For initial setup of the WiFi settings on a SolarNode, it can be helpful for SolarNode to create its own WiFi network, as an access point. The sn-wifi-autoap@wlan0 service can be used for this. When enabled, it will monitor the WiFi network status, and when the WiFi connection fails for any reason it will enable a SolarNode WiFi network using a gateway IP address of 192.168.16.1. Thus when the SolarNode access point is enabled, you can connect to that network from your own device and reach the Setup App at http://192.168.16.1/ or the command line via ssh solar@192.168.16.1.
The default 21-wlan-ap.network file configures the default WiFi network wlan0 to act as an Access Point
This service is not enabled by default. To enable it, run the following:
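A sketch using the standard systemctl command; the unit name is taken from above:

```shell
# enable the service at boot and start it now
sudo systemctl enable --now sn-wifi-autoap@wlan0
```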
Once enabled, if SolarNode cannot connect to the configured WiFi network, it will create its own SolarNode network. By default the password for this network is solarnode. The Access Point network configuration is defined in the /etc/network/wpa_supplicant-wlan0.conf file, in a section like this:
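A sketch of what that section might look like; the ssid value and the mode=2 (access point mode) setting are assumptions for illustration:

```ini
network={
    # access point network name (assumed value)
    ssid="SolarNode"
    # mode 2 puts wpa_supplicant into access point mode
    mode=2
    key_mgmt=WPA-PSK
    # default access point password noted above
    psk="solarnode"
}
```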
SolarNodeOS uses the nftables system to provide an IP firewall to SolarNode. By default only the following incoming TCP ports are allowed:
| Port | Description |
|------|-------------|
| 22   | SSH access |
| 80   | HTTP SolarNode UI |
| 8080 | HTTP SolarNode UI alternate port |
"},{"location":"users/sysadmin/networking/#open-additional-ip-ports","title":"Open additional IP ports","text":"
You can edit the /etc/nftables.conf file to add additional open IP ports as needed. A good place to insert new rules is after the lines that open ports 80 and 8080:
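For example, a rule to accept an additional TCP port might look like this (8443 is just an example port number):

```text
tcp dport 8443 accept
```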
SolarNodeOS supports a wide variety of software packages. You can install new packages as well as apply package updates as they become available. The apt command performs these tasks.
For SolarNodeOS to know what packages, or package updates, are available, you need to periodically update the available package information. This is done with the apt update command:
Update package information
sudo apt update # (1)!\n
The sudo command runs other commands with administrative privileges. It will prompt you for your user account password (typically the solar user).
To see if there are any package updates available, run apt list like this:
List packages with updates available
apt list --upgradable\n
If there are updates available, that will show them. You can apply all package updates with the apt upgrade command, like this:
Upgrade all packages
sudo apt upgrade\n
If you want to install an update for a specific package, use the apt install command instead.
Tip
The apt upgrade command will update existing packages and install packages that are required by those packages, but it will never remove an existing package. Sometimes you will want to allow packages to be removed during the upgrade process; to do that use the apt full-upgrade command.
"},{"location":"users/sysadmin/packages/#search-for-packages","title":"Search for packages","text":"
Use the apt search command to search for packages. By default this will match package names and their descriptions. You can search just for package names by including a --names-only argument.
Search for packages
# search for \"name\" across package names and descriptions\napt search name\n\n# search for \"name\" across package names only\napt search --names-only name\n\n# multiple search terms are logically \"and\"-ed together\napt search name1 name2\n
You can remove packages with the apt remove command. That command will preserve any system configuration associated with the package(s); if you would like to also remove that you can use the apt purge command.
Removing packages
sudo apt remove mypackage\n\n# use `purge` to also remove configuration\nsudo apt purge mypackage\n
SolarNode is managed as a systemd service. There are some shortcut commands to more easily manage the service.
| Command | Description |
|---------|-------------|
| sn-start | Start the SolarNode service. |
| sn-restart | Restart the SolarNode service. |
| sn-status | View status information about the SolarNode service (see if it is running or not). |
| sn-stop | Stop the SolarNode service. |
The sn-stop command requires administrative permissions, so you may be prompted for your system account password (usually the solar user's password).
"},{"location":"users/sysadmin/solarnode-service/#solarnode-service-environment","title":"SolarNode service environment","text":"
You can modify the environment variables passed to the SolarNode service, as well as modify the Java runtime options used. You may want to do this, for example, to turn on Java remote debugging support or to give the SolarNode process more memory.
The systemd solarnode.service unit will load the /etc/solarnode/env.conf environment configuration file if it is present. You can define arbitrary environment variables using a simple key=value syntax.
SolarNodeOS ships with a /etc/solarnode/env.conf.example file you can use for reference.
"}]}
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"SolarNode Handbook","text":"
This handbook provides guides and reference documentation about SolarNode, the distributed computing part of SolarNetwork.
SolarNode is the swiss army knife for IoT monitoring and control. It is deployed on inexpensive computers in homes, buildings, vehicles, and even EV chargers, connected to any number of sensors, meters, building automation systems, and more. There are several SolarNode icons in the image below. Can you spot them all?
You can enable Java remote debugging for SolarNode on a node device for SolarNode plugin development or troubleshooting by modifying the SolarNode service environment. Once enabled, you can use SSH port forwarding to enable Java remote debugging in your Java IDE of choice.
To enable Java remote debugging, copy the /etc/solarnode/env.conf.example file to /etc/solarnode/env.conf. The example already includes this support, using port 9142 for the debugging port. Then restart the solarnode service:
Creating a custom SolarNode environment with debugging support
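A sketch of those steps, using the sn-restart shortcut command:

```shell
# start from the example environment file, which already
# enables remote debugging on port 9142
sudo cp /etc/solarnode/env.conf.example /etc/solarnode/env.conf

# restart SolarNode so the new environment takes effect
sn-restart
```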
Then you can use ssh from your development machine to forward a local port to the node's 9142 port, and then have your favorite IDE establish a remote debugging connection on your local port.
For example, on a Linux or macOS machine you could forward port 8000 to a node's port 9142 like this:
Creating a port-forwarding SSH connection from a development machine to SolarNode
$ ssh -L8000:localhost:9142 solar@solarnode\n
Once that ssh connection is established, your IDE can be used to connect to localhost:8000 for a remote Java debugging session.
The SolarNode platform has been designed to be highly modular and dynamic, by using a plugin-based architecture. The plugin system SolarNode uses is based on the OSGi specification, where plugins are implemented as OSGi bundles. SolarNode can be thought of as a collection of OSGi bundles that, when combined and deployed together in an OSGi framework like Eclipse Equinox, form the complete SolarNode platform.
To summarize: everything in SolarNode is a plugin!
OSGi bundles and Eclipse plug-ins
Each OSGi bundle in SolarNode comes configured as an Eclipse IDE (or simply Eclipse) plug-in project. Eclipse refers to OSGi bundles as \"plug-ins\" and its OSGi development tools are collectively known as the Plug-in Development Environment, or PDE for short. We use the terms bundle and plug-in and plugin somewhat interchangeably in the SolarNode project. Although Eclipse is not actually required for SolarNode development, it is very convenient.
Practically speaking a plugin, which is an OSGi bundle, is simply a Java JAR file that includes the Java code implementing your plugin and some OSGi metadata in its Manifest. For example, here is the contents of the net.solarnetwork.common.jdt plugin JAR:
Central to the plugin architecture SolarNode uses is the concept of a service. In SolarNode a service is defined by a Java interface. A plugin can advertise a service to the SolarNode runtime. Plugins can lookup a service in the SolarNode runtime and then invoke the methods defined on it.
The advertising/lookup framework SolarNode uses is provided by OSGi. OSGi provides several ways to manage services. In SolarNode the most common is to use Blueprint XML documents to both publish services (advertise) and acquire references to services (lookup).
The Gemini Blueprint implementation provides some useful extensions that SolarNode makes frequent use of. To use the extensions you need to declare the Gemini Blueprint Compendium namespace in your Blueprint XML file, like this:
This example declares the Gemini Blueprint Compendium XML namespace prefix osgix and a related Spring Beans namespace prefix beans. You will see those used throughout SolarNode.
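A sketch of such a declaration; the osgix and beans namespace URIs shown are the typical Gemini Blueprint and Spring ones, included here as an assumption:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:osgix="http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">
```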
Managed Properties provide a way to use the Configuration Admin service to manage user-configurable service properties. Conceptually it is like linking a class to a set of dynamic runtime Settings: Configuration Admin provides change event and persistence APIs for the settings, and the Managed Properties applies those settings to the linked service.
Imagine you have a service class MyService with a configurable property level. We can make that property a managed, persistable setting by adding a <osgix:managed-properties> element to our Blueprint XML, like this:
MyService class · MyService localization · Blueprint XML
package com.example;\n\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\nimport net.solarnetwork.node.service.support.BaseIdentifiable;\nimport net.solarnetwork.settings.SettingSpecifier;\nimport net.solarnetwork.settings.SettingSpecifierProvider;\nimport net.solarnetwork.settings.SettingsChangeObserver;\nimport net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;\n\n/**\n * My super-duper service.\n *\n * @author matt\n * @version 1.0\n */\npublic class MyService extends BaseIdentifiable\nimplements SettingsChangeObserver, SettingSpecifierProvider {\n\nprivate int level;\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.MyService\"; // (1)!\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Collections.singletonList(\nnew BasicTextFieldSettingSpecifier(\"level\", String.valueOf(0)));\n}\n\n@Override\npublic void configurationChanged(Map<String, Object> properties) {\n// the settings have changed; do something\n}\n\npublic int getLevel() {\nreturn level;\n}\n\npublic void setLevel(int level) {\nthis.level = level;\n}\n\n}\n
The setting UID will be the Configuration Admin PID
title = Super-duper Service\ndesc = This service does it all.\n\nlevel.key = Level\nlevel.desc = This one goes to 11.\n
You nest the <osgix:managed-properties> element within the actual service <bean> element you want to apply the managed settings on.
note how the persistent-id attribute value matches the getSettingUid() value in MyService.java
the autowire-on-update attribute toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set it to false and provide an update-method if you want to handle changes yourself
the update-method attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.
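A sketch of the corresponding Blueprint XML, assuming the MyService class shown earlier (the bean id is arbitrary):

```xml
<bean id="myService" class="com.example.MyService">
    <!-- persistent-id matches getSettingUid() in MyService -->
    <osgix:managed-properties
        persistent-id="com.example.MyService"
        autowire-on-update="true"
        update-method="configurationChanged"/>
</bean>
```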
When this plugin is deployed in SolarNode, the component will appear on the main Settings page and offer a configurable Level setting, like this:
"},{"location":"developers/osgi/blueprint-compendium/#managed-service-factory","title":"Managed Service Factory","text":"
The Managed Service Factory service provides a way to use the Configuration Admin service to manage multiple copies of a user-configurable service's properties. Conceptually it is like linking a class to a set of dynamic runtime Settings, but you can create as many independent copies as you like. Configuration Admin provides change event and persistence APIs for the settings, and the Managed Service Factory applies those settings to each linked service instance.
Imagine you have a service class ManagedService with a configurable property level. We can make that property a factory of managed, persistable settings by adding a <osgix:managed-service-factory> element to our Blueprint XML, like this:
ManagedService class · ManagedService localization · Blueprint XML
package com.example;\n\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\nimport net.solarnetwork.node.service.support.BaseIdentifiable;\nimport net.solarnetwork.settings.SettingSpecifier;\nimport net.solarnetwork.settings.SettingSpecifierProvider;\nimport net.solarnetwork.settings.SettingsChangeObserver;\nimport net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;\n\n/**\n * My super-duper managed service.\n *\n * @author matt\n * @version 1.0\n */\npublic class ManagedService extends BaseIdentifiable\nimplements SettingsChangeObserver, SettingSpecifierProvider {\n\nprivate int level;\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.ManagedService\"; // (1)!\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Collections.singletonList(\nnew BasicTextFieldSettingSpecifier(\"level\", String.valueOf(0)));\n}\n\n@Override\npublic void configurationChanged(Map<String, Object> properties) {\n// the settings have changed; do something\n}\n\npublic int getLevel() {\nreturn level;\n}\n\npublic void setLevel(int level) {\nthis.level = level;\n}\n\n}\n
The setting UID will be the Configuration Admin factory PID
title = Super-duper Managed Service\ndesc = This managed service does it all.\n\nlevel.key = Level\nlevel.desc = This one goes to 11.\n
The SettingSpecifierProviderFactory service is what makes the managed service factory appear as a component in the SolarNode Settings UI.
The factoryUid defines the Configuration Admin factory PID and the Settings UID.
You add a <osgix:managed-service-factory> element in your Blueprint XML, with a nested <bean> \"template\" within it. The template bean will be instantiated for each service instance instantiated by the Managed Service Factory.
note how the factory-pid attribute value matches the getSettingUid() value in ManagedService.java and the factoryUid declared in #2.
the autowire-on-update attribute toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set it to false and provide an update-method if you want to handle changes yourself
the update-method attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.
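A sketch of the corresponding Blueprint XML, assuming the ManagedService class shown earlier; the nested <bean> acts as the template instantiated for each configured instance:

```xml
<!-- factory-pid matches getSettingUid() in ManagedService -->
<osgix:managed-service-factory factory-pid="com.example.ManagedService"
        autowire-on-update="true" update-method="configurationChanged">
    <!-- template bean, instantiated per factory configuration -->
    <bean class="com.example.ManagedService"/>
</osgix:managed-service-factory>
```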
When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page like this:
After clicking on the Manage button next to this component, the Settings UI allows you to create any number of instances of the component, each with their own setting values. Here is a screen shot showing two instances having been created:
SolarNode supports the OSGi Blueprint Container Specification so plugins can declare their service dependencies and register their services by way of an XML file deployed with the plugin. If you are familiar with the Spring Framework's XML configuration, you will find Blueprint very similar. SolarNode uses the Eclipse Gemini implementation of the Blueprint specification, which is directly derived from Spring Framework.
Note
This guide will not document the full Blueprint XML syntax. Rather, it will attempt to showcase the most common parts used in SolarNode. Refer to the Blueprint Container Specification for full details of the specification.
Imagine you are working on a plugin and have a com.example.Greeter interface you would like to register as a service for other plugins to use, and an implementation of that service in com.example.HelloGreeter that relies on the Placeholder Service provided by SolarNode:
Greeter service · HelloGreeter implementation
package com.example;\npublic interface Greeter {\n\n/**\n * Greet something with a given name.\n * @param name the name to greet\n * @return the greeting\n */\nString greet(String name);\n\n}\n
package com.example;\nimport net.solarnetwork.node.service.PlaceholderService;\npublic class HelloGreeter implements Greeter {\n\nprivate final PlaceholderService placeholderService;\n\npublic HelloGreeter(PlaceholderService placeholderService) {\nsuper();\nthis.placeholderService = placeholderService;\n}\n\n@Override\npublic String greet(String name) {\nreturn placeholderService.resolvePlaceholders(\nString.format(\"Hello %s, from {myName}.\", name),\nnull);\n}\n}\n
Assuming the PlaceholderService will resolve {myName} to Office Node, we would expect the greet() method to run like this:
Greeter greeter = resolveGreeterService();\nString result = greeter.greet(\"Joe\");\n// result is \"Hello Joe, from Office Node.\"\n
In the plugin we then need to:
Obtain a net.solarnetwork.node.service.PlaceholderService to pass to the HelloGreeter(PlaceholderService) constructor
Register the HelloGreeter component as a com.example.Greeter service in the SolarNode platform
Here is an example Blueprint XML document that does both:
Blueprint XML example
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n\n<!-- Declare a reference (lookup) to the PlaceholderService -->\n<reference id=\"placeholderService\"\ninterface=\"net.solarnetwork.node.service.PlaceholderService\"/>\n\n<service interface=\"com.example.Greeter\">\n<bean class=\"com.example.HelloGreeter\">\n<argument ref=\"placeholderService\"/>\n</bean>\n</service>\n\n</blueprint>\n
"},{"location":"developers/osgi/blueprint/#blueprint-xml-resources","title":"Blueprint XML Resources","text":"
Blueprint XML documents are added to a plugin's OSGI-INF/blueprint classpath location. A plugin can provide any number of Blueprint XML documents there, but often a single file is sufficient and a common convention in SolarNode is to name it module.xml.
To make use of services registered by SolarNode plugins, you declare a reference to that service so you may refer to it elsewhere within the Blueprint XML. For example, imagine you wanted to use the Placeholder Service in your component. You would obtain a reference to that like this:
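For example, a reference to the Placeholder Service looks like this:

```xml
<reference id="placeholderService"
    interface="net.solarnetwork.node.service.PlaceholderService"/>
```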
The id attribute allows you to refer to this service elsewhere in your Blueprint XML, while interface declares the fully-qualified Java interface of the service you want to use.
Components in Blueprint are Java classes you would like instantiated when your plugin starts. They are declared using a <bean> element in Blueprint XML. You can assign each component a unique identifier using an id attribute, and then you can refer to that component in other components.
Imagine an example component class com.example.MyComponent:
package com.example;\n\nimport net.solarnetwork.node.service.PlaceholderService;\n\npublic class MyComponent {\n\nprivate final PlaceholderService placeholderService;\nprivate int minimum;\n\npublic MyComponent(PlaceholderService placeholderService) {\nsuper();\nthis.placeholderService = placeholderService;\n}\n\npublic String go() {\nreturn PlaceholderService.resolvePlaceholders(placeholderService,\n\"{building}/temp\", null);\n}\n\npublic int getMinimum() {\nreturn minimum;\n}\n\npublic void setMinimum(int minimum) {\nthis.minimum = minimum;\n}\n}\n
Here is how that component could be declared in Blueprint:
If your component requires any constructor arguments, they can be specified with nested <argument> elements in Blueprint. The <argument> value can be specified as a reference to another component using a ref attribute whose value is the id of that component, or as a literal value using a value attribute.
You can configure mutable class properties on a component with nested <property name=\"\"> elements in Blueprint. A mutable property is a Java setter method. For example an int property minimum would be associated with a Java setter method public void setMinimum(int value).
The <property> value can be specified as a reference to another component using a ref attribute whose value is the id of that component, or as a literal value using a value attribute.
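Putting those together, a sketch of declaring MyComponent with a constructor argument and a property; this assumes a placeholderService reference declared elsewhere, and the minimum value is just an example:

```xml
<bean id="myComponent" class="com.example.MyComponent">
    <!-- constructor argument, referencing another component by id -->
    <argument ref="placeholderService"/>
    <!-- literal value for the setMinimum(int) setter -->
    <property name="minimum" value="10"/>
</bean>
```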
Blueprint can invoke a method on your component when it has finished instantiating and configuring the object (when the plugin starts), and another when it destroys the instance (when the plugin is stopped). You simply provide the name of the method you would like Blueprint to call in the init-method and destroy-method attributes of the <bean> element. For example:
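A sketch, assuming hypothetical startup() and shutdown() methods on the component:

```xml
<bean id="myComponent" class="com.example.MyComponent"
        init-method="startup" destroy-method="shutdown">
    <argument ref="placeholderService"/>
</bean>
```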
You can make any component available to other plugins by registering the component with a <service> element that declares what interface(s) your component provides. Once registered, other plugins can make use of your component, for example by declaring a <reference> to your component class in their Blueprint XML.
Note
You can only register Java interfaces as services, not classes.
For example, imagine a com.example.Startable interface like this:
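A sketch of such an interface (the method name is illustrative):

```java
package com.example;

/**
 * Something that can be started.
 */
public interface Startable {

    /** Start this service. */
    void start();

}
```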
We can register MyComponent as a Startable service using a <service> element like this in Blueprint:
Direct service component · Indirect service component
<service interface=\"com.example.Startable\">\n<!-- The service implementation is nested directly within -->\n<bean class=\"com.example.MyComponent\"/>\n</service>\n
<!-- The service implementation is referenced indirectly... -->\n<service ref=\"myComponent\" interface=\"com.example.Startable\"/>\n\n<!-- ... to a bean with a matching id attribute -->\n<bean id=\"myComponent\" class=\"com.example.MyComponent\"/>\n
"},{"location":"developers/osgi/blueprint/#multiple-service-interfaces","title":"Multiple Service Interfaces","text":"
You can advertise any number of service interfaces that your component supports, by nesting an <interfaces> element within the <service> element, in place of the interface attribute. For example:
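A sketch, assuming a hypothetical com.example.Stoppable interface that the component also implements:

```xml
<service ref="myComponent">
    <interfaces>
        <value>com.example.Startable</value>
        <value>com.example.Stoppable</value>
    </interfaces>
</service>
```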
"},{"location":"developers/osgi/blueprint/#export-service-packages","title":"Export service packages","text":"
For a registered service to be of any use to another plugin, the package the service is defined in must be exported by the plugin hosting that package. That is because the plugin wishing to add a reference to the service will need to import the package in order to use it.
For example, the plugin that hosts the com.example.service.MyService service would need a manifest file that includes an Export-Package attribute similar to:
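A sketch of such a manifest attribute; the version is an example:

```text
Export-Package: com.example.service;version="1.0.0"
```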
Plugins in SolarNode can be added to and removed from the platform at any time without restarting the SolarNode process, because of the Life Cycle process OSGi manages. The life cycle of a plugin consists of a set of states and OSGi will transition a plugin's state over the course of the plugin's life.
The available plugin states are:
| State | Description |
|-------|-------------|
| INSTALLED | The plugin has been successfully added to the OSGi framework. |
| RESOLVED | All package dependencies that the bundle needs are available. This state indicates that the plugin is either ready to be started or has stopped. |
| STARTING | The plugin is being started by the OSGi framework, but it has not finished starting yet. |
| ACTIVE | The plugin has been successfully started and is running. |
| STOPPING | The plugin is being stopped by the OSGi framework, but it has not finished stopping yet. |
| UNINSTALLED | The plugin has been removed by the OSGi framework. It cannot change to another state. |
The possible changes in state can be visualized in the following state-change diagram:
A plugin can opt in to receiving callbacks for the start/stop state transitions by providing an org.osgi.framework.BundleActivator implementation and declaring that class in the Bundle-Activator manifest attribute. This can be useful when a plugin needs to initialize some resources when the plugin is started, and then release those resources when the plugin is stopped.
BundleActivator API · BundleActivator implementation example · Manifest declaration example
public interface BundleActivator {\n/**\n * Called when this bundle is started so the Framework can perform the\n * bundle-specific activities necessary to start this bundle.\n *\n * @param context The execution context of the bundle being started.\n */\npublic void start(BundleContext context) throws Exception;\n\n/**\n * Called when this bundle is stopped so the Framework can perform the\n * bundle-specific activities necessary to stop the bundle.\n *\n * @param context The execution context of the bundle being stopped.\n */\npublic void stop(BundleContext context) throws Exception;\n}\n
As SolarNode plugins are OSGi bundles, which are Java JAR files, every plugin automatically includes a META-INF/MANIFEST.MF file as defined in the Java JAR File Specification. The MANIFEST.MF file is where OSGi metadata is included, turning the JAR into an OSGi bundle (plugin).
In OSGi plugins are always versioned and Java packages may be versioned. Versions follow Semantic Versioning rules, generally using this syntax:
major.minor.patch\n
In the manifest example you can see the plugin version 3.0.0 declared in the Bundle-Version attribute:
Bundle-Version: 3.0.0\n
The example also declares (exports) a net.solarnetwork.common.jdt package for other plugins to import (use) as version 2.0.0, in the Export-Package attribute:
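That declaration looks like this in the manifest:

```text
Export-Package: net.solarnetwork.common.jdt;version="2.0.0"
```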
The example also uses (imports) a versioned package net.solarnetwork.service using a version range greater than or equal to 1.0 and less than 2.0 and an unversioned package org.eclipse.jdt.core.compiler, in the Import-Package attribute:
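That declaration looks like this in the manifest (note that continuation lines in a manifest begin with a single space):

```text
Import-Package: net.solarnetwork.service;version="[1.0,2.0)",
 org.eclipse.jdt.core.compiler
```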
Some OSGi version attributes allow version ranges to be declared, such as the Import-Package attribute. A version range is a comma-delimited lower,upper specifier. Square brackets represent inclusive values and round brackets represent exclusive values. A value can be omitted to represent an unbounded value. Here are some examples:
| Range | Logic | Description |
|-------|-------|-------------|
| `[1.0,2.0)` | 1.0.0 ≤ x < 2.0.0 | Greater than or equal to 1.0.0 and less than 2.0.0 |
| `(1,3)` | 1.0.0 < x < 3.0.0 | Greater than 1.0.0 and less than 3.0.0 |
| `[1.3.2,)` | 1.3.2 ≤ x | Greater than or equal to 1.3.2 |
| `1.3.2` | 1.3.2 ≤ x | Greater than or equal to 1.3.2 (shorthand notation) |
Implied unbounded range
An inclusive lower, unbounded upper range can be specified using a shorthand notation of just the lower bound, like 1.3.2.
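The range rules above can be sketched in plain Java. This is a simplified checker for illustration only — it handles numeric `major.minor.patch` segments and ignores OSGi version qualifiers entirely:

```java
public class VersionRanges {

	/** Compare two dotted numeric versions; missing segments are treated as 0. */
	static int compare(String a, String b) {
		String[] as = a.split("\\."), bs = b.split("\\.");
		for (int i = 0; i < Math.max(as.length, bs.length); i++) {
			int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
			int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
			if (ai != bi) {
				return ai < bi ? -1 : 1;
			}
		}
		return 0;
	}

	/** Test if version falls within an OSGi-style range like "[1.0,2.0)". */
	public static boolean inRange(String range, String version) {
		range = range.trim();
		if (!range.startsWith("[") && !range.startsWith("(")) {
			// shorthand notation: inclusive lower bound, unbounded upper
			return compare(version, range) >= 0;
		}
		boolean lowInclusive = range.startsWith("[");
		boolean highInclusive = range.endsWith("]");
		String[] parts = range.substring(1, range.length() - 1).split(",");
		String low = parts[0].trim();
		String high = parts.length > 1 ? parts[1].trim() : "";
		int cl = compare(version, low);
		if (lowInclusive ? cl < 0 : cl <= 0) {
			return false;
		}
		if (high.isEmpty()) {
			return true; // omitted upper bound is unbounded
		}
		int ch = compare(version, high);
		return highInclusive ? ch <= 0 : ch < 0;
	}

	public static void main(String[] args) {
		System.out.println(inRange("[1.0,2.0)", "1.5.0")); // true
		System.out.println(inRange("[1.0,2.0)", "2.0.0")); // false
		System.out.println(inRange("1.3.2", "2.0.0"));     // true
	}
}
```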
Each plugin must provide the following attributes:
| Attribute | Example | Description |
|-----------|---------|-------------|
| `Bundle-ManifestVersion` | `2` | declares the OSGi bundle manifest version; always `2` |
| `Bundle-Name` | `Awesome Data Source` | a concise human-readable name for the plugin |
| `Bundle-SymbolicName` | `com.example.awesome` | a machine-readable, universally unique identifier for the plugin |
| `Bundle-Version` | `1.0.0` | the plugin version |
| `Bundle-RequiredExecutionEnvironment` | `JavaSE-1.8` | a required OSGi execution environment |

## Recommended attributes
Each plugin is recommended to provide the following attributes:
| Attribute | Example | Description |
|-----------|---------|-------------|
| `Bundle-Description` | `An awesome data source that collects awesome data.` | a longer human-readable description of the plugin |
| `Bundle-Vendor` | `ACME Corp` | the name of the entity or organisation that authored the plugin |

## Common attributes
Other common manifest attributes are:
| Attribute | Example | Description |
|-----------|---------|-------------|
| `Bundle-Activator` | `com.example.awesome.Activator` | a fully-qualified Java class name that implements the `org.osgi.framework.BundleActivator` interface, to handle plugin lifecycle events; see Activator for more information |
| `Export-Package` | `net.solarnetwork.common.jdt;version="2.0.0"` | a package export list |
| `Import-Package` | `net.solarnetwork.service;version="[1.0,2.0)"` | a package dependency list |

## Package dependencies
A plugin must declare the Java packages it directly uses in an Import-Package attribute. This attribute accepts a comma-delimited list of package specifications that take the basic form of:
```
PACKAGE;version="VERSION"
```
For example, here is how the net.solarnetwork.service package, versioned between 1.0 and 2.0, would be declared:
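```
Import-Package: net.solarnetwork.service;version="[1.0,2.0)"
```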
Direct package use means your plugin has code that imports a class from a given package. Classes in an imported package may import other packages indirectly; you do not need to import those packages as well. For example if you have code like this:
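An illustrative sketch (the `OptionalService` class here stands in for any class in the net.solarnetwork.service package):

```java
package com.example;

// importing a class from net.solarnetwork.service makes that package
// a direct dependency of this plugin
import net.solarnetwork.service.OptionalService;

public class MyComponent {

	private OptionalService<?> service;

}
```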
Then you will need to import the net.solarnetwork.service package.
Note
The SolarNode platform automatically imports core Java packages like java.* so you do not need to declare those.
Also note that in some scenarios a package used by a class in an imported package becomes a direct dependency. For example, when you extend a class from an imported package, the packages that class imports may become direct dependencies that you also need to import.
If you import a package in your plugin, any child packages that may exist are not imported as well. You must import every individual package you need to use in your plugin.
For example to use both net.solarnetwork.service and net.solarnetwork.service.support you would have an Import-Package attribute like this:
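For example (the version range on the `.support` package is an assumption here; use whatever range your plugin actually requires):

```
Import-Package: net.solarnetwork.service;version="[1.0,2.0)",
 net.solarnetwork.service.support;version="[1.0,2.0)"
```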
A plugin can export any package it provides, making the resources within that package available for other plugins to import and use. Declare exported packages with an Export-Package attribute. This attribute takes a comma-delimited list of versioned package specifications. Note that version ranges are not supported: you must declare the exact version of the package you are exporting. For example:
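```
Export-Package: net.solarnetwork.common.jdt;version="2.0.0"
```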
Exported packages should not be confused with services. Exported packages give other plugins access to the classes and any other resources within those packages, but do not provide services to the platform. You can use Blueprint to register services. Keep in mind that any service a plugin registers must exist within an exported package to be of any use.
The net.solarnetwork.node.backup.BackupManager API provides SolarNode with a modular backup system composed of Backup Services that provide storage for backup data and Backup Resource Providers that contribute data to be backed up and support restoring backed up data.
The Backup Manager coordinates the creation and restoration of backups, delegating most of its functionality to the active Backup Service. The active Backup Service can be controlled through configuration.
The Backup Manager also supports exporting and importing Backup Archives, which are just .zip archives using a defined folder structure to preserve all backup resources within a single backup.
This design of the SolarNode backup system makes it easy for SolarNode plugins to contribute resources to backups, without needing to know where or how the backup data is ultimately stored.
What goes in a Backup?
In SolarNode a Backup will contain all the critical settings that are unique to that node, such as:
The Backup Manager can be configured under the net.solarnetwork.node.backup.DefaultBackupManager configuration namespace:
| Key | Default | Description |
|-----|---------|-------------|
| `backupRestoreDelaySeconds` | `15` | A number of seconds to delay the attempt of restoring a backup, when a backup has been previously marked for restoration. This delay gives the platform time to boot up and register the backup resource providers and other services required to perform the restore. |
| `preferredBackupServiceKey` | `net.solarnetwork.node.backup.FileSystemBackupService` | The key of the preferred (active) Backup Service to use. |

## Backup
The net.solarnetwork.node.backup.Backup API defines a unique backup, created by a Backup Service. Backups are uniquely identified with a unique key assigned by the Backup Service that creates them.
A Backup does not itself provide access to any of the resources associated with the backup. Instead, the getBackupResources() method of BackupService returns them.
The Backup Manager supports exporting and importing specially formed .zip archives that contain a complete Backup. These archives are a convenient way to transfer settings from one node to another, and can be used to restore SolarNode on a new device.
The net.solarnetwork.node.backup.BackupResource API defines a unique item within a Backup. A Backup Resource could be a file, a database table, or anything that can be serialized to a stream of bytes. Backup Resources are both provided by, and restored with, Backup Resource Providers so it is up to the Provider implementation to know how to generate and then restore the Resources it manages.
The net.solarnetwork.node.backup.BackupResourceProvider API defines a service that can both generate and restore Backup Resources. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
When a Backup is created, all Backup Resource Provider services registered in SolarNode will be asked to contribute their Backup Resources, using the getBackupResources() method.
When a Backup is restored, Backup Resources will be passed to their associated Provider with the restoreBackupResource(BackupResource) method.
The net.solarnetwork.node.backup.BackupService API defines the bulk of the SolarNode backup system. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
To create a Backup, use the performBackup(Iterable<BackupResource>) method, passing in the collection of Backup Resources to include.
To list the available Backups, use the getAvailableBackups(Backup) method.
To view a single Backup, use the backupForKey(String) method.
To list the resources in a Backup, use the getBackupResources(Backup) method.
SolarNode provides the net.solarnetwork.node.backup.FileSystemBackupService default Backup Service implementation that saves Backup Archives to the node's own file system.
The net.solarnetwork.node.backup.s3 plugin provides the net.solarnetwork.node.backup.s3.S3BackupService Backup Service implementation that saves all Backup data to AWS S3.
A plugin can publish a net.solarnetwork.service.CloseableService and SolarNode will invoke the closeService() method on it when that service is destroyed. This can be useful in some situations, to make sure resources are freed when a service is no longer needed.
Blueprint does provide a destroy-method stop hook that can be used in many situations; however, Blueprint does not allow it in all cases. For example, a <bean> nested within a <service> element does not allow a destroy-method:
```xml
<service interface="com.example.MyService">
	<!-- destroy-method not allowed here: -->
	<bean class="com.example.MyComponent"/>
</service>
```
If MyComponent also implements CloseableService, then we can achieve the desired stop hook like this:
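One way to sketch this in Blueprint XML is to publish the nested bean under both interfaces, so SolarNode can invoke closeService() when the service is destroyed (an illustrative sketch, not taken verbatim from a real plugin):

```xml
<service>
	<interfaces>
		<value>com.example.MyService</value>
		<value>net.solarnetwork.service.CloseableService</value>
	</interfaces>
	<bean class="com.example.MyComponent"/>
</service>
```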
Note that in the above example CloseableService is not strictly needed, as the same effect could be achieved by un-nesting the <bean> from the <service> element, like this:
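An un-nested sketch might look like the following (the `close` destroy-method name is hypothetical; use whatever cleanup method your component provides):

```xml
<bean id="myComponent" class="com.example.MyComponent" destroy-method="close"/>

<service ref="myComponent" interface="com.example.MyService"/>
```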
There are situations where un-nesting is not possible, which is where CloseableService can be helpful.
"},{"location":"developers/services/datum-data-source-poll-job/","title":"Datum Data Source Poll Job","text":"
The DatumDataSourcePollManagedJob class is a Job Service implementation that can be used to let users schedule the generation of datum from a Datum Data Source. Typically this is configured as a Managed Service Factory so users can configure any number of job instances, each with their own settings.
Here is a typical example of a DatumDataSourcePollManagedJob, in a fictional MyDatumDataSource:
MyDatumDataSource.java · MyDatumDataSource.properties Localization · Blueprint XML
```properties
title = Super-duper Datum Data Source
desc = This managed datum data source does it all.

schedule.key = Schedule
schedule.desc = The schedule to execute the job at. \
	Can be either a number representing a frequency in <b>milliseconds</b> \
	or a <a href="{0}">cron expression</a>, for example <code>0 * * * * *</code>.

sourceId.key = Source ID
sourceId.desc = The source ID to use.

level.key = Level
level.desc = This one goes to 11.
```
The factoryUid is the same value as the getSettingUid() value in MyDatumDataSource.java.
Hiding down here is our actual data source!
Adding a service provider configuration is optional, but registers our data source as an OSGi service, in addition to the ManagedJob that the Managed Service Factory registers.
When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page and then the component settings UI will look like this:
"},{"location":"developers/services/datum-data-source/","title":"Datum Data Source","text":"
The DatumDataSource API defines the primary way for plugins to generate datum instances from devices or services integrated with SolarNode, through a request-based API. The MultiDatumDataSource API is closely related, and allows a plugin to generate multiple datum when requested.
DatumDataSource · MultiDatumDataSource
```java
package net.solarnetwork.node.service;

import net.solarnetwork.node.domain.datum.NodeDatum;
import net.solarnetwork.service.Identifiable;

/**
 * API for collecting {@link NodeDatum} objects from some device.
 */
public interface DatumDataSource extends Identifiable, DeviceInfoProvider {

	/**
	 * Get the class supported by this DataSource.
	 *
	 * @return class
	 */
	Class<? extends NodeDatum> getDatumType();

	/**
	 * Read the current value from the data source, returning as an unpersisted
	 * {@link NodeDatum} object.
	 *
	 * @return Datum
	 */
	NodeDatum readCurrentDatum();

}
```
```java
package net.solarnetwork.node.service;

import java.util.Collection;
import net.solarnetwork.node.domain.datum.NodeDatum;
import net.solarnetwork.service.Identifiable;

/**
 * API for collecting multiple {@link NodeDatum} objects from some device.
 */
public interface MultiDatumDataSource extends Identifiable, DeviceInfoProvider {

	/**
	 * Get the class supported by this DataSource.
	 *
	 * @return class
	 */
	Class<? extends NodeDatum> getMultiDatumType();

	/**
	 * Read multiple values from the data source, returning as a collection of
	 * unpersisted {@link NodeDatum} objects.
	 *
	 * @return Datum
	 */
	Collection<NodeDatum> readMultipleDatum();

}
```
The Datum Data Source Poll Job provides a way to let users schedule the polling for datum from a data source.
SolarNode has a DatumQueue service that acts as a central facility for processing all NodeDatum captured by all data source plugins deployed in the SolarNode runtime. The queue can be configured with various filters that can augment, modify, or discard the datum. The queue buffers the datum for a short amount of time and then processes them sequentially in order of time, oldest to newest.
Datum data sources that use the Datum Data Source Poll Job are polled for datum on a recurring schedule and those datum are then posted to and stored in SolarNetwork. Data sources can also offer datum directly to the DatumQueue if they emit datum based on external events. When offering datum directly, the datum can be tagged as transient and they will then still be processed by the queue but will not be posted/stored in SolarNetwork.
```java
/**
 * Offer a new datum to the queue, optionally persisting.
 *
 * @param datum
 *        the datum to offer
 * @param persist
 *        {@literal true} to persist, or {@literal false} to only pass to
 *        consumers
 * @return {@literal true} if the datum was accepted
 */
boolean offer(NodeDatum datum, boolean persist);
```
Plugins can also register observers on the DatumQueue that are notified of each datum that gets processed. The addConsumer() and removeConsumer() methods allow you to register/deregister observers:
```java
/**
 * Register a consumer to receive processed datum.
 *
 * @param consumer
 *        the consumer to register
 */
void addConsumer(Consumer<NodeDatum> consumer);

/**
 * De-register a previously registered consumer.
 *
 * @param consumer
 *        the consumer to remove
 */
void removeConsumer(Consumer<NodeDatum> consumer);
```
Each observer will receive all datum, including transient datum. An example plugin that makes use of this feature is the SolarFlux Upload Service, which posts a copy of each datum to a MQTT server.
Here is a screen shot of the datum queue settings available in the SolarNode UI:
SolarNode provides a ManagedJobScheduler service that can automatically execute jobs exported by plugins that have user-defined schedules.
The Job Scheduler uses the Task Scheduler
The Job Scheduler service uses the Task Scheduler internally, which means the number of jobs that can execute simultaneously will be limited by its thread pool configuration.
Any plugin simply needs to register a ManagedJob service for the Job Scheduler to automatically schedule and execute the job. The schedule is provided by the getSchedule() method, which can return a cron expression or a plain number representing a millisecond period.
The net.solarnetwork.node.job.SimpleManagedJob class implements ManagedJob and can be used in most situations. It delegates the actual work to a net.solarnetwork.node.job.JobService API, discussed in the next section.
Let's imagine you have a com.example.Job class that you would like to allow users to schedule. Your class would implement the JobService interface, and then you would provide a localized messages properties file and configure the service using OSGi Blueprint.
```java
package com.example;

import java.util.Collections;
import java.util.List;
import net.solarnetwork.node.job.JobService;
import net.solarnetwork.node.service.support.BaseIdentifiable;
import net.solarnetwork.settings.SettingSpecifier;

/**
 * My super-duper job.
 */
public class Job extends BaseIdentifiable implements JobService {

	@Override
	public String getSettingUid() {
		return "com.example.job"; // (1)!
	}

	@Override
	public List<SettingSpecifier> getSettingSpecifiers() {
		return Collections.emptyList(); // (2)!
	}

	@Override
	public void executeJobService() throws Exception {
		// do great stuff here!
	}
}
```
The setting UID will be configured in the Blueprint XML as well.
The SimpleManagedJob class we'll configure in Blueprint XML will automatically add a schedule setting to configure the job schedule.
```properties
title = Super-duper Job
desc = This job does it all.

schedule.key = Schedule
schedule.desc = The schedule to execute the job at. \
	Can be either a number representing a frequency in <b>milliseconds</b> \
	or a <a href="{0}">cron expression</a>, for example <code>0 * * * * *</code>.
```
The Placeholder Service API provides components a way to resolve variables in strings, known as placeholders, whose values are managed outside the component itself. For example a datum data source plugin could use the Placeholder Service to support resolving placeholders in a configurable Source ID property.
SolarNode provides a Placeholder Service implementation that resolves both dynamic placeholders from the Settings Database (using the setting namespace placeholder), and static placeholders from a configurable file or directory location.
Call the resolvePlaceholders(s, parameters) method to resolve all placeholders on the String s. The parameters argument can be used to provide additional placeholder values, or you can just pass null to rely solely on the placeholders already available in the service.
Here is an imaginary class that is constructed with an optional PlaceholderService, and then when the go() method is called uses that to resolve placeholders in the string {building}/temp and return the result:
```java
package com.example;

import net.solarnetwork.node.service.PlaceholderService;
import net.solarnetwork.service.OptionalService;

public class MyComponent {

	private final OptionalService<PlaceholderService> placeholderService;

	public MyComponent(OptionalService<PlaceholderService> placeholderService) {
		super();
		this.placeholderService = placeholderService;
	}

	public String go() {
		return PlaceholderService.resolvePlaceholders(placeholderService,
				"{building}/temp", null);
	}
}
```
To use the Placeholder Service in your component, add either an Optional Service or explicit reference to your plugin's Blueprint XML file like this (depending on what your plugin requires):
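A sketch of what this might look like (the OptionalService wrapper class shown is an assumption; SolarNode provides several helpers for wrapping references as optional services):

```xml
<!-- an explicit (mandatory) reference to the PlaceholderService -->
<reference id="placeholderService"
	interface="net.solarnetwork.node.service.PlaceholderService"/>

<bean id="myComponent" class="com.example.MyComponent">
	<argument>
		<!-- hypothetical wrapper to satisfy an OptionalService constructor -->
		<bean class="net.solarnetwork.service.StaticOptionalService">
			<argument ref="placeholderService"/>
		</bean>
	</argument>
</bean>
```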
The Placeholder Service supports the following configuration properties in the net.solarnetwork.node.core namespace:
| Property | Default | Description |
|----------|---------|-------------|
| `placeholders.dir` | `${CONF_DIR}/placeholders.d` | Path to a single properties file, or to a directory of properties files, to load as static placeholder parameter values when SolarNode starts up. |

# Settings Database
The SolarNode runtime provides a local SQL database that is used to hold application settings, data sampled from devices, or anything really. Some data is designed to live only in this local store (such as settings) while other data eventually gets pushed up into the SolarNet cloud. This document describes the most common aspects of the local database.
The database is provided by either the H2 or Apache Derby embedded SQL database engine.
Note
In SolarNodeOS the solarnode-app-db-h2 and solarnode-app-db-derby packages provide the H2 and Derby database implementations. Most modern SolarNode deployments use H2.
Typically the database is configured to run entirely within RAM on devices that support it, and the RAM copy is periodically synced to non-volatile media so if the device restarts the persisted copy of the database can be loaded back into RAM. This pattern works well because:
Non-volatile media access can be slow (e.g. flash memory)
Non-volatile media can wear out over time from many writes (e.g. flash memory)
Aside from settings, which change infrequently, most data stays locally only a short time before getting pushed into the SolarNet cloud.
A standard JDBC stack is available and normal SQL queries are used to access the database. The Hikari JDBC connection pool provides a javax.sql.DataSource for direct JDBC access. The pool is configured by factory configuration files in the net.solarnetwork.jdbc.pool.hikari namespace. See the net.solarnetwork.jdbc.pool.hikari-solarnode.cfg as an example.
To make use of the DataSource from a plugin using OSGi Blueprint you can declare a reference like this:
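A sketch of such a reference (the `(db=node)` service filter is an assumption based on common SolarNode conventions; check the service properties in your runtime):

```xml
<reference id="dataSource" interface="javax.sql.DataSource"
	filter="(db=node)"/>
```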
To support asynchronous task execution, SolarNode makes several thread-pool based services available to plugins:
A java.util.concurrent.Executor service for standard Runnable task execution
A Spring TaskExecutor service for Runnable task execution
A Spring AsyncTaskExecutor service for both Runnable and Callable task execution
A Spring AsyncListenableTaskExecutor service for both Runnable and Callable task execution that supports the org.springframework.util.concurrent.ListenableFuture API
Need to schedule tasks?
See the Task Scheduler page for information on scheduling simple tasks, or the Job Scheduler page for information on scheduling managed jobs.
To make use of any of these services from a plugin using OSGi Blueprint you can declare a reference to them like this:
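For example, a sketch of a reference to the Spring TaskExecutor service (the `(function=node)` filter is an assumption; adjust it to match the service properties in your runtime):

```xml
<reference id="taskExecutor"
	interface="org.springframework.core.task.TaskExecutor"
	filter="(function=node)"/>
```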
"},{"location":"developers/services/task-executor/#thread-pool-configuration","title":"Thread pool configuration","text":"
This thread pool is configured as a fixed-size pool with the number of threads set to the number of CPU cores detected at runtime, plus one. For example on a Raspberry Pi 4 there are 4 CPU cores so the thread pool would be configured with 5 threads.
The Task Scheduler supports the following configuration properties in the net.solarnetwork.node.core namespace:
| Property | Default | Description |
|----------|---------|-------------|
| `jobScheduler.poolSize` | `10` | The number of threads to maintain in the job scheduler, and thus the maximum number of jobs that can run simultaneously. Must be set to `1` or higher. |
| `scheduler.startupDelay` | `180` | A delay in seconds after creating the job scheduler to start triggering jobs. This can be useful to give the application time to completely initialize before starting to run jobs. |
For example, to change the thread pool size to 20 and shorten the startup delay to 30 seconds, create an /etc/solarnode/services/net.solarnetwork.node.core.cfg file with the following content:
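```properties
jobScheduler.poolSize = 20
scheduler.startupDelay = 30
```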
SolarNode provides a way for plugin components to describe their user-configurable properties, called settings, to the platform. SolarNode provides a web-based UI that makes it easy for users to configure those components using a web browser. For example, here is a screen shot of the SolarNode UI showing a form for the settings of a Database Backup component:
The mechanism for components to describe themselves in this way is called the Settings API. Classes that wish to participate in this system publish metadata about their configurable properties through the Settings Provider API, and then SolarNode generates a UI form based on that metadata. Each form field in the previous example image is a Setting Specifier.
The process is similar to the built-in Settings app on iOS: iOS applications can publish configurable property definitions and the Settings app displays a UI that allows users to modify those properties.
The net.solarnetwork.settings.SettingSpecifierProvider interface defines the way a class can declare itself as a user-configurable component. The main elements of this API are:
```java
public interface SettingSpecifierProvider {

	/**
	 * Get a unique, application-wide setting ID.
	 *
	 * @return unique ID
	 */
	String getSettingUid();

	/**
	 * Get a non-localized display name.
	 *
	 * @return non-localized display name
	 */
	String getDisplayName();

	/**
	 * Get a list of {@link SettingSpecifier} instances.
	 *
	 * @return list of {@link SettingSpecifier}
	 */
	List<SettingSpecifier> getSettingSpecifiers();

}
```
The getSettingUid() method defines a unique ID for the configurable component. By convention the class or package name of the component (or a derivative of it) is used as the ID.
The getSettingSpecifiers() method returns a list of all the configurable properties of the component, as a list of Setting Specifier instances.
```java
private String username;

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>(1);

	// expose a "username" setting with a default value of "admin"
	results.add(new BasicTextFieldSettingSpecifier("username", "admin"));

	return results;
}

// settings are updated at runtime via standard setter methods
public void setUsername(String username) {
	this.username = username;
}
```
Setting values are treated as strings within the Settings API, but the methods associated with settings can accept any primitive or standard number type like int or Integer as well.
BigDecimal setting example
```java
private BigDecimal num;

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>(1);

	results.add(new BasicTextFieldSettingSpecifier("num", null));

	return results;
}

// settings will be coerced from strings into basic types automatically
public void setNum(BigDecimal num) {
	this.num = num;
}
```
Sometimes you might like to expose a simple string setting but internally treat the string as a more complex type. For example a Map could be configured using a simple delimited string like key1 = val1, key2 = val2. For situations like this you can publish a proxy setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
Delimited string to Map setting example
```java
private Map<String, String> map;

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>(1);

	// expose a "mapping" proxy setting for the map field
	results.add(new BasicTextFieldSettingSpecifier("mapping", null));

	return results;
}

public void setMapping(String mapping) {
	this.map = StringUtils.commaDelimitedStringToMap(mapping);
}
```
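SolarNode's StringUtils.commaDelimitedStringToMap performs the decoding shown above; a standalone equivalent (an illustrative sketch, not the actual implementation) might look like:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DelimitedMap {

	/** Decode "key1 = val1, key2 = val2" style strings into a Map. */
	public static Map<String, String> decode(String s) {
		Map<String, String> result = new LinkedHashMap<>();
		if (s == null || s.trim().isEmpty()) {
			return result;
		}
		for (String pair : s.split(",")) {
			// split on the first '=' only, so values may contain '='
			String[] kv = pair.split("=", 2);
			if (kv.length == 2) {
				result.put(kv[0].trim(), kv[1].trim());
			}
		}
		return result;
	}

	public static void main(String[] args) {
		System.out.println(decode("key1 = val1, key2 = val2"));
		// prints {key1=val1, key2=val2}
	}
}
```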
The net.solarnetwork.node.settings.SettingResourceHandler API defines a way for a component to import and export files uploaded to SolarNode from external sources.
A component could support importing a file using the File setting. This could be used to provide a way of configuring the component from a configuration file, such as CSV, JSON, or XML. Similarly a component could support exporting a file, generating a configuration file in another format like CSV, JSON, or XML from its current settings. For example, the Modbus Device Datum Source does exactly these things: importing and exporting a custom CSV file to make configuring the component easier.
The main part of the SettingResourceHandler API for importing files looks like this:
```java
public interface SettingResourceHandler {

	/**
	 * Get a unique, application-wide setting ID.
	 *
	 * <p>
	 * This ID must be unique across all setting resource handlers registered
	 * within the system. Generally the implementation will also be a
	 * {@link net.solarnetwork.settings.SettingSpecifierProvider} for the same
	 * ID.
	 * </p>
	 *
	 * @return unique ID
	 */
	String getSettingUid();

	/**
	 * Apply settings for a specific key from a resource.
	 *
	 * @param settingKey
	 *        the setting key, generally a
	 *        {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}
	 *        value
	 * @param resources
	 *        the resources with the settings to apply
	 * @return any setting values that should be persisted as a result of
	 *         applying the given resources (never {@literal null})
	 * @throws IOException
	 *         if any IO error occurs
	 */
	SettingsUpdates applySettingResources(String settingKey, Iterable<Resource> resources)
			throws IOException;
```
The getSettingUid() method overlaps with the Settings Provider API, and as the comments note it is typical for a Settings Provider that publishes settings like File or Text Area to also implement SettingResourceHandler.
The settingKey passed to the applySettingResources() method identifies the resource(s) being uploaded, as a single Setting Resource Handler might support multiple resources. For example a Settings Provider might publish multiple File settings, or File and Text Area settings. The settingKey is used to differentiate between each one.
Imagine a component that publishes a File setting. A typical implementation of that component would look like this (this example omits some methods for brevity):
```java
public class MyComponent implements SettingSpecifierProvider,
		SettingResourceHandler {

	private static final Logger log
			= LoggerFactory.getLogger(MyComponent.class);

	/** The resource key to identify the File setting resource. */
	public static final String RESOURCE_KEY_DOCUMENT = "document";

	@Override
	public String getSettingUid() {
		return "com.example.mycomponent";
	}

	@Override
	public List<SettingSpecifier> getSettingSpecifiers() {
		List<SettingSpecifier> results = new ArrayList<>();

		// publish a File setting tied to the RESOURCE_KEY_DOCUMENT key,
		// allowing only text files to be accepted
		results.add(new BasicFileSettingSpecifier(RESOURCE_KEY_DOCUMENT, null,
				new LinkedHashSet<>(asList(".txt", "text/*")), false));

		return results;
	}

	@Override
	public SettingsUpdates applySettingResources(String settingKey,
			Iterable<Resource> resources) throws IOException {
		if ( resources == null ) {
			return null;
		}
		if ( RESOURCE_KEY_DOCUMENT.equals(settingKey) ) {
			for ( Resource r : resources ) {
				// here we would do something useful with the resource... like
				// read into a string and log it
				String s = FileCopyUtils.copyToString(new InputStreamReader(
						r.getInputStream(), StandardCharsets.UTF_8));

				log.info("Got {} resource content: {}", settingKey, s);

				break; // only accept one file
			}
		}
		return null;
	}

}
```
The part of the Setting Resource Handler API that supports exporting setting resources looks like this:
```java
/**
 * Get a list of supported setting keys for the
 * {@link #currentSettingResources(String)} method.
 *
 * @return the set of supported keys
 */
default Collection<String> supportedCurrentResourceSettingKeys() {
	return Collections.emptyList();
}

/**
 * Get the current setting resources for a specific key.
 *
 * @param settingKey
 *        the setting key, generally a
 *        {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}
 *        value
 * @return the resources, never {@literal null}
 */
Iterable<Resource> currentSettingResources(String settingKey);
```
The supportedCurrentResourceSettingKeys() method returns a set of resource keys the component supports for exporting. The currentSettingResources() method returns the resources to export for a given key.
The SolarNode UI shows a form menu with all the available resources for all components that support the SettingResourceHandler API, and lets the user download them:
The net.solarnetwork.settings.SettingSpecifier API defines metadata for a single configurable property in the Settings API. The API looks like this:
```java
public interface SettingSpecifier {

	/**
	 * A unique identifier for the type of setting specifier this represents.
	 *
	 * <p>
	 * Generally this will be a fully-qualified interface name.
	 * </p>
	 *
	 * @return the type
	 */
	String getType();

	/**
	 * Localizable text to display with the setting's content.
	 *
	 * @return the title
	 */
	String getTitle();

}
```
This interface is very simple, and extended by more specialized interfaces that form more useful setting types.
Note
A SettingSpecifier instance is often referred to simply as a setting.
Here is a view of the class hierarchy that builds off of this interface:
Note
The SettingSpecifier API defines metadata about a configurable property, but not methods to view or change that property's value. The Settings Service provides methods for managing setting values.
The TextFieldSettingSpecifier defines a simple string-based configurable property and is the most common setting type. The setting defines a key that maps to a setter method on its associated component class. In the SolarNode UI a text field is rendered as an HTML form text input, like this:
The net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier class provides the standard implementation of this API. A standard text field setting is created like this:
```java
new BasicTextFieldSettingSpecifier("myProperty", "DEFAULT_VALUE");

// or without any default value
new BasicTextFieldSettingSpecifier("myProperty", null);
```
Tip
Setting values are generally treated as strings within the Settings API, however other basic data types such as integers and numbers can be used as well. You can also publish a "proxy" setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
For example a Map<String, String> setting could be published as a text field setting that en/decodes the Map into a delimited string value, for example name=Test, color=red.
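As a sketch of that idea (the class and accessor names are illustrative, not part of the SolarNode API), the component's accessor methods could en/decode the `Map` like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ProxySettingExample {

	private Map<String, String> info = new LinkedHashMap<>();

	// proxy getter: encode the Map as a "key=value, key=value" string,
	// suitable for publishing as a text field setting
	public String getInfoValue() {
		StringBuilder buf = new StringBuilder();
		for ( Map.Entry<String, String> e : info.entrySet() ) {
			if ( buf.length() > 0 ) {
				buf.append(", ");
			}
			buf.append(e.getKey()).append('=').append(e.getValue());
		}
		return buf.toString();
	}

	// proxy setter: decode the delimited string back into the Map
	public void setInfoValue(String value) {
		Map<String, String> m = new LinkedHashMap<>();
		if ( value != null ) {
			for ( String pair : value.split("\\s*,\\s*") ) {
				String[] kv = pair.split("\\s*=\\s*", 2);
				if ( kv.length == 2 ) {
					m.put(kv[0], kv[1]);
				}
			}
		}
		this.info = m;
	}

}
```

The setting itself would then be published as a plain text field on the `infoValue` property.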
## Secure Text Field
The BasicTextFieldSettingSpecifier can also be used for "secure" text fields where the field's content is obscured from view. In the SolarNode UI a secure text field is rendered as an HTML password form input like this:
A standard secure text field setting is created by passing a third true argument, like this:
```java
new BasicTextFieldSettingSpecifier("myProperty", "DEFAULT_VALUE", true);

// or without any default value
new BasicTextFieldSettingSpecifier("myProperty", null, true);
```
The TitleSettingSpecifier defines a simple read-only string-based configurable property. The setting defines a key that maps to a setter method on its associated component class. In the SolarNode UI the default value is rendered as plain text, like this:
The net.solarnetwork.settings.support.BasicTitleSettingSpecifier class provides the standard implementation of this API. A standard title setting is created like this:
```java
new BasicTitleSettingSpecifier("status", "Status is good.", true);
```
The TitleSettingSpecifier supports HTML markup. In the SolarNode UI the default value is rendered directly into HTML, like this:
```java
// pass `true` as the 4th argument to enable HTML markup in the status value
new BasicTitleSettingSpecifier("status", "Status is <b>good</b>.", true, true);
```
The TextAreaSettingSpecifier defines a simple string-based configurable property for a larger text value, loaded as an external file using the SettingResourceHandler API. In the SolarNode UI a text area is rendered as an HTML form text area with an associated button to upload the content, like this:
The net.solarnetwork.settings.support.BasicTextAreaSettingSpecifier class provides the standard implementation of this API. A standard text area setting is created like this:
```java
new BasicTextAreaSettingSpecifier("myProperty", "DEFAULT_VALUE");

// or without any default value
new BasicTextAreaSettingSpecifier("myProperty", null);
```
## Direct Text Area
The BasicTextAreaSettingSpecifier can also be used for "direct" text areas where the field's content is not uploaded as an external file. In the SolarNode UI a direct text area is rendered as an HTML form text area, like this:
A standard direct text area setting is created by passing a third true argument, like this:
```java
new BasicTextAreaSettingSpecifier("myProperty", "DEFAULT_VALUE", true);

// or without any default value
new BasicTextAreaSettingSpecifier("myProperty", null, true);
```
The ToggleSettingSpecifier defines a boolean configurable property. In the SolarNode UI a toggle setting is rendered as an HTML form button, like this:
The net.solarnetwork.settings.support.BasicToggleSettingSpecifier class provides the standard implementation of this API. A standard toggle setting is created like this:
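The example code is missing here; following the constructor pattern of the other setting types it can be sketched like this (the `enabled` property name and default values are illustrative):

```java
// default value of disabled
new BasicToggleSettingSpecifier("enabled", Boolean.FALSE);

// or default to enabled
new BasicToggleSettingSpecifier("enabled", Boolean.TRUE);
```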
The SliderSettingSpecifier defines a number-based configuration property with minimum and maximum values enforced, and a step limit. In the SolarNode UI a slider is rendered as an HTML widget, like this:
The net.solarnetwork.settings.support.BasicSliderSettingSpecifier class provides the standard implementation of this API. A standard Slider setting is created like this:
```java
// no default value, range between 0-11 in 0.5 increments
new BasicSliderSettingSpecifier("volume", null, 0.0, 11.0, 0.5);

// default value 5.0, range between 0-11 in 0.5 increments
new BasicSliderSettingSpecifier("volume", 5.0, 0.0, 11.0, 0.5);
```
The RadioGroupSettingSpecifier defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode UI a radio group is rendered as a set of HTML radio input form fields, like this:
The net.solarnetwork.settings.support.BasicRadioGroupSettingSpecifier class provides the standard implementation of this API. A standard RadioGroup setting is created like this:
```java
String[] vals = new String[] { "a", "b", "c" };
String[] labels = new String[] { "One", "Two", "Three" };
Map<String, String> radioValues = new LinkedHashMap<>(3);
for ( int i = 0; i < vals.length; i++ ) {
	radioValues.put(vals[i], labels[i]);
}
BasicRadioGroupSettingSpecifier radio =
		new BasicRadioGroupSettingSpecifier("option", vals[0]);
radio.setValueTitles(radioValues);
```
The MultiValueSettingSpecifier defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode UI a multi-value setting is rendered as an HTML select form field, like this:
The net.solarnetwork.settings.support.BasicMultiValueSettingSpecifier class provides the standard implementation of this API. A standard MultiValue setting is created like this:
```java
String[] vals = new String[] { "a", "b", "c" };
String[] labels = new String[] { "Option 1", "Option 2", "Option 3" };
Map<String, String> menuValues = new LinkedHashMap<>(3);
for ( int i = 0; i < vals.length; i++ ) {
	menuValues.put(vals[i], labels[i]);
}
BasicMultiValueSettingSpecifier menu = new BasicMultiValueSettingSpecifier("option",
		vals[0]);
menu.setValueTitles(menuValues);
```
The FileSettingSpecifier defines a file-based resource property, loaded as an external file using the SettingResourceHandler API. In the SolarNode UI a file setting is rendered as an HTML file input, like this:
The net.solarnetwork.node.settings.support.BasicFileSettingSpecifier class provides the standard implementation of this API. A standard file setting is created like this:
```java
// a single file only, no default content
new BasicFileSettingSpecifier("document", null,
		new LinkedHashSet<>(Arrays.asList(".txt", "text/*")), false);

// multiple files allowed, no default content
new BasicFileSettingSpecifier("document-list", null,
		new LinkedHashSet<>(Arrays.asList(".txt", "text/*")), true);
```
A Dynamic List setting allows the user to manage a list of homogeneous items, adding or subtracting items as desired. The items can be literals like strings, or arbitrary objects that define their own settings. In the SolarNode UI a dynamic list setting is rendered as a pair of HTML buttons to remove and add items, like this:
A Dynamic List is often backed by a Java Collection or array in the associated component. In addition a special size-adjusting accessor method is required, named after the setter method with Count appended. SolarNode will use this accessor to request a specific size for the dynamic list.
The dynamic list accessors can be backed by either an array or a List in the component class.
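An array-backed version of those accessors might look like the following sketch (the `NamesConfig` class name is illustrative); the key detail is the extra `getNamesCount()`/`setNamesCount()` pair, named after the `setNames()` setter with `Count` appended, that SolarNode uses to resize the list:

```java
import java.util.Arrays;

public class NamesConfig {

	private String[] names = new String[0];

	public String[] getNames() {
		return names;
	}

	public void setNames(String[] names) {
		this.names = names;
	}

	// size-adjusting accessor: report the current dynamic list size
	public int getNamesCount() {
		return names.length;
	}

	// size-adjusting accessor: SolarNode calls this to grow or shrink
	// the dynamic list, preserving existing values where possible
	public void setNamesCount(int count) {
		if ( names.length != count ) {
			names = Arrays.copyOf(names, count);
		}
	}

}
```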
The SettingUtils.dynamicListSettingSpecifier() method simplifies the creation of a GroupSettingSpecifier that represents a dynamic list (the examples in the following sections demonstrate this).
A simple Dynamic List is a dynamic list of string or number values.
```java
private String[] names = new String[0];

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>();

	// turn a list of strings into a Group of TextField settings
	GroupSettingSpecifier namesList = SettingUtils.dynamicListSettingSpecifier(
			"names", asList(names), (String value, int index, String key) ->
					singletonList(new BasicTextFieldSettingSpecifier(key, null)));
	results.add(namesList);

	return results;
}
```
A complex Dynamic List is a dynamic list of arbitrary object values. The main difference in terms of the necessary settings structure required, compared to a Simple Dynamic List, is that a group-of-groups is used.
The complex data class and its corresponding Dynamic List setting look like this:
```java
public class Person {

	private String firstName;
	private String lastName;

	// generate list of settings for a Person, nested under some prefix
	public List<SettingSpecifier> settings(String prefix) {
		List<SettingSpecifier> results = new ArrayList<>(2);
		results.add(new BasicTextFieldSettingSpecifier(prefix + "firstName", null));
		results.add(new BasicTextFieldSettingSpecifier(prefix + "lastName", null));
		return results;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

}
```
```java
private Person[] people = new Person[0];

@Override
public List<SettingSpecifier> getSettingSpecifiers() {
	List<SettingSpecifier> results = new ArrayList<>();

	// turn a list of People into a Group of Group settings
	GroupSettingSpecifier peopleList = SettingUtils.dynamicListSettingSpecifier(
			"people", asList(people), (Person value, int index, String key) ->
					singletonList(new BasicGroupSettingSpecifier(
							value.settings(key + "."))));
	results.add(peopleList);

	return results;
}
```
Some SolarNode components can be configured from properties files. This type of configuration is meant to be changed just once, when a SolarNode is first deployed, to alter some default configuration value.
Not to be confused with Settings
This type of configuration differs from the Settings that the Setup App provides a UI to manage. This configuration might be created by system administrators when creating a custom SolarNodeOS image for their needs, while Settings are meant to be managed by end users.
Configuration properties files are read from the /etc/solarnode/services directory and named like `NAMESPACE.cfg`, where `NAMESPACE` represents a configuration namespace.
Configuration location
The /etc/solarnode/services location is the default location in SolarNodeOS. It might be another location in other SolarNode deployments.
Imagine a component uses the configuration namespace com.example.service and supports a configurable property named max-threads that accepts an integer value you would like to configure as 4. You would create a com.example.service.cfg file like:
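Given the hypothetical com.example.service namespace above, the file would contain a simple key-value line:

```
max-threads = 4
```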
In SolarNetwork a datum is the fundamental time-stamped data structure collected by SolarNodes and stored in SolarNet. It is a collection of properties associated with a specific information source at a specific time.
Example plain language description of a datum
the temperature and humidity collected from my weather station at 1 Jan 2023 11:00 UTC
In this example datum description, we have all the components of a datum:
| Datum component | Description |
|-----------------|-------------|
| node | the (implied) node that collected the data |
| properties | temperature and humidity |
| source | my weather station |
| time | 1 Jan 2023 11:00 UTC |
A datum stream is the collection of datum from a single node for a single source over time.
A datum object is modeled as a flexible structure with the following core elements:
| Element | Type | Description |
|---------|------|-------------|
| `nodeId` | number | A unique ID assigned to nodes by SolarNetwork. |
| `sourceId` | string | A node-unique identifier that defines a single stream of data from a specific source, up to 64 characters long. Certain characters are not allowed; see below. |
| `created` | date | A time stamp of when the datum was collected, or the date the datum is associated with. |
| `samples` | datum samples | The collected properties. |
A datum is uniquely identified by the three combined properties (nodeId, sourceId, created).
Source IDs are user-defined strings used to distinguish between different information sources within a single node. For example, a node might collect data from an energy meter on source ID Meter and a solar inverter on Solar. SolarNetwork does not place any restrictions on source ID values, other than a 64-character limit. However, there are some conventions used within SolarNetwork that are useful to follow, especially for larger deployments of nodes with many source IDs:
Keep IDs short; for example, Meter1 is better than Schneider ION6200 Meter - Main Building.
Use a path-like structure to encode a logical hierarchy, in least specific to most specific order. For example /S1/B1/M1 could imply the first meter in the first building on the first site.
The + and # characters should not be used. This is actually a constraint in the MQTT protocol used in parts of SolarNetwork, where the MQTT topic name includes the source ID. These characters are MQTT topic filter wildcards, and cannot be used in topic names.
Avoid using wildcard special characters.
The path-like structure becomes useful in places where wildcard patterns are used, like security policies or datum queries. It is generally worthwhile to spend some time planning a source ID taxonomy when starting a new project with SolarNetwork.
The properties included in a datum object are known as datum samples. The samples are modeled as a collection of named properties, for example the temperature and humidity properties in the earlier example datum could be represented like this:
Example representation of datum samples from a weather station source
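A minimal sketch of such a sample collection, expressed as JSON (the property names and values are illustrative):

```json
{
  "temp"     : 21.5,
  "humidity" : 68
}
```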
The datum samples are actually further organized into three classifications:
| Classification | Key | Description |
|----------------|-----|-------------|
| instantaneous | `i` | a single reading, observation, or measurement that does not accumulate over time |
| accumulating | `a` | a reading that accumulates over time, like a meter or odometer |
| status | `s` | non-numeric data, like status codes or error messages |
These classifications help SolarNetwork understand how to aggregate the datum samples over time. When SolarNode uploads a datum to SolarNetwork, the sample will include the classification of each property. The previous example would thus more accurately be represented like this:
Example representation of datum samples with classifications
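A sketch of the same idea with the `i`, `a`, and `s` classification keys included (the property values are illustrative):

```json
{
  "i" : { "watts" : 123 },
  "a" : { "wattHours" : 987654 },
  "s" : { "mode" : "auto" }
}
```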
watts is an instantaneous measurement of power that does not accumulate
wattHours is an accumulating measurement of the accrual of energy over time
mode is a status message that is not a number
Note
Sometimes these classifications will be hidden from you. For example SolarNetwork hides them when returning datum data from some SolarNetwork API methods. You might come across them in some SolarNode plugins that allow configuring dynamic sample properties to collect, when SolarNode does not implicitly know which classification to use. Some SolarNetwork APIs do return or require fully classified sample objects; the documentation for those services will make that clear.
Many SolarNode components support a general "expressions" framework that can be used to calculate values using a scripting language. SolarNode comes with the Spel scripting language by default, so this guide describes that language.
A common use case for expressions is to derive datum property values out of the raw property values captured from a device. In the SolarNode Setup App a typical datum data source component might present a configurable list of expression settings like this:
In this example, each time the data source captures a datum from the device it is communicating with, it will add a new watts property by multiplying the captured amps and volts property values. In essence, the expression is like this code:
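That idea can be sketched in plain Java (the `evaluate` method and map-based datum representation are illustrative, not SolarNode API):

```java
import java.util.HashMap;
import java.util.Map;

public class ExpressionSketch {

	// equivalent of configuring an `amps * volts` expression on a `watts` property
	public static Map<String, Number> evaluate(Map<String, Number> captured) {
		Map<String, Number> result = new HashMap<>(captured);
		double watts = captured.get("amps").doubleValue()
				* captured.get("volts").doubleValue();
		result.put("watts", watts);
		return result;
	}

}
```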
Many SolarNode expressions are evaluated in the context of a datum, typically one captured from a device SolarNode is collecting data from. In this context, the expression supports accessing datum properties directly as expression variables, and some helpful functions are provided.
All datum properties with simple names can be referred to directly as variables. Here simple just means a name that is also a legal variable name. The property classifications do not matter in this context: the expression will look for properties in all classifications.
A datum expression will also provide the following variables:
| Property | Type | Description |
|----------|------|-------------|
| `datum` | `Datum` | A `Datum` object, in case you need direct access to the functions provided there. |
| `meta` | `DatumMetadataOperations` | Get datum metadata for the current source ID. |
| `parameters` | `Map<String,Object>` | Simple map-based access to all parameters passed to the expression. The available parameters depend on the context of the expression evaluation, but often include things like placeholder values or parameters generated by previously evaluated expressions. These values are also available directly as variables. |
| `props` | `Map<String,Object>` | Simple map-based access to all properties in `datum`. As datum properties are also available directly as variables, this is rarely needed but can be helpful for accessing dynamically-calculated property names or properties with names that are not legal variable names. |
| `sourceId` | `String` | The source ID of the current datum. |

## Functions
Some functions are provided to help with datum-related expressions.
The following functions help with bitwise integer manipulation operations:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `and(n1,n2)` | Number, Number | Number | Bitwise and, i.e. `(n1 & n2)` |
| `andNot(n1,n2)` | Number, Number | Number | Bitwise and-not, i.e. `(n1 & ~n2)` |
| `narrow(n,s)` | Number, Number | Number | Return `n` as a reduced-size but equivalent number of a minimum power-of-two byte size `s` |
| `narrow8(n)` | Number | Number | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 8 bits |
| `narrow16(n)` | Number | Number | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 16 bits |
| `narrow32(n)` | Number | Number | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 32 bits |
| `narrow64(n)` | Number | Number | Return `n` as a reduced-size but equivalent number narrowed to a minimum of 64 bits |
| `not(n)` | Number | Number | Bitwise not, i.e. `(~n)` |
| `or(n1,n2)` | Number, Number | Number | Bitwise or, i.e. `(n1 \| n2)` |
| `shiftLeft(n,c)` | Number, Number | Number | Bitwise shift left, i.e. `(n << c)` |
| `shiftRight(n,c)` | Number, Number | Number | Bitwise shift right, i.e. `(n >> c)` |
| `testBit(n,i)` | Number, Number | boolean | Test if bit `i` is set in integer `n`, i.e. `((n & (1 << i)) != 0)` |
| `xor(n1,n2)` | Number, Number | Number | Bitwise xor, i.e. `(n1 ^ n2)` |
Tip
All number arguments will be converted to BigInteger values for the bitwise operations, and BigInteger values are returned.
The following functions deal with datum streams. The latest() and offset() functions give you access to recently-captured datum from any SolarNode source, so you can refer to any datum stream being generated in SolarNode. They return another datum expression root object, which means you have access to all the variables and functions documented on this page with them as well.
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `hasLatest(source)` | String | boolean | Returns `true` if a datum with source ID `source` is available via the `latest(source)` function. |
| `hasLatestMatching(pattern)` | String | boolean | Returns `true` if `latestMatching(pattern)` returns a non-empty collection. |
| `hasLatestOtherMatching(pattern)` | String | boolean | Returns `true` if `latestOthersMatching(pattern)` returns a non-empty collection. |
| `hasMeta()` | | boolean | Returns `true` if metadata for the current source ID is available. |
| `hasMeta(source)` | String | boolean | Returns `true` if `datumMeta(source)` would return a non-`null` value. |
| `hasOffset(offset)` | int | boolean | Returns `true` if a datum is available via the `offset(offset)` function. |
| `hasOffset(source,offset)` | String, int | boolean | Returns `true` if a datum with source ID `source` is available via the `offset(source,int)` function. |
| `latest(source)` | String | DatumExpressionRoot | Provides access to the latest available datum matching the given source ID, or `null` if not available. This is a shortcut for calling `offset(source,0)`. |
| `latestMatching(pattern)` | String | Collection<DatumExpressionRoot> | Return a collection of the latest available datum matching a given source ID wildcard pattern. |
| `latestOthersMatching(pattern)` | String | Collection<DatumExpressionRoot> | Return a collection of the latest available datum matching a given source ID wildcard pattern, excluding the current datum if its source ID happens to match the pattern. |
| `meta(source)` | String | DatumMetadataOperations | Get datum metadata for a specific source ID. |
| `metaMatching(pattern)` | String | Collection<DatumMetadataOperations> | Find datum metadata for sources matching a given source ID wildcard pattern. |
| `offset(offset)` | int | DatumExpressionRoot | Provides access to a datum from the same stream as the current datum, offset by `offset` in time, or `null` if not available. Offset 1 means the datum just before this datum, and so on. |
| `offset(source,offset)` | String, int | DatumExpressionRoot | Provides access to an offset from the latest available datum matching the given source ID, or `null` if not available. Offset 0 represents the "latest" datum, 1 the one before that, and so on. SolarNode only maintains a limited history for each source, so do not rely on more than a few datum being available via this method. This history is also cleared when SolarNode restarts. |
| `selfAndLatestMatching(pattern)` | String | Collection<DatumExpressionRoot> | Return a collection of the latest available datum matching a given source ID wildcard pattern, including the current datum. The current datum will always be the first datum returned. |

## Math functions
Expressions support basic math operators like + for addition and * for multiplication. The following functions help with other math operations:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `avg(collection)` | Collection<Number> | Number | Calculate the average (mean) of a collection of numbers. Useful when combined with the `group(pattern)` function. |
| `ceil(n)` | Number | Number | Round a number larger, to the nearest integer. |
| `ceil(n,significance)` | Number, Number | Number | Round a number larger, to the nearest integer multiple of `significance`. |
| `down(n)` | Number | Number | Round numbers towards zero, to the nearest integer. |
| `down(n,significance)` | Number, Number | Number | Round numbers towards zero, to the nearest integer multiple of `significance`. |
| `floor(n)` | Number | Number | Round a number smaller, to the nearest integer. |
| `floor(n,significance)` | Number, Number | Number | Round a number smaller, to the nearest integer multiple of `significance`. |
| `max(collection)` | Collection<Number> | Number | Return the largest value from a set of numbers. |
| `max(n1,n2)` | Number, Number | Number | Return the larger of two numbers. |
| `min(collection)` | Collection<Number> | Number | Return the smallest value from a set of numbers. |
| `min(n1,n2)` | Number, Number | Number | Return the smaller of two numbers. |
| `mround(n,significance)` | Number, Number | Number | Round a number to the nearest integer multiple of `significance`. |
| `round(n)` | Number | Number | Round a number to the nearest integer. |
| `round(n,digits)` | Number, Number | Number | Round a number to the nearest number with `digits` decimal digits. |
| `roundDown(n,digits)` | Number, Number | Number | Round a number towards zero to the nearest number with `digits` decimal digits. |
| `roundUp(n,digits)` | Number, Number | Number | Round a number away from zero to the nearest number with `digits` decimal digits. |
| `sum(collection)` | Collection<Number> | Number | Calculate the sum of a collection of numbers. Useful when combined with the `group(pattern)` function. |
| `up(n)` | Number | Number | Round numbers away from zero, to the nearest integer. |
| `up(n,significance)` | Number, Number | Number | Round numbers away from zero, to the nearest integer multiple of `significance`. |

## Node metadata functions
All the Datum Metadata functions like metadataAtPath(path) can be invoked directly, operating on the node's own metadata instead of a datum stream's metadata.
The following functions deal with general SolarNode operations:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `isOpMode(mode)` | String | boolean | Returns `true` if the `mode` operational mode is active. |

## Property functions
The following functions help with expression properties (variables):
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `has(name)` | String | boolean | Returns `true` if a property named `name` is defined. Can be used to prevent expression errors on datum property variables that are missing. |
| `group(pattern)` | String | Collection<Number> | Creates a collection out of numbered properties whose name matches the given regular expression `pattern`. |

## Expression examples
Let's assume a captured datum like this, expressed as JSON:
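A datum consistent with the example results below, sketched as JSON with the `i` and `s` classification keys:

```json
{
  "i" : { "amps" : 4.2, "volts" : 240.0 },
  "s" : { "state" : "Ok" }
}
```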
Then here are some example Spel expressions and the results they would produce:
| Expression | Result | Comment |
|------------|--------|---------|
| `state` | `Ok` | Returns the `state` status property directly. |
| `datum.s['state']` | `Ok` | Returns the `state` status property explicitly. |
| `props['state']` | `Ok` | Same result as `datum.s['state']` but using the short-cut `props` accessor. |
| `amps * volts` | `1008.0` | Returns the result of multiplying the `amps` and `volts` properties together: 4.2 × 240.0 = 1008.0. |

## Datum stream history
Building on the previous example datum, let's assume an earlier datum for the same source ID had been collected with these properties (the classifications have been omitted for brevity):
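Such an earlier datum might look like this, sketched as JSON (the `amps` value matches the example results below; the `volts` value is illustrative):

```json
{ "amps" : 3.1, "volts" : 240.0 }
```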
Then here are some example expressions and the results they would produce given the original datum example:
| Expression | Result | Comment |
|------------|--------|---------|
| `hasOffset(1)` | `true` | Returns `true` because of the earlier datum that is available. |
| `hasOffset(2)` | `false` | Returns `false` because only one earlier datum is available. |
| `amps - offset(1).amps` | `1.1` | Computes the difference between the current and previous `amps` properties, which is 4.2 - 3.1 = 1.1. |

## Other datum stream history
Other datum stream histories collected by SolarNode can also be accessed via the offset(source,offset) function. Let's assume SolarNode is collecting a datum stream for the source ID solar, and had amassed the following history, in newest-to-oldest order:
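Such a history might be sketched as JSON like this (the first, latest entry matches the example results below; the second entry is illustrative):

```json
[
  { "amps" : 6.0, "volts" : 240.0 },
  { "amps" : 5.5, "volts" : 240.0 }
]
```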
Then here are some example expressions and the results they would produce given the original datum example:
| Expression | Result | Comment |
|------------|--------|---------|
| `hasLatest('solar')` | `true` | Returns `true` because a datum for source `solar` is available. |
| `hasOffset('solar',2)` | `false` | Returns `false` because only one earlier datum from the latest with source `solar` is available. |
| `(amps * volts) - (latest('solar').amps * latest('solar').volts)` | `432.0` | Computes the difference in power between the latest `solar` datum and the current datum, which is (6.0 × 240.0) - (4.2 × 240.0) = 432.0. |
If we add another datum stream for the source ID solar1 like this:
```json
[
  { "amps" : 1.0, "volts" : 240.0 }
]
```
If we also add another datum stream for the source ID solar2 like this:
```json
[
  { "amps" : 3.0, "volts" : 240.0 }
]
```
Then here are some example expressions and the results they would produce given the previous datum examples:
| Expression | Result | Comment |
|------------|--------|---------|
| `sum(latestMatching('solar*').?[amps>1].![amps * volts])` | `2160` | Returns the sum power of the latest `solar` and `solar2` datum. The `solar1` power is omitted because its `amps` property is not greater than 1, so we end up with (6 × 240) + (3 × 240) = 2160. |

## Datum metadata
Some functions return DatumMetadataOperations objects. These objects provide metadata for things like a specific source ID on SolarNode.
The properties available on datum metadata objects are:
| Property | Type | Description |
|----------|------|-------------|
| `empty` | boolean | Is `true` if the metadata does not contain any values. |
| `info` | Map<String,Object> | Simple map-based access to the general metadata (e.g. the keys of the `m` metadata map). |
| `infoKeys` | Set<String> | The set of general metadata keys available (e.g. the keys of the `m` metadata map). |
| `propertyInfoKeys` | Set<String> | The set of property metadata keys available (e.g. the keys of the `pm` metadata map). |
| `tags` | Set<String> | A set of tags associated with the metadata. |

## Datum metadata general info functions
The following functions available on datum metadata objects support access to the general metadata (e.g. the m metadata map):
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `getInfo(key)` | String | Object | Get the general metadata value for a specific key. |
| `getInfoNumber(key)` | String | Number | Get a general metadata value for a specific key as a Number. Other more specific number value functions are also available, such as `getInfoInteger(key)` or `getInfoBigDecimal(key)`. |
| `getInfoString(key)` | String | String | Get a general metadata value for a specific key as a String. |
| `hasInfo(key)` | String | boolean | Returns `true` if a non-null general metadata value exists for the given key. |

## Datum metadata property info functions
The following functions available on datum metadata objects support access to the property metadata (e.g. the pm metadata map):
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `getPropertyInfo(prop)` | String | Map<String,Object> | Get the property metadata for a specific property. |
| `getInfoNumber(prop,key)` | String, String | Number | Get a property metadata value for a specific property and key as a Number. Other more specific number value functions are also available, such as `getInfoInteger(prop,key)` or `getInfoBigDecimal(prop,key)`. |
| `getInfoString(prop,key)` | String, String | String | Get a property metadata value for a specific property and key as a String. |
| `hasInfo(prop,key)` | String, String | boolean | Returns `true` if a non-null property metadata value exists for the given property and key. |

## Datum metadata global functions
The following functions available on datum metadata objects support access to both general and property metadata:
| Function | Arguments | Result | Description |
|----------|-----------|--------|-------------|
| `differsFrom(metadata)` | DatumMetadataOperations | boolean | Returns `true` if the given metadata has any different values than the receiver. |
| `hasTag(tag)` | String | boolean | Returns `true` if the given tag is available. |
| `metadataAtPath(path)` | String | Object | Get the metadata value at a metadata key path. |
| `hasMetadataAtPath(path)` | String | boolean | Returns `true` if `metadataAtPath(path)` would return a non-null value. |

# Getting Started
This section describes how to get SolarNode running on a device. You will need to configure your device as a SolarNode and associate your SolarNode with SolarNetwork.
Tip
You might find it helpful to read through this entire guide before jumping in. There are screen shots and tips provided to help you along the way.
## Get your device ready to use
SolarNode can run on a variety of devices. To get started using SolarNode, you must download the appropriate SolarNodeOS image for your device. SolarNodeOS is a complete operating system tailor made for SolarNode. Choose the SolarNodeOS image for the device you want to run SolarNode on and then copy that image to your device media (typically an SD card).
"},{"location":"users/getting-started/#choose-your-device","title":"Choose your device","text":"Raspberry PiOrange PiSomething Else
The Raspberry Pi is the best supported option for general SolarNode deployments. Models 3 or later, Compute Module 3 or later, and Zero 2 W or later are supported. Use a tool like Etcher or Raspberry Pi Imager to copy the image to an SD card (minimum size is 2 GB, 4 GB recommended).
Download SolarNodeOS for Raspberry Pi
The Orange Pi models Zero and Zero Plus are supported. Use a tool like Etcher to copy the image to an SD card (minimum size is 1 GB, 4 GB recommended).
Download SolarNodeOS for Orange Pi
Looking for SolarNodeOS for a device not listed here? Reach out to us through email or Slack to see if we can help!
"},{"location":"users/getting-started/#configure-your-network","title":"Configure your network","text":"
SolarNode needs a network connection. If your device has an ethernet port, that is the most reliable way to get started: just plug in your ethernet cable and off you go!
If you want to use WiFi, or would like more detailed information about SolarNode's networking options, see the Networking sections.
"},{"location":"users/getting-started/#power-it-on","title":"Power it on","text":"
Insert your SD card (or other device media) into your device, and power it on. While it starts up, proceed with the next steps.
"},{"location":"users/getting-started/#associate-your-solarnode-with-solarnetwork","title":"Associate your SolarNode with SolarNetwork","text":"
Every SolarNode must be associated (registered) with a SolarNetwork account. To associate a SolarNode, you must:
Log into SolarNetwork
Generate an invitation for a new SolarNode
Accept the invitation on SolarNode
"},{"location":"users/getting-started/#log-into-solarnetwork","title":"Log into SolarNetwork","text":"
If you do not already have a SolarNetwork account, register for one and then log in.
"},{"location":"users/getting-started/#generate-a-solarnode-invitation","title":"Generate a SolarNode invitation","text":"
Click on the My Nodes link. You will see an Invite New SolarNode button, like this:
Click the Invite New SolarNode button, then fill in and submit the form that appears and select your time zone by clicking on the world map:
The generated SolarNode invitation will appear next.
Select and copy the entire invitation. You will need to paste that into the SolarNode setup screen in the next section.
"},{"location":"users/getting-started/#accept-the-invitation-on-solarnode","title":"Accept the invitation on SolarNode","text":"
Open the SolarNode Setup app in your browser. The URL to use might be http://solarnode/ or it might be an IP address like http://192.168.1.123. See the Networking section for more information. You will be greeted with an invitation acceptance form into which you can paste the invitation you generated in SolarNetwork. The acceptance process goes through the following steps:
Submit the invitation in the acceptance form
Preview the invitation details
Confirm the invitation
Acceptance formPreviewConfirmComplete
First you submit the invitation in the acceptance form.
Next you preview the invitation details.
Note
The expected SolarNetwork Service value shown in this step will be in.solarnetwork.net.
Finally, confirm the invitation. This step contacts SolarNetwork and completes the association process.
Warning
Ensure you provide a Certificate Password on this step, so SolarNetwork can generate a security certificate for your SolarNode.
When these steps are completed, SolarNetwork will have assigned your SolarNode a unique identifier, known as your Node ID. A random SolarNode login password will also have been generated; you are given the opportunity to easily change it if you prefer.
Logging in SolarNode is configured in the /etc/solarnode/log4j2.xml file, which is in the log4j configuration format. The default configuration in SolarNodeOS sets the overall verbosity to INFO and logs to a temporary storage area /run/solarnode/log/solarnode.log.
Log messages have the following general properties:
Component Example Description Timestamp 2022-03-15 09:05:37,029 The date/time the message was generated. Note the format of the timestamp depends on the logging configuration; the SolarNode default is shown in this example. Level INFO The severity/verbosity of the message (as determined by the developer). This is an enumeration, and from least-to-most severe: TRACE, DEBUG, INFO, WARN, ERROR. The level of a given logger allows messages with that level or higher to be logged, while lower levels are skipped. The default SolarNode configuration sets the overall level to INFO, so TRACE and DEBUG messages are not logged. Logger ModbusDatumDataSource A category or namespace associated with the message. Most commonly these equate to Java class names, but they can be any value, as determined by the developer. Periods in the logger name act as a delimiter, forming a hierarchy that can be tuned to log at different levels. For example, given the default INFO level, configuring the net.solarnetwork.node.io.modbus logger to DEBUG would turn on debug-level logging for all loggers in the Modbus IO namespace. Note that the default SolarNode configuration logs just a fixed number of the last characters of the logger name. This can be changed in the configuration to log more (or all) of the name, as desired. Message Error reading from device. The message itself, determined by the developer. Exception Some messages include an exception stack trace, which shows the runtime call tree where the exception occurred."},{"location":"users/logging/#logger-namespaces","title":"Logger namespaces","text":"
The Logger component outlined in the previous section allows a lot of flexibility to configure what gets logged in SolarNode. Setting the level on a given namespace impacts that namespace as well as all namespaces beneath it, meaning all other loggers that share the same namespace prefix.
For example, imagine the following two loggers exist in SolarNode:
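For instance, two hypothetical logger names (the class names here are invented for illustration):

```
net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork
net.solarnetwork.node.io.modbus.tcp.TcpModbusNetwork
```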
Given the default configuration sets the default level to INFO, we can turn on DEBUG logging for both of these by adding a <Logger> line like the following within the <Loggers> element:
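A sketch of what that <Logger> line could look like, following standard log4j2 configuration syntax:

```xml
<Logger name="net.solarnetwork.node.io.modbus" level="debug"/>
```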
That turns on DEBUG for both loggers because they are both children of the net.solarnetwork.node.io.modbus namespace. We could turn on TRACE logging for one of them like this:
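For example, narrowing the TRACE level to just the serial sub-namespace could look like:

```xml
<Logger name="net.solarnetwork.node.io.modbus.serial" level="trace"/>
```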
That would also turn on TRACE for any other loggers in the net.solarnetwork.node.io.modbus.serial namespace. You can limit the configuration all the way down to a full logger name if you like, for example:
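For instance, using a hypothetical SerialModbusNetwork class name for illustration:

```xml
<Logger name="net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork" level="trace"/>
```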
The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file. See the Setup App / Settings / Logging page for more information.
The default SolarNode configuration automatically rotates log files based on size, and limits the number of historic log files kept around, so that its associated storage space is not filled up. When a log file reaches the size limit, it is renamed to include a -i.log suffix, where i is an offset from the current log. The default configuration sets the maximum log size to 1 MB and limits the number of historic files to 3.
You can also adjust how much history is saved by tweaking the <SizeBasedTriggeringPolicy> and <DefaultRolloverStrategy> configuration. For example to change to a limit of 9 historic files of at most 5 MB each, the configuration would look like this:
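A sketch of those two elements with that change applied (standard log4j2 syntax; the surrounding appender configuration is omitted):

```xml
<Policies>
  <SizeBasedTriggeringPolicy size="5 MB"/>
</Policies>
<DefaultRolloverStrategy max="9"/>
```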
By default SolarNode logs to temporary (RAM) storage that is discarded when the node reboots. The configuration can be changed so that logs are written directly to persistent storage if you would like to have the logs persisted across reboots, or would like to preserve more log history than can be stored in the temporary storage area.
To make this change, update the <RollingFile> element's fileName and/or filePattern attributes to point to a persistent filesystem. SolarNode already has write permission to the /var/lib/solarnode/var directory, so an easy location to use is /var/lib/solarnode/var/log, like this:
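A sketch of the updated appender; the appender name and layout pattern are illustrative assumptions, not the exact SolarNodeOS defaults:

```xml
<RollingFile name="File"
    fileName="/var/lib/solarnode/var/log/solarnode.log"
    filePattern="/var/lib/solarnode/var/log/solarnode-%i.log">
  <PatternLayout pattern="%d{DEFAULT} %-5p %40.40c; %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="1 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="3"/>
</RollingFile>
```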
This configuration can add a lot of stress to the node's storage medium, and may shorten its useful life. Consumer-grade SD cards in particular can fail quickly if SolarNode is writing a lot of information, such as verbose logging. Use this configuration with caution.
"},{"location":"users/logging/#logging-example-split-across-multiple-files","title":"Logging example: split across multiple files","text":"
Sometimes it can be useful to turn on verbose logging for some area of SolarNode, but have those messages go to a different file so they don't clog up the main solarnode.log file. This can be done by configuring additional appender configurations.
The following example logging configuration creates the following log files:
/var/log/solarnode/solarnode.log - the main log
/var/log/solarnode/filter.log - filter logging
/var/log/solarnode/mqtt-solarin.log - MQTT wire logging to SolarIn
/var/log/solarnode/mqtt-solarflux.log - MQTT wire logging to SolarFlux
First you must create the /var/log/solarnode directory and give SolarNode permission to write there:
sudo mkdir /var/log/solarnode\nsudo chgrp solar /var/log/solarnode\nsudo chmod g+w /var/log/solarnode\n
Then edit the /etc/solarnode/log4j2.xml file to hold the following (adjust according to your needs):
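A sketch of such a configuration, using the appender names and logger namespaces described below; the layout patterns, levels, and rollover settings are illustrative assumptions you should tune to your needs:

```xml
<Configuration status="warn">
  <Appenders>
    <!-- Main application log -->
    <RollingFile name="File" fileName="/var/log/solarnode/solarnode.log"
        filePattern="/var/log/solarnode/solarnode-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %-5p %40.40c; %m%n"/>
      <Policies><SizeBasedTriggeringPolicy size="1 MB"/></Policies>
      <DefaultRolloverStrategy max="3"/>
    </RollingFile>
    <!-- Datum filter log -->
    <RollingFile name="Filter" fileName="/var/log/solarnode/filter.log"
        filePattern="/var/log/solarnode/filter-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %-5p %40.40c; %m%n"/>
      <Policies><SizeBasedTriggeringPolicy size="1 MB"/></Policies>
      <DefaultRolloverStrategy max="3"/>
    </RollingFile>
    <!-- SolarIn MQTT wire log -->
    <RollingFile name="MQTT" fileName="/var/log/solarnode/mqtt-solarin.log"
        filePattern="/var/log/solarnode/mqtt-solarin-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %m%n"/>
      <Policies><SizeBasedTriggeringPolicy size="1 MB"/></Policies>
      <DefaultRolloverStrategy max="3"/>
    </RollingFile>
    <!-- SolarFlux MQTT wire log -->
    <RollingFile name="Flux" fileName="/var/log/solarnode/mqtt-solarflux.log"
        filePattern="/var/log/solarnode/mqtt-solarflux-%i.log" immediateFlush="false">
      <PatternLayout pattern="%d{DEFAULT} %m%n"/>
      <Policies><SizeBasedTriggeringPolicy size="1 MB"/></Policies>
      <DefaultRolloverStrategy max="3"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Logger name="net.solarnetwork.node.datum.filter" level="trace" additivity="false">
      <AppenderRef ref="Filter"/>
    </Logger>
    <Logger name="net.solarnetwork.mqtt.queue" level="trace" additivity="false">
      <AppenderRef ref="MQTT"/>
    </Logger>
    <Logger name="net.solarnetwork.mqtt.influx" level="trace" additivity="false">
      <AppenderRef ref="Flux"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="File"/>
    </Root>
  </Loggers>
</Configuration>
```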
The File appender is the \"main\" application log where most logs should go.
The Filter appender is where we want net.solarnetwork.node.datum.filter messages to go.
The MQTT appender is where we want net.solarnetwork.mqtt.queue messages to go.
The Flux appender is where we want net.solarnetwork.mqtt.influx messages to go.
Here we include additivity=\"false\" and add the <AppenderRef> element that references the specific appender name we want the log messages to go to. The additivity=\"false\" attribute means the log messages will only go to the Filter appender, instead of also going to the root-level File appender.
The root-level appender is the \"default\" destination for log messages, unless overridden by a specific appender like we did for the Filter, MQTT, and Flux appenders above.
The various <AppenderRef> elements configure the appender name to write the messages to.
The various additivity=\"false\" attributes disable appender additivity which means the log message will only be written to one appender, instead of being written to all configured appenders in the hierarchy (for example the root-level appender).
The immediateFlush=\"false\" attribute turns on buffered logging, which means log messages are buffered in RAM before being flushed to disk. This is more forgiving to the disk, at the expense of a delay before the messages appear in the log file.
MQTT wire logging means the raw MQTT packets sent and received over MQTT connections will be logged in an easy-to-read but very verbose format. For MQTT wire logging to be enabled, it must be activated with a special configuration file. Create the /etc/solarnode/services/net.solarnetwork.common.mqtt.netty.cfg file with this content:
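A sketch of that file's content; the wireLogging property name is an assumption about the Netty MQTT plugin's settings, so verify it against your plugin version:

```
wireLogging = true
```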
MQTT wire logs use a namespace prefix net.solarnetwork.mqtt. followed by the connection's host name or IP address and port. For example SolarIn messages would use net.solarnetwork.mqtt.queue.solarnetwork.net:8883 and SolarFlux messages would use net.solarnetwork.mqtt.influx.solarnetwork.net:8884.
SolarNode will attempt to automatically configure networking access from a local DHCP server. For many deployments the local network router is the DHCP server. SolarNode will identify itself with the name solarnode, so in many cases you can reach the SolarNode setup app at http://solarnode/.
To find what network address SolarNode is using, you have a few options:
"},{"location":"users/networking/#consult-your-network-router","title":"Consult your network router","text":"
Your local network router is very likely to have a record of SolarNode's network connection. Log into the router's management UI and look for a device named solarnode.
"},{"location":"users/networking/#connect-a-keyboard-and-screen","title":"Connect a keyboard and screen","text":"
If your SolarNode supports connecting a keyboard and screen, you can log into the SolarNode command line console and run ip -br addr to print out a brief summary of the current networking configuration:
$ ip -br addr\n\nlo UNKNOWN 127.0.0.1/8 ::1/128\neth0 UP 192.168.0.254/24 fe80::e65f:1ff:fed1:893c/64\nwlan0 DOWN\n
In the previous output, SolarNode has an ethernet device eth0 with a network address 192.168.0.254 and a WiFi device wlan0 that is not connected. You could reach that SolarNode at http://192.168.0.254/.
Tip
You can get more details by running ip addr (without the -br argument).
If your device will use WiFi for network access, you will need to configure the network name and credentials to use. You can do that by creating a wpa_supplicant.conf file on the SolarNodeOS media (typically an SD card). For Raspberry Pi media, you can insert the SD card into your computer, which will mount the appropriate drive for you.
Once mounted use your favorite text editor to create a wpa_supplicant.conf file with content like this:
country=nz\nnetwork={\n ssid=\"wifi network name here\"\n psk=\"wifi password here\"\n}\n
Change the country=nz to match your own country code.
SolarNode supports a concept called operational modes. Modes are simple names like quiet and hyper that can be either active or inactive. Any number of modes can be active at a given time. In theory both quiet and hyper could be active simultaneously. Modes can be named anything you like.
Modes can be used by SolarNode components to alter their behavior dynamically. For example a data source component might stop collecting data from a set of data sources if the quiet mode is active, or start collecting data at an increased frequency if hyper is active. Some components might require specific names, which are described in their documentation. Components that allow configuring a required operational mode setting can also invert the requirement by adding a ! prefix to the mode name, for example !hyper can be thought of as \"when hyper is not active\". You can also specify exactly ! to match only when no mode is active.
Datum Filters also make use of operational modes, to toggle filters on and off dynamically.
Operational modes can be activated with an associated expiration date. The mode will remain active until the expiration date, at which time it will be automatically deactivated. A mode can always be manually deactivated before its associated expiration date.
The SolarUser Instruction API can be used to toggle operational modes on and off. The EnableOperationalModes instruction activates modes and DisableOperationalModes deactivates them.
SolarNode supports placeholders in some setting values, such as datum data source IDs. These allow you to define a set of parameters that can be consistently applied to many settings.
For example, imagine you manage many SolarNode devices across different buildings or sites. You'd like to follow a naming convention for your datum data source ID values that include a code for the building the node is deployed in, along the lines of /BUILDING/DEVICE. You could define a placeholder building and then configure the source IDs like /{building}/device. On each node you'd define the building placeholder with a building-specific value, so at runtime the nodes would resolve actual source ID values with those names replacing the {building} placeholder, for example /OFFICE1/meter.
Placeholders are written using the form {name:default} where name is the placeholder name and default is an optional default value to apply if no placeholder value exists for the given name. If a default value is not needed, omit the colon so the placeholder becomes just {name}.
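The resolution rules above can be sketched in Python. This is a hypothetical illustration of the `{name:default}` syntax, not SolarNode's actual implementation:

```python
import re

# Matches {name} or {name:default}; the default may be empty.
PLACEHOLDER = re.compile(r"\{([^}:]+)(?::([^}]*))?\}")

def resolve_placeholders(template, params):
    """Replace each placeholder with its parameter value, or its default."""
    def repl(match):
        name, default = match.group(1), match.group(2)
        value = params.get(name, default)
        if value is None:
            return match.group(0)  # no value and no default: leave as-is
        return str(value)
    return PLACEHOLDER.sub(repl, template)

print(resolve_placeholders(
    "/{building}/{floor:1}/{room}",
    {"building": "OFFICE1", "room": "BREAK"}))
# prints /OFFICE1/1/BREAK
```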
For example, imagine a set of placeholder values like
Name Value building OFFICE1 room BREAK
Here are some example settings with placeholders with what they would resolve to:
Input Resolved value /{building}/meter/OFFICE1/meter/{building}/{room}/temp/OFFICE1/BREAK/temp/{building}/{floor:1}/{room}/OFFICE1/1/BREAK"},{"location":"users/placeholders/#static-placeholder-configuration","title":"Static placeholder configuration","text":"
SolarNode will look for placeholder values defined in properties files stored in the conf/placeholders.d directory by default. In SolarNodeOS this is the /etc/solarnode/placeholders.d directory.
Warning
These files are only loaded once, when SolarNode starts up. If you make changes to any of them then SolarNode must be restarted.
The properties file names must have a .properties extension and follow Java properties file syntax. Put simply, each file contains lines like
name = value\n
where name is the placeholder name and value is its associated value. The example set of placeholder values shown previously could be defined in a /etc/solarnode/placeholders.d/mynode.properties file with this content:
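Using the example values shown earlier, that file would contain:

```
building = OFFICE1
room = BREAK
```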
SolarNode also supports storing placeholder values as Settings using the key placeholder. The SolarUser /instruction/add API can be used with the UpdateSetting topic to modify the placeholder values as needed. The type value is the placeholder name and the value is the placeholder value. Placeholders defined this way have priority over any similarly-named placeholders defined statically. Changes take effect as soon as SolarNode receives and processes the instruction.
Warning
Once a placeholder value is set via the UpdateSetting instruction, the same value defined as a static placeholder will be overridden and changes to the static value will be ignored.
For example, to set the floor placeholder to 2 on node 123, you could make a POST request to /solaruser/api/v1/sec/instr/add/UpdateSetting with the following JSON body:
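A sketch of that request body, assuming the instruction API's nodeId/params structure (with the topic carried in the URL path):

```json
{
  "nodeId": 123,
  "params": {
    "key": "placeholder",
    "type": "floor",
    "value": "2"
  }
}
```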
SolarSSH is SolarNetwork's method of connecting to SolarNode devices over the internet even when those devices are not directly reachable due to network firewalls or routing rules. It uses the Secure Shell Protocol (SSH) to ensure your connection is private and secure.
SolarSSH does not maintain permanently open SSH connections to SolarNode devices. Instead, connections are established on demand, when you need them. This lets you connect to a SolarNode when you need to perform maintenance, without requiring SolarNode to keep an open SSH connection to SolarSSH at all times.
In order to use SolarSSH, you will need a User Security Token to use for authentication.
You can use SolarSSH right in your browser to connect to any of your nodes.
The SolarSSH browser app
"},{"location":"users/remote-access/#choose-your-node-id","title":"Choose your node ID","text":"
Click on the node ID in the page title to change what node you want to connect to.
Changing the SolarSSH node ID
Bookmark a SolarSSH page for your node ID
You can append a ?nodeId=X to the SolarSSH browser URL https://go.solarnetwork.net/solarssh/, where X is a node ID, to make the app start with that node ID directly. For example to start with node 123, you could bookmark the URL https://go.solarnetwork.net/solarssh/?nodeId=123.
"},{"location":"users/remote-access/#provide-your-credentials","title":"Provide your credentials","text":"
Fill in User Security Token credentials for authentication. The node ID you are connecting to must be owned by the same account as the security token.
Click the Connect button to initiate the SolarSSH connection process. You will be presented with a dialog form to provide your SolarNodeOS system account credentials. This is only necessary if you want to connect to the SolarNodeOS system command line. If you only need to access the SolarNode Setup App, you can click the Skip button to skip this step. Otherwise, click the Login button to log into the system command line.
SolarNodeOS system account credentials form
SolarSSH will then establish the connection to your node. If you provided SolarNodeOS system account credentials previously and clicked the Login button, you will end up with a system command prompt, like this:
Once connected, you can access the remote node's Setup App by clicking the Setup button in the top-right corner of the window. This will open a new browser tab for the Setup App.
Accessing the SolarNode Setup App through a SolarSSH web connection
SolarSSH also supports a "direct" connection mode that allows you to connect using standard ssh client applications. This is a more advanced (and flexible) way of connecting to your nodes: it even allows you to access other network services on the same network as the node, and provides full SSH integration including port forwarding, scp, and sftp support.
Direct SolarSSH connections require using a SSH client that supports the SSH \"jump\" host feature. The \"jump\" server hosted by SolarNetwork Foundation is available at ssh.solarnetwork.net:9022.
The \"jump\" connection user is formed by combining a node ID with a user security token, separated by a : character. The general form of a SolarSSH direct connection \"jump\" host thus looks like this:
NODE:TOKEN@ssh.solarnetwork.net:9022\n
where NODE is a SolarNode ID and TOKEN is a SolarNetwork user security token.
The actual SolarNode user can be any OS user (typically solar) and the hostname can be anything. A good practice for the hostname is to use one derived from the SolarNode ID, e.g. solarnode-123.
Using OpenSSH, a complete connection command to log in as a solar user looks like this, passing the "jump" host via a -J argument:
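For example, for node 123 (with SN_TOKEN_HERE standing in for your actual security token ID):

```
ssh -J '123:SN_TOKEN_HERE@ssh.solarnetwork.net:9022' solar@solarnode-123
```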
SolarNetwork security tokens often contain characters that must be escaped with a \\ character for your shell to interpret them correctly. For example, a token like 9gPa9S;Ux1X3kK)YN6&g might need to have the ;)& characters escaped like 9gPa9S\\;Ux1X3kK\\)YN6\\&g.
You will be first prompted to enter a password, which must be the token secret. You might then be prompted for the SolarNode OS user's password. Here's an example screen shot:
Accessing the SolarNode system command line through a SolarSSH direct connection
If you find yourself using SolarSSH connections frequently, a handy bash or zsh shell function can help make the connection process easier to remember. Here's an example that gives you a solarssh command that accepts a SolarNode ID argument, followed by any optional SSH arguments:
function solarssh () {\n  local node_id=\"$1\"\n  if [ -z \"$node_id\" ]; then\n    echo 'Must provide node ID, e.g. 123'\n  else\n    shift\n    echo \"Enter SN token secret when first prompted for password. Enter node $node_id password second.\"\n    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \\\n      -o LogLevel=ERROR -o NumberOfPasswordPrompts=1 \\\n      -J \"$node_id\"':SN_TOKEN_HERE@ssh.solarnetwork.net:9022' \\\n      $@ solar@solarnode-$node_id\n  fi\n}\n
Just replace SN_TOKEN_HERE with a user security token. After integrating this into your shell's configuration (e.g. ~/.bashrc or ~/.zshrc) then you could connect to node 123 like:
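For example, connecting to node 123 would then look like:

```
solarssh 123
```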
PuTTY is a popular tool for Windows that supports SolarSSH connections. To connect to a SolarNode using PuTTY, you must:
Configure a SSH connection proxy to ssh.solarnetwork.net:9022 using a username like NODE_ID:TOKEN_ID and the corresponding token secret as the password.
Optionally configure a tunnel to localhost:8080 to access the SolarNode Setup App
Configure the session to connect to solarnode-NODE_ID on port 22
Open the Connection > Proxy configuration category in PuTTY, and configure the following settings:
Setting Value Proxy type SSH to proxy and use port forwarding Proxy hostname ssh.solarnetwork.net Port 9022 Username The desired node ID, followed by a :, followed by a user security token ID, that is: NODE_ID:TOKEN_ID Password The user security token secret.
To access the SolarNode Setup App, you can configure PuTTY to forward a port on your local machine to localhost:8080 on the node. Once the SSH connection is established, you can open a browser to http://localhost:PORT to access the SolarNode Setup App. You can use any available local port; for example, if you used port 8888 then you would open a browser to http://localhost:8888.
Open the Connection > SSH > Tunnels configuration category in PuTTY, and configure the following settings:
Setting Value Source port A free port on your machine, for example 8888. Destination localhost:8080 Add You must click the Add button to add this tunnel. You can then add other tunnels as needed.
Finally under the Session configuration category in PuTTY, configure the Host Name and Port to connect to SolarNode. You can also provide a session name and click the Save button to save all the settings you have configured, making it easy to load them in the future.
Setting Value Host Name Does not actually matter, but a name like solarnode-NODE_ID is helpful, where NODE_ID is the ID of the node you are connecting to. Port 22
Configuring PuTTY session settings
"},{"location":"users/remote-access/#putty-open-connection","title":"PuTTY open connection","text":"
On the Session configuration category, click the Open button to establish the SolarSSH connection. You might be prompted to confirm the identity of the ssh.solarnetwork.net server first. Click the Accept button if this is the case.
PuTTY host verification alert
PuTTY will connect to SolarSSH and after a short while prompt you for the SolarNodeOS user you would like to connect to SolarNode with. Typically you would use the solar account, so you would type solar followed by Enter. You will then be prompted for that account's password, so type that in and type Enter again. You will then be presented with the SolarNodeOS shell prompt.
PuTTY node login
Assuming you configured a SSH tunnel on port 8888 to localhost:8080, you can now open http://localhost:8888 to access the SolarNode Setup App.
Once connected to SolarSSH, access the SolarNode Setup App in your browser.
Some SolarNode features require SolarNetwork Security Tokens to use as authentication credentials for SolarNetwork services. Security Tokens are managed on the Security Tokens page in SolarNetwork.
User Security Tokens allow access to web services that perform functions directly on your behalf, for example issue an instruction to your SolarNode.
Click the \"+\" button in the User Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new User Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
A newly generated security token \u2014 make sure to save the token in a safe place
Data Security Tokens allow access to web services that query the data collected by your SolarNodes.
Click the \"+\" button in the Data Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new Data Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
Security tokens can be configured with a Security Policy that restricts the types of functions or data the token has permission to access.
Policy User Node Description API Paths Restrict the token to specific API methods. Expiry Make the token invalid after a specific date. Minimum Aggregation Restrict the data aggregation level allowed. Node IDs Restrict to specific node IDs. Refresh Allowed Allow applications given a signing key to refresh it before the token expires. Source IDs Restrict to specific datum source IDs. Node Metadata Restrict to specific node metadata. User Metadata Restrict to specific user metadata."},{"location":"users/security-tokens/#api-paths","title":"API Paths","text":"
The API Paths policy restricts the token to specific SolarNet API methods, based on their URL path. If this policy is not included then all API methods are allowed.
The Minimum Aggregation policy restricts the token to a minimum data aggregation level. If this policy is not included, or if the minimum level is set to None, data for any aggregation level is allowed.
The Node IDs policy restricts the token to specific node IDs. If this policy is not included, then the token has access to all node IDs in your SolarNetwork account.
The Node Metadata policy restricts the token to specific portions of node-level metadata. If this policy is not included then all node metadata is allowed.
The Refresh Allowed policy allows applications that are given a signing key, rather than the token's private password, to refresh the key as long as the token has not expired.
The Source IDs policy restricts the token to specific datum source IDs. If this policy is not included, then the token has access to all source IDs in your SolarNetwork account.
The User Metadata policy restricts the token to specific portions of account-level metadata. If this policy is not included then all user metadata is allowed.
SolarNode plugins support configurable properties, called settings. The SolarNode setup app allows you to manage settings through simple web forms.
Settings can also be exported and imported in a CSV format, and can be applied when SolarNode starts up with Auto Settings CSV files. Here is an example of a settings form in the SolarNode setup app:
There are 3 settings represented in that screen shot:
Schedule
Destination
Temporary Destination
Tip
Nearly every form field you can edit in the SolarNode setup app represents a setting for a component in SolarNode.
In the SolarNode setup app the settings can be imported and exported from the Settings > Backups screen in the Settings Backup & Restore section:
Settings files are CSV (comma separated values) files, easily exported from spreadsheet applications like Microsoft Excel or Google Sheets. The CSV must include a header row, which is skipped. All other rows will be processed as settings.
The Settings CSV format uses a quite general format and contains the following columns:
# Name Description 1 key A unique identifier for the service the setting applies to. 2 type A unique identifier for the setting with the service specified by key, typically using standard property syntax. 3 value The setting value. 4 flags An integer bitmask of flags associated with the setting. See the flags section for more info. 5 modified The date the setting was last modified, in yyyy-MM-dd HH:mm:ss format.
To understand the key and type values required for a given component requires consulting the documentation of the plugin that provides that component. You can get a pretty good picture of what the values are by exporting the settings after configuring a component in SolarNode. Typically the key value will mirror a plugin's Java package name, and type follows a JavaScript-like property accessor syntax representing a configurable property on the component.
The type setting value usually defines a component property using a JavaScript-like syntax with these rules:
Expression Example Description Property name a property named name Nested property name.subname a nested property subname on a parent property name List property name[0] the first element of an indexed list property named name Map property name['key'] the key element of the map property name
These rules can be combined into complex expressions, for example propIncludes[0].name or delegate.connectionFactory.propertyFilters['UID'].
Each setting has a set of flags that can be associated with it. The following table outlines the bit offset for each flag along with a description:
# Name Description 0 Ignore modification date If this flag is set then changes to the associated setting will not trigger a new auto backup. 1 Volatile If this flag is set then changes to the associated setting will not trigger an internal \"setting changed\" event to be broadcast.
Note these are bit offsets, so the decimal value to ignore modification date is 1, to mark as volatile is 2, and for both is 3.
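The bit-offset arithmetic can be illustrated with a tiny Python sketch (the constant names are invented for illustration):

```python
# Setting flag bit offsets, per the table above.
IGNORE_MODIFICATION_DATE = 1 << 0  # bit offset 0 -> decimal 1
VOLATILE = 1 << 1                  # bit offset 1 -> decimal 2

# The flags column stores the bitwise OR of all active flags.
both = IGNORE_MODIFICATION_DATE | VOLATILE
print(both)
# prints 3
```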
Many plugins provide component factories which allow you to configure any number of instances of that component. Each component instance is assigned a unique identifier when it is created. In the SolarNode setup app, the component instance identifiers appear throughout the UI:
In the previous example CSV the Modbus I/O plugin allows you to configure any number of Modbus connection components, each with their own specific settings. That is an example of a component factory. The settings CSV will include a special row to indicate that such a factory component should be activated, using a unique identifier, and then all the settings associated with that factory instance will have that unique identifier appended to its key values.
Going back to that example CSV, this is the row that activates a Modbus I/O component instance with an identifier of 1:
The syntax for the key column is simply the service identifier followed by .FACTORY. The type and value columns are both set to the same unique identifier; in this example that identifier is 1. For all settings specific to a factory component, the key column will be the service identifier followed by .IDENTIFIER, where IDENTIFIER is the unique instance identifier.
Here is an example that shows two factory instances configured: Lighting and HVAC. Each has a different serialParams.portName setting value configured:
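A minimal sketch of what such rows could look like, using a hypothetical `com.example.modbus` service identifier (a real plugin's identifier will differ) and illustrative dates and port values:

```csv
key,type,value,flags,modified
com.example.modbus.FACTORY,Lighting,Lighting,0,2023-01-01 00:00:00
com.example.modbus.Lighting,serialParams.portName,/dev/ttyUSB0,0,2023-01-01 00:00:00
com.example.modbus.FACTORY,HVAC,HVAC,0,2023-01-01 00:00:00
com.example.modbus.HVAC,serialParams.portName,/dev/ttyUSB1,0,2023-01-01 00:00:00
```

Note how each `.FACTORY` row activates an instance, and each instance's settings append that instance's identifier to the service identifier in the key column.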
SolarNode settings can also be configured through Auto Settings, applied when SolarNode starts up, by placing Settings CSV files in the /etc/solarnode/auto-settings.d directory. These settings are applied only if they don't already exist or the modified date in the settings file is newer than the date they were previously applied.
SolarFlux is the name of a real-time cloud-based service for datum using a publish/subscribe integration model. SolarNode supports publishing datum to SolarFlux and your own applications can subscribe to receive datum messages as they are published.
SolarFlux is based on MQTT. To integrate with SolarFlux you use an MQTT client application or library. See the SolarFlux Integration Guide for more information.
Each datum message is published by default as a CBOR-encoded map (essentially a binary JSON object) to an MQTT topic based on the datum's source ID. The map keys are the datum property names. You can configure a Datum Encoder to encode datum into a different format by configuring a filter. For example, the Protobuf Datum Encoder supports encoding datum into Protobuf messages.
Messages are published with the MQTT retained flag set by default, which means the most recently published datum is saved by SolarFlux. When an application subscribes to a topic it will immediately receive any retained message for that topic. In this way, SolarFlux will provide a "most recent" snapshot of all datum across all nodes and sources.
Example SolarFlux datum message, expressed as JSON
The MQTT topic each datum is published to is derived from the node ID and datum source ID, according to this pattern:
node/N/datum/A/S
| Pattern Element | Description |
|---|---|
| `N` | The node ID the datum was captured on |
| `A` | An aggregation key; will be `0` for the "raw" datum captured in SolarNode |
| `S` | The datum source ID; note that any leading `/` in the source ID is stripped from the topic |

Example MQTT topics
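The topic pattern above can be sketched as a small helper function. This is an illustrative sketch, not a SolarNode API:

```python
def datum_topic(node_id: int, source_id: str, agg: str = "0") -> str:
    """Build a SolarFlux MQTT topic per the node/N/datum/A/S pattern.

    Any leading '/' in the source ID is stripped, matching the rule above.
    """
    source = source_id.lstrip("/")
    return f"node/{node_id}/datum/{agg}/{source}"

print(datum_topic(123, "/power/1"))  # node/123/datum/0/power/1
```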
## Log datum stream
The EventAdmin Appender is supported: log events are turned into a datum stream and published to SolarFlux, with the log timestamps used as the datum timestamps.
### Log datum stream topic mapping
The topic assigned to log events is log/ with the log name appended. Period characters (.) in the log name are replaced with slash characters (/). For example, a log name net.solarnetwork.node.datum.modbus.ModbusDatumDataSource will be turned into the topic log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource.
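The mapping rule above is simple enough to express directly. A minimal sketch (not a SolarNode API):

```python
def log_topic(log_name: str) -> str:
    """Map a log name to its SolarFlux topic: 'log/' prefix, '.' -> '/'."""
    return "log/" + log_name.replace(".", "/")

print(log_topic("net.solarnetwork.node.datum.modbus.ModbusDatumDataSource"))
# log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource
```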
### Log datum stream properties
The datum stream consists of the following properties:
| Property | Class | Type | Description |
|---|---|---|---|
| `level` | `s` | String | The log level name, e.g. `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, or `FATAL`. |
| `priority` | `i` | Integer | The log level priority (lower values have more priority), e.g. 600, 500, 400, 300, 200, or 100. |
| `name` | `s` | String | The log name. |
| `msg` | `s` | String | The log message. |
| `exMsg` | `s` | String | An exception message, if an exception was included. |
| `exSt` | `s` | String | A newline-delimited list of stack trace element values, if an exception was included. |

## Settings
The SolarFlux Upload Service ships with default settings that work out-of-the-box without any configuration. There are many settings you can change to better suit your needs, however.
Each component configuration contains the following overall settings:
| Setting | Description |
|---|---|
| Host | The URI for the SolarFlux server to connect to. Normally this is `influx.solarnetwork.net:8884`. |
| Username | The MQTT username to use. Normally this is `solarnode`. |
| Password | The MQTT password to use. Normally this is not needed, as the node's certificate is used for authentication. |
| Exclude Properties | A regular expression to match property names on all datum sources to exclude from publishing. |
| Required Mode | If configured, an operational mode that must be active for any data to be published. |
| Maximum Republish | If offline message persistence has been configured, the maximum number of offline messages to publish in one go. See the offline persistence section for more information. |
| Reliability | The MQTT quality of service level to use. Normally the default of At most once is sufficient. |
| Version | The MQTT protocol version to use. Starting with version 5, MQTT topic aliases will be used if the server supports them, which can save a significant amount of network bandwidth when long source IDs are in use. |
| Retained | Toggle the MQTT retained message flag. When enabled, the MQTT server will store the most recently published message on each topic so it is immediately available when clients connect. |
| Wire Logging | Toggle verbose logging on/off to support troubleshooting. The messages are logged to the `net.solarnetwork.mqtt` topic at `DEBUG` level. |
| Filters | Any number of datum filter configurations. |
For TLS-encrypted connections, SolarNode will make the node's own X.509 certificate available for client authentication.
Each component can define any number of filters, which are used to manipulate the datum published to SolarFlux, such as:
restrict the frequency at which individual datum sources are published
restrict which properties of datum are posted
encode the message into something other than CBOR
The filter settings can be very useful to constrain how much data is sent to SolarFlux, for example on nodes using mobile internet connections where the cost of posting data is high.
A filter can configure a Datum Encoder to encode the MQTT message with, if you want to use a format other than the default CBOR encoding. This can be combined with a Source ID pattern to encode specific sources with specific encoders. For example when using the Protobuf Datum Encoder a single Protobuf message type is supported per encoder. If you want to encode different datum sources into different Protobuf messages, you would configure one encoder per message type, and then one filter per source ID with the corresponding encoder.
Note
All filters are applied in the order they are defined, and then the first filter with a Datum Encoder configured that matches the filter's Source ID pattern will be used to encode the datum. If no Datum Encoder is configured the default CBOR encoding will be used.
Each filter configuration contains the following settings:
| Setting | Description |
|---|---|
| Source ID | A case-insensitive regular expression to match against datum source IDs. If defined, this filter will only be applied to datum with matching source ID values. If not defined, this filter will be applied to all datum. For example `^solar` would match any source ID starting with `solar`. |
| Datum Filter | The Service Name of a Datum Filter component to apply before encoding and posting datum. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Datum Encoder | The Service Name of a Datum Encoder component to encode datum with. The encoder will be passed a `java.util.Map` object with all the datum properties. If not configured then CBOR will be used. |
| Limit Seconds | The minimum number of seconds to limit datum that match the configured Source ID pattern. If datum are produced faster than this rate, they will be filtered out. Set to `0` or leave empty for no limit. |
| Property Includes | A list of case-insensitive regular expressions to match against datum property names. If configured, only properties that match one of these expressions will be included in the filtered output. For example `^watt` would match any property starting with `watt`. |
| Property Excludes | A list of case-insensitive regular expressions to match against datum property names. If configured, any property that matches one of these expressions will be excluded from the filtered output. For example `^temp` would match any property starting with `temp`. Exclusions are applied after property inclusions. |
Warning
The datum sourceId and created properties will be affected by the property include/exclude filters! If you define any include filters, you might want to add an include rule for ^created$. You might like to have sourceId removed to conserve bandwidth, given that value is part of the MQTT topic the datum is posted on and thus redundant.
By default if the connection to the SolarFlux server is down for any reason, all messages that would normally be published to the server will be discarded. This is suitable for most applications that rely on SolarFlux to view real-time status updates only, and SolarNode uploads datum to SolarNet for long-term persistence. For applications that rely on SolarFlux for more, it might be desirable to configure SolarNode to locally cache SolarFlux messages when the connection is down, and then publish those cached messages when the connection is restored. This can be accomplished by deploying the MQTT Persistence plugin.
When that plugin is available, all messages processed by this service will be saved locally when the MQTT connection is down, and then posted once the MQTT connection comes back up. Note the following points to consider:
The cached messages will be posted with the MQTT retained flag set to false.
The cached messages will be posted in an unspecified order.
The cached messages may be posted more than once, regardless of the configured Reliability setting.
Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. Datum Filters vary wildly in the functionality they provide; here are some examples of the things they can do:
Throttle the rate at which datum are saved to SolarNet
Remove unwanted properties from datum
Split a datum so some properties are moved to another datum stream
Join the properties of multiple datum streams into a single datum
Inject properties from external services
Derive new properties from dynamic expressions
Datum Filters do not create datum
It is helpful to remember that Datum Filters do not create datum, they only manipulate datum created elsewhere, typically by datum data sources.
There are four main places where datum filters can be applied:
On the Datum Queue, immediately after each datum is captured
As a Global Datum Filter, just before uploading to SolarNet
On the Global Datum Filter Chain, just before uploading to SolarNet
As a SolarFlux Datum Filter, just before uploading to SolarFlux
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, each filter's result passed to the next available filter until all filters have been applied.
Conceptual diagram of the Datum Queue, processing datum along with filters manipulating them
At the end of processing, the datum is either
uploaded to SolarNet immediately, or
saved locally, to be uploaded at some point in the future
Most of the time datum are uploaded to SolarNet immediately after processing. If the network is down, or SolarNode is configured to only upload datum in batches, then datum are saved locally in SolarNode, and a periodic job will attempt to upload them later on, in batches.
See the Setup App Datum Queue section for information on how to configure the Datum Queue.
When to configure filters on the Datum Queue, as opposed to other places?
The Datum Queue is a great place to configure filters that must be processed at most once per datum, and do not depend on what time the datum is uploaded to SolarNet.
## Global Datum Filters
Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Note
Some filters support both Global and User based filter configuration, and often you can achieve the same overall result in multiple ways. Global filters are convenient for the subset of filters that support Global configuration, but for complex filtering often it can be easier to configure all filters as User filters, using the Global Datum Filter Chain as needed.
## Global Datum Filter Chain
The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
## SolarFlux Datum Filters
The Datum Filter Chain is a User Datum Filter that you configure with a list, or chain, of other User Datum Filters. When the Filter Chain executes, it executes each of the configured Datum Filters, in the order defined. This filter can be used like any other Datum Filter, allowing multiple filters to be applied in a defined order.
A Filter Chain acts like an ordered group of Datum Filters
Tip
Some services support configuring only a single Datum Filter setting. You can use a Filter Chain to apply multiple filters in those services.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---|---|
| Available Filters | A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to include that filter in the chain. |
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Datum Filters | The list of Service Name values of User Datum Filter components to apply to datum. |

# Control Updater Datum Filter
The Control Updater Datum Filter provides a way to update controls with the result of an expression, optionally populating the expression result as a datum property.
This filter is provided by the Standard Datum Filters plugin.
The screen shot shows a filter that would toggle the /power/switch/1 control on/off based on the frequency property in the /power/1 datum stream: on when the frequency is 50 or higher, off otherwise.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---|---|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Control Configurations | A list of control expression configurations. |
Each control configuration contains the following settings:
| Setting | Description |
|---|---|
| Control ID | The ID of the control to update with the expression result. |
| Property | The optional datum property to store the expression result in. |
| Property Type | The datum property type to use. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

## Expressions
See the Expressions guide for general expressions reference. The root object is a DatumExpressionRoot that lets you treat all datum properties, and filter parameters, as expression variables directly.
# Downsample Datum Filter
The Downsample Datum Filter provides a way to down-sample higher-frequency datum samples into lower-frequency (averaged) datum samples. The filter will collect a configurable number of samples and then generate a down-sampled sample where an average of each collected instantaneous property is included. In addition minimum and maximum values of each averaged property are added.
This filter is provided by the Standard Datum Filters plugin.
## Settings

| Setting | Description |
|---|---|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Sample Count | The number of samples to average over. |
| Decimal Scale | A maximum number of digits after the decimal point to round to. Set to `0` to round to whole numbers. |
| Property Excludes | A list of property names to exclude. |
| Min Property Template | A string format to use for computed minimum property values. Use `%s` as the placeholder for the original property name, e.g. `%s_min`. |
| Max Property Template | A string format to use for computed maximum property values. Use `%s` as the placeholder for the original property name, e.g. `%s_max`. |

# Expression Datum Filter
The Expression Datum Filter provides a way to generate new properties by evaluating expressions against existing properties.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---|---|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Expressions | A list of expression configurations that are evaluated to derive datum property values from other property values. |
Use the + and - buttons to add/remove expression configurations.
Each expression configuration contains the following settings:
| Setting | Description |
|---|---|
| Property | The datum property to store the expression result in. |
| Property Type | The datum property type to use. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

## Expressions
See the SolarNode Expressions guide for general expressions reference. The root object is a DatumExpressionRoot that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
| Property | Type | Description |
|---|---|---|
| `datum` | `Datum` | A `Datum` object, populated with data from all property and virtual meter configurations. |
| `props` | `Map<String,Object>` | Simple `Map` based access to the properties in `datum`, and transform parameters, to simplify expressions. |
The following methods are available:
| Function | Arguments | Result | Description |
|---|---|---|---|
| `has(name)` | `String` | `boolean` | Returns `true` if a property named `name` is defined. |
| `hasLatest(source)` | `String` | `boolean` | Returns `true` if a datum with source ID `source` is available via the `latest(source)` function. |
| `latest(source)` | `String` | `DatumExpressionRoot` | A `DatumExpressionRoot` for the latest available datum matching the given source ID, or `null` if not available. |

## Expression examples
Assuming a datum sample with properties like the following:
| Property | Value |
|---|---|
| `current` | `7.6` |
| `voltage` | `240.1` |
| `status` | `Error` |
Then here are some example expressions and the results they would produce:
| Expression | Result | Comment |
|---|---|---|
| `voltage * current` | `1824.76` | Simple multiplication of two properties. |
| `props['voltage'] * props['current']` | `1824.76` | Another way to write the previous expression. Can be useful if the property names contain non-alphanumeric characters, like spaces. |
| `has('frequency') ? 1 : null` | `null` | Uses the `?:` if/then/else operator to evaluate to `null` because the `frequency` property is not available. When an expression evaluates to `null` then no property will be added to the output samples. |
| `current > 7 or voltage > 245 ? 1 : null` | `1` | Uses comparison and logic operators to evaluate to `1` because `current` is greater than 7. |
| `voltage * current * (hasLatest('battery') ? 1.0 - latest('battery')['soc'] : 1)` | `364.952` | Assuming a `battery` datum with a `soc` property value of `0.8`, the expression resolves to `7.6 * 240.1 * (1.0 - 0.8)`. |

# Join Datum Filter
The Join Datum Filter provides a way to merge the properties of multiple datum streams into a new derived datum stream.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---|---|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Output Source ID | The source ID of the merged datum stream. Placeholders are allowed. |
| Coalesce Threshold | When 2 or more, wait until datum from this many different source IDs have been encountered before generating an output datum. Once a coalesced datum has been generated, the tracking of input sources resets and another datum will only be generated after the threshold is met again. If 1 or less, generate output datum for all input datum. |
| Swallow Input | If enabled, filter out input datum after merging. Otherwise leave the input datum as-is. |
| Source Property Mappings | A list of source IDs with associated property name templates to rename the properties with. Each template must contain a `{p}` parameter which will be replaced by the property names merged from datum encountered with the associated source ID. For example `{p}_s1` would map an input property `watts` to `watts_s1`. |
Use the + and - buttons to add/remove expression configurations.
Each source property mapping configuration contains the following settings:
| Setting | Description |
|---|---|
| Source ID | A source ID pattern to apply the associated Mapping to. Any capture groups (parts of the pattern between `()` groups) are provided to the Mapping template. |
| Mapping | A property name template with a `{p}` parameter for an input property name to be mapped to a merged (output) property name. Pattern capture groups from Source ID are available starting with `{1}`. For example `{p}_s1` would map an input property `watts` to `watts_s1`. |
Unmapped properties are copied
If a matching source property mapping does not exist for an input datum source ID then the property names of that datum are used as-is.
The Source ID pattern can define capture groups that will be provided to the Mapping template as numbered parameters, starting with {1}. For example, assuming an input datum property watts, then:
| Datum Source ID | Source ID Pattern | Mapping | Result |
|---|---|---|---|
| `/power/main` | `/power/` | `{p}_main` | `watts_main` |
| `/power/1` | `/power/(\d+)$` | `{p}_s{1}` | `watts_s1` |
| `/power/2` | `/power/(\d+)$` | `{p}_s{1}` | `watts_s2` |
| `/solar/1` | `/(\w+)/(\d+)$` | `{p}_{1}{2}` | `watts_solar1` |
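The template substitution described above can be sketched with ordinary regular expressions. This helper is illustrative only and is not how SolarNode implements the Join filter internally:

```python
import re

def map_property(source_id: str, pattern: str, template: str, prop: str) -> str:
    """Apply a source-property mapping template to one property name.

    '{p}' is replaced by the property name, and '{1}', '{2}', ... by the
    capture groups matched against the source ID.
    """
    match = re.search(pattern, source_id)
    out = template.replace("{p}", prop)
    if match:
        for i, group in enumerate(match.groups(), start=1):
            out = out.replace("{" + str(i) + "}", group)
    return out

print(map_property("/power/1", r"/power/(\d+)$", "{p}_s{1}", "watts"))    # watts_s1
print(map_property("/solar/1", r"/(\w+)/(\d+)$", "{p}_{1}{2}", "watts"))  # watts_solar1
```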
To help visualize property mapping with a more complete example, let's imagine we have some datum streams being collected and the most recent datum from each look like this:
/meter/1

```json
{
  "watts": 3213
}
```

/meter/2

```json
{
  "watts": -842
}
```

/solar/1

```json
{
  "watts": 4055,
  "current": 16.89583
}
```
Here are some examples of how some source mapping expressions could be defined, including how multiple mappings can be used at once:
| Source ID Patterns | Mappings | Result |
|---|---|---|
| `/(\w+)/(\d+)` | `{1}_{p}{2}` | `{"meter_watts1": 3213, "meter_watts2": -842, "solar_watts1": 4055, "solar_current1": 16.89583}` |
# Operational Mode Datum Filter
The Operational Mode Datum Filter provides a way to evaluate expressions to toggle operational modes. When an expression evaluates to true the associated operational mode is activated. When an expression evaluates to false the associated operational mode is deactivated.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
| Setting | Description |
|---|---|
| Service Name | A unique ID for the filter, to be referenced by other components. |
| Service Group | An optional service group name to assign. |
| Source ID | A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. |
| Required Mode | If configured, an operational mode that must be active for this filter to be applied. |
| Required Tag | Only apply the filter on datum with the given tag. A tag may be prefixed with `!` to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a `,` delimiter, in which case at least one of the configured tags must match to apply the filter. |
| Expressions | A list of expression configurations that are evaluated to toggle operational modes. |
Use the + and - buttons to add/remove expression configurations.
Each expression configuration contains the following settings:
| Setting | Description |
|---|---|
| Mode | The operational mode to toggle. |
| Expire Seconds | If configured and greater than 0, the number of seconds after activating the operational mode to automatically deactivate it. If not configured or 0 then the operational mode will be deactivated when the expression evaluates to `false`. See below for more information. |
| Property | If configured, the datum property to store the expression result in. See below for more information. |
| Property Type | The datum property type to use if Property is configured. See below for more information. |
| Expression | The expression to evaluate. See below for more info. |
| Expression Language | The expression language to write Expression in. |

## Expire setting
When configured the expression will never deactivate the operational mode directly. When evaluating the given expression, if it evaluates to true the mode will be activated and configured to deactivate after this many seconds. If the operation mode was already active, the expiration will be extended by this many seconds.
This configuration can be thought of like a time out as used on motion-detecting lights: each time motion is detected the light is turned on (if not already on) and a timer set to turn the light off after so many seconds of no motion being detected.
Note that the operational modes service might actually deactivate the given mode a short time after the configured expiration.
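The "motion-detecting light" behavior described above can be sketched like this. The class is purely illustrative; SolarNode's operational modes service works differently internally:

```python
class ExpiringMode:
    """Sketch of the expire behavior: each activation (re)sets a deadline,
    and the mode reads as inactive once the deadline has passed."""

    def __init__(self, expire_seconds: float) -> None:
        self.expire_seconds = expire_seconds
        self.deadline = 0.0

    def activate(self, now: float) -> None:
        # Each true expression result extends the expiration window.
        self.deadline = now + self.expire_seconds

    def is_active(self, now: float) -> bool:
        return now < self.deadline

mode = ExpiringMode(expire_seconds=60)
mode.activate(now=0)
print(mode.is_active(now=30))   # True
mode.activate(now=30)           # deadline extended to 90
print(mode.is_active(now=80))   # True
print(mode.is_active(now=95))   # False
```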
A property does not have to be populated. If you provide a Property name to populate, the value of the datum property depends on the configured property type:
| Type | Description |
|---|---|
| Instantaneous | The property value will be `1` or `0` based on `true` and `false` expression results. |
| Status | The property will be the expression result, so `true` or `false`. |
| Tag | A tag named as the configured property will be added if the expression is `true`, or removed if `false`. |

## Expressions
See the Expressions section for general expressions reference. The expression must evaluate to a boolean (true or false) result. When it evaluates to true the configured operational mode will be activated. When it evaluates to false the operational mode will be deactivated (unless an expire setting has been configured).
The root object is a datum samples expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
| Property | Type | Description |
|---|---|---|
| `datum` | `GeneralNodeDatum` | A `GeneralNodeDatum` object, populated with data from all property and virtual meter configurations. |
| `props` | `Map<String,Object>` | Simple `Map` based access to the properties in `datum`, and transform parameters, to simplify expressions. |
The following methods are available:
| Function | Arguments | Result | Description |
|---|---|---|---|
| `has(name)` | `String` | `boolean` | Returns `true` if a property named `name` is defined. |

## Expression examples
Assuming a datum sample with properties like the following:
| Property | Value |
|---|---|
| `current` | `7.6` |
| `voltage` | `240.1` |
| `status` | `Error` |
Then here are some example expressions and the results they would produce:
| Expression | Result | Comment |
|---|---|---|
| `voltage * current > 1800` | `true` | Since `voltage * current` is `1824.76`, the expression is `true`. |
| `status != 'Error'` | `false` | Since `status` is `Error` the expression is `false`. |

# Parameter Expression Datum Filter
The Parameter Expression Datum Filter provides a way to generate filter parameters by evaluating expressions against existing properties. The generated parameters will be available to any further datum filters in the same filter chain.
Tip
Parameters are useful as temporary variables that you want to use during datum processing but do not want to include as datum properties that get posted to SolarNet.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to derive parameter values from other property values.
Use the + and - buttons to add/remove expression configurations.
Each expression configuration contains the following settings:
Setting Description Parameter The filter parameter name to store the expression result in. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/parameter-expression/#expressions","title":"Expressions","text":"
See the Expressions section for general expressions reference. This filter supports Datum Expressions, which let you treat all datum properties, and filter parameters, as expression variables directly.
"},{"location":"users/datum-filters/property/","title":"Property Datum Filter","text":"
The Property Datum Filter provides a way to remove properties of datum. This can help if some component generates properties that you don't actually need to use.
For example you might have a plugin that collects data from an AC power meter that captures power, energy, quality, and other properties each time a sample is taken. If you are only interested in capturing the power and energy properties you could use this component to remove all the others.
This component can also throttle individual properties over time, so that individual properties are posted less frequently than the rate at which the datum they belong to is sampled. For example a plugin for an AC power meter might collect datum once per minute, and you want to collect the energy properties of the datum every minute but the quality properties only once every 10 minutes.
The general idea for filtering properties is to configure rules that define which datum sources you want to filter, along with a list of properties to include and/or a list to exclude. All matching is done using regular expressions, which can help make your rules concise.
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Property Includes A list of property names to include, removing all others. This is a list of case-insensitive patterns to match against datum property names. If any inclusion patterns are configured then only properties matching one of these patterns will be included in datum. Any property name that does not match one of these patterns will be removed. Property Excludes A list of property names to exclude. This is a list of case-insensitive patterns to match against datum property names. If any exclusion expressions are configured then any property that matches one of these expressions will be removed. Exclusion expressions are processed after inclusion expressions when both are configured.
Use the + and - buttons to add/remove property include/exclude patterns.
Each property inclusion setting contains the following settings:
Setting Description Name The property name pattern to include. Limit Seconds A throttle limit, in seconds, to apply to included properties. The minimum number of seconds to limit properties that match the configured property inclusion pattern. If properties are produced faster than this rate, they will be filtered out. Leave empty (or 0) for no throttling."},{"location":"users/datum-filters/split/","title":"Split Datum Filter","text":"
The Split Datum Filter provides a way to split the properties of a datum stream into multiple new derived datum streams.
This filter is provided by the Standard Datum Filters plugin.
In the example screen shot shown above, the /power/meter/1 datum stream is split into two datum streams: /meter/1/power and /meter/1/energy. Properties with names containing current, voltage, or power (case-insensitive) will be copied to /meter/1/power. Properties with names containing hour (case-insensitive) will be copied to /meter/1/energy.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Swallow Input If enabled, then discard input datum after splitting. Otherwise leave the input datum as is. Property Source Mappings A list of property name regular expression with associated source IDs to copy matching properties to."},{"location":"users/datum-filters/split/#property-source-mappings-settings","title":"Property Source Mappings settings","text":"
Use the + and - buttons to add/remove Property Source Mapping configurations.
Each property source mapping configuration contains the following settings:
Setting Description Property A property name case-sensitive regular expression to match on the input datum stream. You can enable case-insensitive matching by including a (?i) prefix. Source ID The destination source ID to copy the matching properties to. Supports placeholders.
Tip
If multiple property name expressions match the same property name, that property will be copied to all the datum streams of the associated source IDs.
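To make the matching behaviour concrete, here is a minimal Python sketch using the /power/meter/1 example mappings from above (the pattern and source ID values are illustrative, and the plugin itself is not implemented this way):

```python
import re

# Property-name patterns mapped to destination source IDs, mirroring
# the /power/meter/1 example above (illustrative values only)
mappings = {
    r"(?i)current|voltage|power": "/meter/1/power",
    r"(?i)hour": "/meter/1/energy",
}

def split_targets(prop_name):
    # a property matching several patterns is copied to every
    # matching destination source ID
    return [src for pat, src in mappings.items() if re.search(pat, prop_name)]
```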
"},{"location":"users/datum-filters/tariff/","title":"Time-based Tariff Datum Filter","text":"
The Tariff Datum Filter provides a way to inject time-based tariff rates based on a flexible tariff schedule defined with various time constraints.
This filter is provided by the Tariff Filter plugin.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Metadata Service The Service Name of the Metadata Service to obtain the tariff schedule from. See below for more information. Metadata Path The metadata path that will resolve the tariff schedule from the configured Metadata Service. Language An IETF BCP 47 language tag to parse the tariff data with. If not configured then the default system language will be assumed. First Match If enabled, then apply only the first tariff that matches a given datum date. If disabled, then apply all tariffs that match. Schedule Cache The amount of seconds to cache the tariff schedule obtained from the configured Metadata Service. Tariff Evaluator The Service Name of a Time-based Tariff Evaluator service to evaluate each tariff to determine if it should apply to a given datum. If not configured a default algorithm is used that matches all non-empty constraints in an inclusive manner, except for the time-of-day constraint which uses an exclusive upper bound."},{"location":"users/datum-filters/tariff/#metadata-service","title":"Metadata Service","text":"
SolarNode provides a User Metadata Service component that this filter can use for the Metadata Service setting. This allows you to configure the tariff schedule as user metadata in SolarNetwork and then SolarNode will download the schedule and use it as needed.
You must configure a SolarNetwork security token to use the User Metadata Service. We recommend that you create a Data security token in SolarNetwork with a limited security policy that includes an API Path of just /users/meta and a User Metadata Path of something granular like /pm/tariffs/**. This will give SolarNode access to just the tariff metadata under the /pm/tariffs metadata path.
The SolarNetwork API Explorer can be used to add the necessary tariff schedule metadata to your account. For example:
The tariff schedule obtained from the configured Metadata Service uses a simple CSV-based format that can be easily exported from a spreadsheet. Each row represents a rule that includes:
a set of time constraints that must be satisfied for the rule to be applied
a list of tariff rates to be added to datum when the constraints are satisfied
Include a header row
A header row is required because the tariff rate names are defined there. The first 4 column names are ignored.
The schedule consists of 4 time constraint columns followed by one or more tariff rate columns. Each constraint is represented as a range, in the form start - end. Whitespace is allowed around the - character. If the start and end are the same, the range may be shortened to just start. A range can be left empty to represent all values. The time constraint columns are:
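The range syntax can be sketched with a small parser (an illustration of the rules just described, not the plugin's actual parser):

```python
def parse_range(cell):
    """Parse a 'start - end' constraint cell into a (start, end) tuple."""
    cell = cell.strip()
    if not cell:
        return None  # an empty range matches all values
    if "-" in cell:
        start, end = (part.strip() for part in cell.split("-", 1))
        return (start, end)
    return (cell, cell)  # single value: start and end are the same
```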
Column Constraint Description 1 Month range An inclusive month range. Months can be specified as numbers (1-12) or abbreviations (Jan-Dec) or full names (January - December). When using text names case does not matter and they will be parsed using the Language setting. 2 Day range An inclusive day-of-month range. Days are specified as numbers (1-31). 3 Weekday range An inclusive day-of-week range. Weekdays can be specified as numbers (1-7) with Monday being 1 and Sunday being 7, or abbreviations (Mon-Sun) or full names (Monday - Sunday). When using text names case does not matter and they will be parsed using the Language setting. 4 Time range An inclusive - exclusive time-of-day range. The time can be specified as whole hour numbers (0-24) or HH:MM style (00:00 - 24:00).
Starting on column 5 of the tariff schedule are arbitrary rate values to add to datum when the corresponding constraints are satisfied. The name of the datum property is derived from the header row of the column, adapted according to the following rules:
change to lower case
replace any run of non-alphanumeric or underscore characters with a single underscore
remove any leading or trailing underscores
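These naming rules can be sketched as follows (an illustration only, not the plugin's actual implementation):

```python
import re

def rate_property_name(header):
    # 1) lower case; 2) collapse each run of characters that are not
    # letters or digits into a single underscore; 3) strip any
    # leading or trailing underscores
    name = re.sub(r"[^a-z0-9]+", "_", header.lower())
    return name.strip("_")
```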
Here are some examples of the header name to the equivalent property name:
Rate Header Name Datum Property Name TOU tou Foo Bar foo_bar This Isn't A Great Name! this_isn_t_a_great_name"},{"location":"users/datum-filters/tariff/#example-schedule","title":"Example schedule","text":"
Here's an example schedule with 4 rules and a single TOU rate (the * stands for all values):
Rule Month Day Weekday Time TOU 1 Jan-Dec * Mon-Fri 0-8 10.48 2 Jan-Dec * Mon-Fri 8-24 11.00 3 Jan-Dec * Sat-Sun 0-8 9.19 4 Jan-Dec * Sat-Sun 8-24 11.21
"},{"location":"users/datum-filters/throttle/","title":"Throttle Datum Filter","text":"
The Throttle Datum Filter provides a way to throttle entire datum over time, so that they are posted to SolarNetwork less frequently than a plugin that collects the data produces them. This can be useful if you need a plugin to collect data at a high frequency for use internally by SolarNode but don't need to save such high resolution of data in SolarNetwork. For example, a plugin that monitors a device and responds quickly to changes in the data might be configured to sample data every second, but you only want to capture that data once per minute in SolarNetwork.
The general idea for filtering datum is to configure rules that define which datum sources you want to filter, along with a time limit to throttle matching datum by. Any datum matching the sources that are captured faster than the time limit will be filtered and not uploaded to SolarNetwork.
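The throttling behaviour can be sketched like this (a simplified illustration, not the plugin's actual implementation):

```python
import time

class DatumThrottle:
    """Sketch of per-source-ID datum throttling."""

    def __init__(self, limit_seconds):
        self.limit = limit_seconds
        self.last = {}  # source ID -> timestamp of last accepted datum

    def accept(self, source_id, now=None):
        now = time.time() if now is None else now
        prev = self.last.get(source_id)
        if prev is not None and now - prev < self.limit:
            return False  # captured too soon: filter it out
        self.last[source_id] = now
        return True
```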
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Limit Seconds A throttle limit, in seconds, to apply to matching datum. The throttle limit is applied to datum by source ID. Before each datum is uploaded to SolarNetwork, the filter will check how long has elapsed since a datum with the same source ID was uploaded. If the elapsed time is less than the configured limit, the datum will not be uploaded."},{"location":"users/datum-filters/unchanged-property/","title":"Unchanged Property Filter","text":"
The Unchanged Property Filter provides a way to discard individual datum properties that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Datum Filter for a filter that can discard entire unchanging datum (at the source ID level).
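Conceptually the per-property filtering works like this sketch (simplified, and the class and method names are hypothetical, not the plugin's actual code):

```python
class UnchangedPropertyFilter:
    """Sketch: discard a property when its value is unchanged,
    unless the configured maximum age has elapsed."""

    def __init__(self, max_seconds):
        self.max = max_seconds
        self.last = {}  # (source ID, property) -> (value, timestamp)

    def keep(self, source_id, prop, value, now):
        key = (source_id, prop)
        prev = self.last.get(key)
        if prev is not None and prev[0] == value and (
                self.max <= 0 or now - prev[1] < self.max):
            return False  # unchanged within the window: discard
        self.last[key] = (value, now)  # time is relative to last kept value
        return True
```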
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Default Unchanged Max Seconds When greater than 0 then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). Use this setting to ensure a property is included occasionally, even if the property value has not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. Property Configurations A list of property settings."},{"location":"users/datum-filters/unchanged-property/#property-settings","title":"Property Settings","text":"
Use the + and - buttons to add/remove Property configurations.
Each property source mapping configuration contains the following settings:
Setting Description Property A regular expression pattern to match against datum property names. All matching properties will be filtered. Unchanged Max Seconds When greater than 0 then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). This can be used to override the filter-wide Default Unchanged Max Seconds setting, or left blank to use the default value."},{"location":"users/datum-filters/unchanged/","title":"Unchanged Datum Filter","text":"
The Unchanged Datum Filter provides a way to discard entire datum that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Property Filter for a filter that can discard individual unchanging properties within a datum stream.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Unchanged Max Seconds When greater than 0 then the maximum number of seconds to refrain from publishing an unchanged datum within a single datum stream. Use this setting to ensure a datum is included occasionally, even if the datum properties have not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. Property Pattern A property name pattern that limits the properties monitored for changes. Only property names that match this expression will be considered when determining if a datum differs from the previous datum within the datum stream."},{"location":"users/datum-filters/virtual-meter/","title":"Virtual Meter Datum Filter","text":"
The Virtual Meter Datum Filter provides a way to derive an accumulating \"meter reading\" value out of an instantaneous property value over time. For example, if you have an irradiance sensor that allows you to capture instantaneous W/m2 power values, you could configure a virtual meter to generate Wh/m2 energy values.
Each virtual meter works with a single input datum property, typically an instantaneous property. The derived accumulating datum property will be named after that property with the time unit suffix appended. For example, an instantaneous irradiance property using the Hours time unit would result in an accumulating irradianceHours property. The value is calculated as an average between the current and the previous instantaneous property values, multiplied by the amount of time that has elapsed between the two samples.
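The averaging calculation can be sketched for the Hours time unit (a simplified illustration of the calculation described above; the helper name is hypothetical and the guard reflects the Max Age setting described below):

```python
def advance_reading(prev_reading, prev_value, curr_value, prev_ms, curr_ms,
                    max_age_seconds=None):
    """Advance a virtual meter reading from two instantaneous samples."""
    elapsed_ms = curr_ms - prev_ms
    if max_age_seconds is not None and elapsed_ms / 1000 > max_age_seconds:
        # too much time elapsed: record the sample but do not advance
        return prev_reading
    hours = elapsed_ms / 3_600_000
    # average of the two samples, multiplied by the elapsed time
    return prev_reading + (prev_value + curr_value) / 2 * hours
```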
This filter is provided by the Standard Datum Filters plugin.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with ! to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a , delimiter, in which case at least one of the configured tags must match to apply the filter. Virtual Meters Configure as many virtual meters as you like, using the + and - buttons to add/remove meter configurations."},{"location":"users/datum-filters/virtual-meter/#virtual-meter-settings","title":"Virtual Meter Settings","text":"
The Virtual Meter settings define a single virtual meter.
Setting Description Property The name of the input datum property to derive the virtual meter values from. Property Type The type of the input datum property. Typically this will be Instantaneous but when combined with an expression an Accumulating property can be used. Reading Property The name of the output meter accumulating datum property to generate. Leave empty for a default name derived from Property and Time Unit. For example, an instantaneous irradiance property using the Hours time unit would result in an accumulating irradianceHours property. Time Unit The time unit to record meter readings as. This value affects the name of the virtual meter reading property if Reading Property is left blank: it will be appended to the end of Property Name. It also affects the virtual meter output reading values, as they will be calculated in this time unit. Max Age The maximum time allowed between samples where the meter reading can advance. In case the node is not collecting samples for a period of time, this setting prevents the plugin from calculating an unexpectedly large reading value jump. For example if a node was turned off for a day, the first sample it captures when turned back on would otherwise advance the reading as if the associated instantaneous property had been active over that entire time. With this restriction, the node will record the new sample date and value, but not advance the meter reading until another sample is captured within this time period. Decimal Scale A maximum number of digits after the decimal point to round to. Set to 0 to round to whole numbers. Track Only On Change When enabled, then only update the previous reading date if the new reading value differs from the previous one. Rolling Average Count A count of samples to average the property value from. When set to something greater than 1, then apply a rolling average of this many property samples and output that value as the instantaneous source property value. 
This has the effect of smoothing the instantaneous values to an average over the time period leading into each output sample. Defaults to 0 so no rolling average is applied. Add Instantaneous Difference When enabled, then include an output instantaneous property of the difference between the current and previous reading values. Instantaneous Difference Property The derived output instantaneous datum property name to use when Add Instantaneous Difference is enabled. By default this property will be derived from the Reading Property value with Diff appended. Reading Value You can reset the virtual meter reading value with this setting. Note this is an advanced operation. If you submit a value for this setting, the virtual meter reading will be reset to this value such that the next datum the reading is calculated for will use this as the current meter reading. This will impact the datum stream's reported aggregate values, so you should be very sure this is something you want to do. For example if the virtual meter was at 1000 and you reset it to 0 then that will appear as a -1000 drop in whatever the reading is measuring. If this occurs you can create a Reset Datum auxiliary record to accommodate the reset value. Expressions Configure as many expressions as you like, using the + and - buttons to add/remove expression configurations."},{"location":"users/datum-filters/virtual-meter/#virtual-meter-expression-settings","title":"Virtual Meter Expression Settings","text":"
A virtual meter can use expressions to customise how the output meter reading value is calculated. See the Expressions section for more information.
Setting Description Property The datum property to store the expression result in. This must match the Reading Property of a meter configuration. Keep in mind that if Reading Property is blank, the implied value is derived from Property and Time Unit. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/virtual-meter/#filter-parameters","title":"Filter parameters","text":"
When the virtual meter filter is applied to a given datum, it will generate the following filter parameters, which will be available to other filters that are applied to the same datum after this filter.
Parameter Description {inputPropertyName}_diff The difference between the current input property value and the previous input property value. The {inputPropertyName} part of the parameter name will be replaced by the actual input property name. For example irradiance_diff. {meterPropertyName}_diff The difference between the current output meter property value and the previous output meter property value. The {meterPropertyName} part of the parameter name will be replaced by the actual output meter property name. For example irradianceHours_diff."},{"location":"users/datum-filters/virtual-meter/#expressions","title":"Expressions","text":"
Expressions can be configured to calculate the output meter datum property, instead of using the default averaging algorithm. If an expression configuration exists with a Property that matches a configured (or implied) meter configuration Reading Property, then the expression will be invoked to generate the new meter reading value. See the Expressions guide for general expression language reference.
Warning
It is important to remember that the expression must calculate the next meter reading value. Typically this means it will calculate some differential value based on the amount of time that has elapsed and add that to the previous meter reading value.
The root object is a virtual meter expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
Property Type Description configVirtualMeterConfig A VirtualMeterConfig object for the virtual meter configuration the expression is evaluating for. datumGeneralNodeDatum A Datum object, populated with data from all property and virtual meter configurations. propsMap<String,Object> Simple Map based access to the properties in datum, and transform parameters, to simplify expressions. currDatelong The current datum timestamp, as a millisecond epoch number. prevDatelong The previous datum timestamp, as a millisecond epoch number. timeUnitsdecimal A decimal number of the difference between currDate and prevDate in the virtual meter configuration's Time Unit, rounded to at most 12 decimal digits. currInputdecimal The current input property value. prevInputdecimal The previous input property value. inputDiffdecimal The difference between the currInput and prevInput values. prevReadingdecimal The previous output meter property value.
The following methods are available:
Function Arguments Result Description has(name)Stringboolean Returns true if a property named name is defined. timeUnits(scale)intdecimal Like the timeUnits property but rounded to a specific number of decimal digits."},{"location":"users/datum-filters/virtual-meter/#expression-example-time-of-use-tariff-reading","title":"Expression example: time of use tariff reading","text":"
Imagine you'd like to track a time-of-use cost associated with the energy readings captured by an energy meter. The Time-based Tariff Datum Filter could be used to add a tou property to each datum, and then a virtual meter expression can be used to calculate a cost reading property. The cost property will be an accumulating property like any meter reading, so when SolarNetwork aggregates its value over time you will see the effective cost over each aggregate time period.
Here is a screen shot of the settings used for this scenario (note how the Reading Property value matches the Expression Property value):
The important settings to note are:
Setting Notes Virtual Meter - Property The input datum property is set to wattHours because we want to track changes in this property over time. Virtual Meter - Property Type We use Accumulating here because that is the type of property wattHours is. Virtual Meter - Reading Property The output reading property name. This must match the Expression - Property setting. Expression - Property This must match the Virtual Meter - Reading Property we want to evaluate the expression for. Expression - Property Type Typically this should be Accumulating since we are generating a meter reading style property. Expression - Expression The expression to evaluate. This expression looks for the tou property and when found the meter reading is incremented by the difference between the current and previous input wattHours property values multiplied by tou. If tou is not available, then the previous meter reading value is returned (leaving the reading unchanged).
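The expression logic in this scenario can be restated as a small Python sketch (the actual filter evaluates an expression in the configured expression language; this helper is purely illustrative):

```python
def next_cost_reading(prev_reading, input_diff_wh, tou=None):
    # mirrors the described logic: when tou is present, increment the
    # reading by the input Wh difference (as kWh) times the tariff;
    # otherwise leave the reading unchanged
    if tou is None:
        return prev_reading
    return prev_reading + input_diff_wh / 1000 * tou
```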
Assuming a datum sample with properties like the following:
Property Value tou11.00currDate1621380669005prevDate1621380609005timeUnits0.016666666667currInput6095574prevInput6095462inputDiff112prevReading1022.782
Then here are some example expressions and the results they would produce:
Expression Result Comment inputDiff / 10000.112 Convert the input Wh property difference to kWh. inputDiff / 1000 * tou1.232 Multiply the input kWh by the $/kWh tariff value to calculate the cost for the elapsed time period. prevReading + (inputDiff / 1000 * tou)1,024.014 Add the additional cost to the previous meter reading value to reach the new meter value."},{"location":"users/setup-app/","title":"Setup App","text":"
The SolarNode Setup App allows you to manage SolarNode through a web browser.
To access the Setup App, you need to know the network address of your SolarNode. In many cases you can try accessing http://solarnode/. If that does not work, you need to find the network address SolarNode is using.
Here is an example screen shot of the SolarNode Setup App:
You must log in to SolarNode to access its functions. The login credentials will have been created when you first set up SolarNode and associated it with your SolarNetwork account. The default Username will be your SolarNetwork account email address, and the password will have been randomly generated and shown to you.
Tip
You can change your SolarNode username and password after logging in. Note these credentials are not related, or tied to, your SolarNetwork login credentials.
The profile menu in the top-right of the Setup App gives you access to change your password, change your username, log out, and restart or reset SolarNode.
Tip
Your SolarNode credentials are not related, or tied to, your SolarNetwork login credentials. Changing your SolarNode username or password does not change your SolarNetwork credentials.
Choosing the Change Password menu item will take you to a form for changing your password. Fill in your current password and then your new password, then click the Submit Password button.
The Change Password form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
Choosing the Change Username menu item will take you to a form for changing your SolarNode username. Fill in your new username and your password, then click the Change Username button.
The Change Username form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
You can either restart or reboot SolarNode from the Restart SolarNode menu. A restart means the SolarNode app will restart, while a reboot means the entire SolarNodeOS device will shut down and boot up again (restarting SolarNode along the way).
You might need to restart SolarNode to pick up new plugins you've installed, and you might need to reboot SolarNode if you've attached new sensors or other devices that require operating system support.
You can perform a "factory reset" of SolarNode to remove all your custom settings, certificate, login credentials, and so on. You also have the option to preserve some SolarNodeOS settings like WiFi credentials if you like.
The Settings Backup & Restore section provides a way to manage Settings Files and Settings Resources, both of which are backups for the configured settings in SolarNode.
Warning
Settings Files and Settings Resources do not include the node's certificate, login credentials, or custom plugins. See the Full Backup & Restore section for managing "full" backups that do include those items.
The Export button allows you to download a Settings File with the currently active configuration.
The Import button allows you to upload a previously-downloaded Settings File.
The Settings Resource menu allows you to download specialized settings files, offered by some components in SolarNode. For example the Modbus Device Datum Source plugin offers a specialized CSV file format to make configuring those components easier.
The Auto backups area will have a list of links, each of which will let you download a Settings File that SolarNode automatically created. Each link shows you the date the settings backup was created.
The Full Backup & Restore section lets you manage SolarNode "full" backups. Each full backup contains a snapshot of the settings you have configured, the node's certificate, login credentials, custom plugins, and more.
The Backup Service shows a list of the available Backup Services. Each service has its own settings that must be configured for the service to operate. After changing any of the selected service's settings, click the Save Settings button to save those changes.
The Backup button allows you to create a new backup.
The Backups menu allows you to download or restore any available backup.
The Import button allows you to upload a previously downloaded backup file.
SolarNode supports configurable Backup Service plugins to manage the storage of backup resources.
## File System Backup Service
The File System Backup Service is the default Backup Service provided by SolarNode. It saves the backup onto the node itself. In order to be able to restore your settings if the node is damaged or lost, you must download a copy of a backup using the Download button, and save the file to a safe place.
Warning
If you do not download a copy of a backup, you run the risk of losing your settings and node certificate, making it impossible to restore the node in the event of a catastrophic hardware failure.
The configurable settings of the File System Backup Service are:
| Setting | Description |
|---------|-------------|
| Backup Directory | The folder (on the node) where the backups will be saved. |
| Copies | The number of backup copies to keep, before deleting the oldest backup. |

## S3 Backup Service
The S3 Backup Service creates cloud-based backups in AWS S3 (or any compatible provider). You must configure the credentials and S3 location details to use before any backups can be created.
Note
The S3 Backup Service requires the S3 Backup Service Plugin.
The configurable settings of the S3 Backup Service are:
| Setting | Description |
|---------|-------------|
| AWS Token | The AWS access token to authenticate with. |
| AWS Secret | The AWS access token secret to authenticate with. |
| AWS Region | The name of the Amazon region to use, for example `us-west-2`. |
| S3 Bucket | The name of the S3 bucket to use. |
| S3 Path | An optional root path to use for all backup data (typically a folder location). |
| Storage Class | A supported storage class, such as `STANDARD` (the default), `STANDARD_IA`, `INTELLIGENT_TIERING`, `REDUCED_REDUNDANCY`, and so on. |
| Copies | The number of backup copies to keep, before deleting the oldest backup. |
| Cache Seconds | The amount of time to cache backup metadata such as the list of available backups, in seconds. |

# Components
The Components page lists all the configurable multi-instance components available on your SolarNode. Multi-instance means you can configure any number of a given component, each with their own settings.
For example imagine you want to collect data from a power meter, solar inverter, and weather station, all of which use the Modbus protocol. To do that you would configure three instances of the Modbus Device component, one for each device.
Use the Manage button for any listed component to add or remove instances of that component.
An instance count badge appears next to any component with at least one instance configured.
The component management page is shown when you click the Manage button for a multi-instance component. Each component instance's settings are independent, allowing you to integrate with multiple copies of a device or service.
For example if you connected a Modbus power meter and a Modbus solar inverter to a node, you would create two Modbus Device component instances, and configure them with settings appropriate for each device.
The component management screen allows you to add, update, and remove component instances.
## Add new instance
Add new component instances by clicking the Add new X button in the top-right, where X is the name of the component you are managing. You will be given the opportunity to assign a unique identifier to the new component instance:
When creating a new component instance you can provide a short name to identify it with.
When you add more than one component instance, the identifiers appear as clickable buttons that allow you to switch between the setting forms for each component.
Component instance buttons let you switch between each component instance.
Each setting will include a button that will show you a brief description of that setting.
Click for brief setting information.
After making any change, an Active value label will appear, showing the currently active value for that setting.
After making changes to any component instance's settings, click the Save All Changes button in the top-left to commit those changes.
Save All Changes works across all component instances
You can safely switch between and make changes on multiple component instance settings before clicking the Save All Changes button: your changes across all instances will be saved.
## Remove or reset instances
At the bottom of each component instance are buttons that let you delete or reset that component instance.
Buttons to delete or reset component instance.
The Delete button will remove that component instance from the list; however, the settings associated with that instance are preserved. If you re-add an instance with the same identifier then the previous settings will be restored. You can think of the Delete button as disabling the component, giving you the option to "undo" the deletion if you like.
The Restore button will reset the component to its factory defaults, removing any settings you have customized on that instance. The instance remains visible and you can re-configure the settings as needed.
## Remove all instances
The Remove all button in the top-right of the page allows you to remove all component instances, including any customized settings on those instances.
Warning
The Remove all action will delete all your customized settings for all the component instances you are managing. When finished it will be as if you never configured this component before.
Remove all instances with the "Remove all" button.
You will be asked to confirm removing all instances:
Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. See the general Datum Filters section for more information about how datum filters work and what they are used for.
## Global Datum Filters
Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Click the Manage button next to any Global Datum Filter component to create, update, and remove instances of that filter.
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, each filter's result passed to the next available filter until all filters have been applied.
The Datum Queue section of the Datum Filters page shows you some processing statistics and has a couple of settings you can change:
| Setting | Description |
|---------|-------------|
| Delay | The minimum amount of time to delay processing datum after they have been added to the queue, in milliseconds. A small amount of delay allows parallel datum collection to get processed more reliably in time-based order. The default is 200 ms and usually does not need to be changed. |
| Datum Filter | The Service Name of a Datum Filter component to process datum with. See below for more information. |
The Datum Filter setting allows you to configure a single Datum Filter to apply to every datum captured in SolarNode. Since you can only configure one filter, it is very common to configure a Datum Filter Chain, where you can then configure any number of other filters to apply.
## Global Datum Filter Chain
The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
| Setting | Description |
|---------|-------------|
| Active Global Filters | A read-only list of any created Global Datum Filter component Service Name values. These filters are automatically applied, without needing to explicitly reference them in the Datum Filters list. |
| Available User Filters | A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to activate that filter. |
| Datum Filters | The list of Service Name values of User Datum Filter components to apply to datum. |

## User Datum Filters
User Datum Filters are not applied automatically: they must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain. This differs from Global Datum Filters, which are automatically applied to datum just before being uploaded to SolarNet.
Click the Manage button next to any User Datum Filter component to create, update, and remove instances of that filter.
The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file.
Warning
When SolarNode restarts all changes made in the Logger UI will be lost and the logger configuration will revert to whatever is configured in the logging configuration file.
The Logging page lists all the configured logger levels and lets you add new loggers and edit the existing ones using a simple form.
The SolarNode UI will show the list of active Operational Modes on the Settings > Operational Modes page. Click the + button to activate a mode; each active mode includes a button to deactivate it.
The main Settings page also shows a read-only view of the active modes:
SolarNode includes a Command Console page where troubleshooting commands from supporting plugins are displayed. The page shows a list of available command topics and lets you toggle the inclusion of each topic's commands at the bottom of the page.
The Modbus TCP Connection and Modbus Serial Connection components support publishing mbpoll commands under a modbus command topic. The mbpoll utility is included in SolarNodeOS; if not already installed you can install it by logging in to the SolarNodeOS shell and running the following command:
```shell
sudo apt install mbpoll
```
Modbus command logging must be enabled on each Modbus Connection component by toggling the CLI Publishing setting on.
Once CLI Publishing has been enabled, every Modbus request made on that connection will generate an equivalent mbpoll command, and those commands will be shown on the Command Console.
You can copy any logged command and paste that into a SolarNodeOS shell to execute the Modbus request and see the results.
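As an illustration, a logged command might look something like the following. The arguments here are hypothetical and will depend entirely on your connection settings; the `mbpoll` options used are the standard ones for addressing, register range, and register type:

```shell
# Hypothetical example: read 2 holding registers starting at register 100
# from Modbus/TCP device 192.168.1.50, Modbus unit ID 1, polling once.
mbpoll -a 1 -r 100 -c 2 -t 4 -1 192.168.1.50
```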
SolarNode runs on SolarNodeOS, a Debian Linux-based operating system. If you are already familiar with Debian Linux, or one of the other Linux distributions built from Debian like Ubuntu Linux, you will find it pretty easy to get around in SolarNodeOS.
## System User Account
SolarNodeOS ships with a solar user account that you can use to log into the operating system. The default password is solar but may have been changed by a system administrator.
Warning
The solar user account is not related to the account you log into the SolarNode Setup App with.
## Change system user account password
To change the system user account's password, use the passwd command.
Changing the system user account password
```
$ passwd
Changing password for solar.
Current password:
New password:
Retype new password:
passwd: password updated successfully
```
Tip
Changing the solar user's password is highly recommended when you first deploy a node.
Some commands require administrative permission. The solar user can execute arbitrary commands with administrative permission by prefixing the command with sudo. For example the reboot command will reboot SolarNodeOS, but requires administrative permission.
Run a command as a system administrator
```shell
$ sudo reboot
```
The sudo command will prompt you for the solar user's password and then execute the given command as the administrator user root.
The solar user can also become the root administrator user by way of the su command:
Gain system administrative privileges with su
```shell
$ sudo su -
```
Once you have become the root user you no longer need to use the sudo command, as you already have administrative permissions.
## Network Access with SSH
SolarNodeOS comes with an SSH service active, which allows you to remotely connect and access the command line, using any SSH client.
# Date and Time
SolarNodeOS includes date and time management functions through the timedatectl command. Run timedatectl status to view information about the current date and time settings.
Viewing the current date and time settings
```
$ timedatectl status
               Local time: Fri 2023-05-26 03:41:42 BST
           Universal time: Fri 2023-05-26 02:41:42 UTC
                 RTC time: n/a
                Time zone: Europe/London (BST, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
```
## Changing the local time zone
SolarNodeOS uses the UTC time zone by default. If you would like to change this, use the timedatectl set-timezone command.
You can list the available time zone names by running timedatectl list-timezones.
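For example, to change the time zone to New Zealand (the zone name here is illustrative; pick the one that matches your location from the list-timezones output):

```shell
sudo timedatectl set-timezone Pacific/Auckland
```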
## Internet time synchronization
SolarNodeOS uses the systemd-timesyncd service to synchronize the node's clock with internet time servers. Normally no configuration is necessary. You can check the status of the network time synchronization with timedatectl like:
```
$ timedatectl status
               Local time: Fri 2023-05-26 03:41:42 BST
           Universal time: Fri 2023-05-26 02:41:42 UTC
                 RTC time: n/a
                Time zone: Europe/London (BST, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
```
Warning
For internet time synchronization to work, SolarNode needs to access Network Time Protocol (NTP) servers, using UDP over port 123.
## Network time server configuration
The NTP servers that SolarNodeOS uses are configured in the /etc/systemd/timesyncd.conf file. The default configuration uses a pool of Debian servers, which should be suitable for most nodes. If you would like to change the configuration, edit the timesyncd.conf file and change the NTP= line, for example
Configuring the NTP servers to use
```ini
[Time]
NTP=my.ntp.example.com
```
## Setting the date and time
In order to manually set the date and time, NTP time synchronization must be disabled with timedatectl set-ntp false. Then you can run timedatectl set-time to set the date:
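For example (the date and time values here are illustrative):

```shell
sudo timedatectl set-ntp false
sudo timedatectl set-time "2023-05-26 15:30:00"
```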
SolarNodeOS uses the systemd-networkd service to manage network devices and their settings. A network device relates to a physical network hardware device or a software networking component, as recognized and named by the operating system. For example, the first available ethernet device is typically named eth0 and the first available WiFi device wlan0.
Network configuration is stored in .network files in the /etc/systemd/network directory. SolarNodeOS comes with default support for ethernet and WiFi network devices.
The default 10-eth.network file configures the default ethernet network eth0 to use DHCP to automatically obtain a network address, routing information, and DNS servers to use.
SolarNodeOS networks are configured to use DHCP by default. If you need to re-configure a network to use DHCP, change the configuration to look like this:
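A minimal DHCP configuration in standard systemd-networkd syntax is sketched below; the `eth0` device name is an example, and your file may contain additional options:

```ini
[Match]
Name=eth0

[Network]
DHCP=yes
```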
If you need to use a static network address, instead of DHCP, edit the network configuration file (for example, the 10-eth.network file for the ethernet network), and change it to look like this:
Ethernet network with static address configuration
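A sketch of such a static-address configuration, using standard systemd-networkd syntax (all addresses here are illustrative):

```ini
[Match]
Name=eth0

[Address]
Address=192.168.1.100/24

[Network]
DNS=192.168.1.1

[Route]
Gateway=192.168.1.1
```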
Use Name, DNS, Address, and Gateway values specific to your network. The same static configuration for a single address can also be specified in a slightly more condensed form, moving everything into the [Network] section:
Ethernet network with condensed single static address configuration
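A sketch of the condensed form, again with illustrative addresses:

```ini
[Match]
Name=eth0

[Network]
Address=192.168.1.100/24
DNS=192.168.1.1
Gateway=192.168.1.1
```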
The default 20-wlan.network file configures the default WiFi network wlan0 to use DHCP to automatically obtain a network address, routing information, and DNS servers to use. To configure the WiFi network SolarNode should connect to, run this command:
Configuring the SolarNode WiFi network
```shell
sudo dpkg-reconfigure sn-wifi
```
You will then be prompted to supply the following WiFi settings:
Country code, e.g. NZ
WiFi network name (SSID)
WiFi network password
Note about WiFi support
WiFi support is provided by the sn-wifi package, which may not be installed. See the Package Maintenance section for information about installing packages.
## WiFi Auto Access Point mode
For initial setup of the WiFi settings on a SolarNode, it can be helpful for SolarNode to create its own WiFi network as an access point. The sn-wifi-autoap@wlan0 service can be used for this. When enabled, it will monitor the WiFi network status, and when the WiFi connection fails for any reason it will enable a SolarNode WiFi network using a gateway IP address of 192.168.16.1. When the SolarNode access point is enabled, you can connect to that network from your own device and reach the Setup App at http://192.168.16.1/ or the command line via ssh solar@192.168.16.1.
The default 21-wlan-ap.network file configures the default WiFi network wlan0 to act as an Access Point
This service is not enabled by default. To enable it, run the following:
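Assuming the standard systemd service pattern, enabling and starting the service would look something like this:

```shell
# Assumed invocation, following the usual systemd unit naming convention.
sudo systemctl enable --now sn-wifi-autoap@wlan0.service
```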
Once enabled, if SolarNode cannot connect to the configured WiFi network, it will create its own SolarNode network. By default the password for this network is solarnode. The Access Point network configuration is defined in the /etc/network/wpa_supplicant-wlan0.conf file, in a section like this:
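A representative wpa_supplicant access-point section is sketched below; the values shown are illustrative, so consult the actual file on your node:

```
network={
    ssid="SolarNode"
    mode=2              # 2 = access point mode
    key_mgmt=WPA-PSK
    psk="solarnode"
}
```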
SolarNodeOS uses the nftables system to provide an IP firewall to SolarNode. By default only the following incoming TCP ports are allowed:
| Port | Description |
|------|-------------|
| 22 | SSH access |
| 80 | HTTP SolarNode UI |
| 8080 | HTTP SolarNode UI alternate port |

## Open additional IP ports
You can edit the /etc/nftables.conf file to add additional open IP ports as needed. A good place to insert new rules is after the lines that open ports 80 and 8080:
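For example, a hypothetical rule to also accept incoming TCP connections on port 8883 (say, for MQTT over TLS) could be inserted in the input chain near those lines:

```
# hypothetical additional rule: allow incoming TCP port 8883
tcp dport 8883 accept
```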
SolarNodeOS supports a wide variety of software packages. You can install new packages as well as apply package updates as they become available. The apt command performs these tasks.
For SolarNodeOS to know what packages, or package updates, are available, you need to periodically update the available package information. This is done with the apt update command:
Update package information
```shell
sudo apt update
```
The sudo command runs other commands with administrative privileges. It will prompt you for your user account password (typically the solar user's password).
To see if there are any package updates available, run apt list like this:
List packages with updates available
```shell
apt list --upgradable
```
If there are updates available, that will show them. You can apply all package updates with the apt upgrade command, like this:
Upgrade all packages
```shell
sudo apt upgrade
```
If you want to install an update for a specific package, use the apt install command instead.
Tip
The apt upgrade command will update existing packages and install packages that are required by those packages, but it will never remove an existing package. Sometimes you will want to allow packages to be removed during the upgrade process; to do that use the apt full-upgrade command.
## Search for packages
Use the apt search command to search for packages. By default this will match package names and their descriptions. You can search just for package names by including a --names-only argument.
Search for packages
```shell
# search for "name" across package names and descriptions
apt search name

# search for "name" across package names only
apt search --names-only name

# multiple search terms are logically "and"-ed together
apt search name1 name2
```
You can remove packages with the apt remove command. That command will preserve any system configuration associated with the package(s); if you would like to also remove that you can use the apt purge command.
Removing packages
```shell
sudo apt remove mypackage

# use `purge` to also remove configuration
sudo apt purge mypackage
```
SolarNode is managed as a systemd service. There are some shortcut commands to more easily manage the service.
| Command | Description |
|---------|-------------|
| `sn-start` | Start the SolarNode service. |
| `sn-restart` | Restart the SolarNode service. |
| `sn-status` | View status information about the SolarNode service (see if it is running or not). |
| `sn-stop` | Stop the SolarNode service. |
The sn-stop command requires administrative permissions, so you may be prompted for your system account password (usually the solar user's password).
## SolarNode service environment
You can modify the environment variables passed to the SolarNode service, as well as modify the Java runtime options used. You may want to do this, for example, to turn on Java remote debugging support or to give the SolarNode process more memory.
The systemd solarnode.service unit will load the /etc/solarnode/env.conf environment configuration file if it is present. You can define arbitrary environment variables using a simple key=value syntax.
SolarNodeOS ships with a /etc/solarnode/env.conf.example file you can use for reference.
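For example, to pass extra Java runtime options for remote debugging, env.conf might contain a line like the following. The variable name and port here are assumptions; check env.conf.example for the variables your version actually supports:

```ini
# Hypothetical example; confirm the variable name against env.conf.example.
JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:9142"
```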
Some SolarNode components can be configured from properties files. This type of configuration is
meant to be changed just once, when a SolarNode is first deployed, to alter some default
configuration value.
Imagine a component uses the configuration namespace com.example.service and supports a
configurable property named max-threads that accepts an integer value you would like to configure
as 4. You would create a com.example.service.cfg file like:
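Following the key/value style described above, the file would contain:

```ini
max-threads = 4
```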
The Datum Filter Chain is a User Datum Filter that you configure with a list, or chain,
of other User Datum Filters. When the Filter Chain executes, it executes each of the configured
Datum Filters, in the order defined. This filter can be used like any other Datum Filter, allowing
The Control Updater Datum Filter provides a way to update controls with the result of an
expression, optionally populating the expression result as a datum property.
The Downsample Datum Filter provides a way to down-sample higher-frequency datum samples into
lower-frequency (averaged) datum samples. The filter will collect a configurable number of samples
and then generate a down-sampled sample where an average of each collected instantaneous property is
included. In addition minimum and maximum values of each averaged property are added.
See the SolarNode Expressions guide for general expressions reference. The root object
is a DatumExpressionRoot that lets you treat all datum properties, and
filter parameters, as expression variables directly, along with the following properties:
Datum Filters are services that manipulate datum generated by SolarNode plugins
before they are uploaded to SolarNet. Datum Filters vary wildly in the functionality they provide;
here are some examples of the things they can do:
Global Datum Filter Chain, just before uploading to SolarNet
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum
are processed in the order they are added to the queue. Datum Filters are applied to each datum,
each filter's result passed to the next available filter until all filters have been applied.
Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is
created, it is automatically active and will be applied to datum. This differs from User Datum
Filters, which must be explicitly added to a service to be used, either
The Source ID pattern can define capture groups that will be provided to the Mapping template as numbered parameters, starting with {1}. For example, assuming an input datum property watts, then:
The Operational Mode Datum Filter provides a way to evaluate expressions to toggle
operational modes. When an expression evaluates to true the associated operational mode
is activated. When an expression evaluates to false the associated operational mode is
deactivated.
When configured the expression will never deactivate the operational mode directly. When
evaluating the given expression, if it evaluates to true the mode will be activated and configured
to deactivate after this many seconds. If the operation mode was already active, the expiration will
A property does not have to be populated. If you provide a Property name to populate, the value
of the datum property depends on property type configured:
See the Expressions section for general expressions reference. The expression
must evaluate to a boolean (true or false) result. When it evaluates to true the
configured operational mode will be activated. When it evaluates to false the operational mode
The Parameter Expression Datum Filter provides a way to generate filter parameters by
evaluating expressions against existing properties. The generated parameters will be available to
any further datum filters in the same filter chain.
See the Expressions section for general expressions reference. This filter
supports Datum Expressions that lets you treat all datum properties, and filter parameters, as
expression variables directly.
The Property Datum Filter provides a way to remove properties of datum. This can help if
some component generates properties that you don't actually need to use.
For example you might have a plugin that collects data from an AC power meter that capture power,
SolarNode provides a User Metadata Service component that this filter
can use for the Metadata Service setting. This allows you to configure the tariff schedule as user
metadata in SolarNetwork and then SolarNode will download the schedule and use it as needed.
The tariff schedule obtained from the configured Metadata Service uses a simple CSV-based format
that can be easily exported from a spreadsheet. Each row represents a rule that includes:
The Throttle Datum Filter provides a way to throttle entire datum over time, so that they
are posted to SolarNetwork less frequently than a plugin that collects the data produces them. This
can be useful if you need a plugin to collect data at a high frequency for use internally by
The Virtual Meter Datum Filter provides a way to derive an accumulating "meter reading" value
out of an instantaneous property value over time. For example, if you have an irradiance sensor that
allows you to capture instantaneous W/m2 power values, you could configure a virtual meter to
In SolarNetwork a datum is the fundamental time-stamped data structure collected by SolarNodes
and stored in SolarNet. It is a collection of properties associated with a specific information
source at a specific time.
A datum is uniquely identified by the three combined properties (nodeId, sourceId, created).
Source IDs are user-defined strings used to distinguish between different information sources within
a single node. For example, a node might collect data from an energy meter on source ID Meter and
a solar inverter on Solar. SolarNetwork does not place any restrictions on source ID values, other
security policies or datum queries. It is generally worthwhile
spending some time planning on a source ID taxonomy to use when starting a new project with
SolarNetwork.
The properties included in a datum object are known as datum samples. The samples are modeled as
a collection of named properties, for example the temperature and humidity properties in
the earlier example datum could be represented like this:
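A sketch of that representation (the classification keys shown, such as "i" for instantaneous and "s" for status, are assumptions, and the values are illustrative):

```json
{
  "i": { "temperature": 21.5, "humidity": 68 },
  "s": { "mode": "auto" }
}
```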
Many SolarNode components support a general "expressions" framework that can be used to calculate
values using a scripting language. SolarNode comes with the Spel scripting language by
default, so this guide describes that language.
Many SolarNode expressions are evaluated in the context of a datum, typically one captured
from a device SolarNode is collecting data from. In this context, the expression supports accessing
datum properties directly as expression variables, and some helpful functions are provided.
All datum properties with simple names can be referred to directly as variables. Here simple
just means a name that is also a legal variable name. The property
classifications do not matter in this context: the expression will look for
}
The expression can use the variables watts, wattHours, and mode.
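For example, a hypothetical Spel expression deriving kilowatts from those variables:

```
watts / 1000
```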
The following functions deal with datum streams. The latest() and offset() functions give you
access to recently-captured datum from any SolarNode source, so you can refer to any datum stream
being generated in SolarNode. They return another datum expression root object, which means you have
All the Datum Metadata functions like metadataAtPath(path) can be invoked
directly, operating on the node's own metadata instead of a datum stream's metadata.
Building on the previous example datum, let's assume an earlier datum for the same source ID had
been collected with these properties (the classifications have been omitted for brevity):
Other datum stream histories collected by SolarNode can also be accessed via the
offset(source,offset) function. Let's assume SolarNode is collecting a datum stream for the source
ID solar, and had amassed the following history, in newest-to-oldest order:
First you submit the invitation in the acceptance form.
Next you preview the invitation details.
Note
The expected SolarNetwork Service value shown in this step will be in.solarnetwork.net.
Finally, confirm the invitation. This step contacts SolarNetwork and completes the
When these steps are completed, SolarNetwork will have assigned your SolarNode a unique
identifier known as your Node ID. A random SolarNode login password will also have been
generated; you are given the opportunity to change it if you prefer.
Logging in SolarNode is configured in the /etc/solarnode/log4j2.xml file, which is in the log4j
configuration format. The default configuration in SolarNodeOS
sets the overall verbosity to INFO and logs to a temporary storage area
/run/solarnode/log/solarnode.log.
The Logger component outlined in the previous section allows a lot of flexibility to configure
what gets logged in SolarNode. Setting the level on a given namespace impacts that namespace as well
as all namespaces beneath it, meaning all other loggers that share the same namespace prefix.
The SolarNode UI supports configuring logger levels dynamically, without having to change the
logging configuration file. See the Setup App / Settings / Logging
page for more information.
The default SolarNode configuration automatically rotates log files based on size, and limits the
number of historic log files kept, so that its associated storage space is not filled up.
When a log file reaches the file limit, it is renamed to include a -i.log suffix, where i is an
By default SolarNode logs to temporary (RAM) storage that is discarded when the node reboots. The
configuration can be changed so that logs are written directly to persistent storage if you would
like to have the logs persisted across reboots, or would like to preserve more log history than can
Logging example: split across multiple files
Sometimes it can be useful to turn on verbose logging for some area of SolarNode, but have those
messages go to a different file so they don't clog up the main solarnode.log file. This can be
done by configuring additional appender configurations.
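A sketch of what such a configuration can look like in /etc/solarnode/log4j2.xml, assuming the standard log4j2 RollingFile appender (the file paths and logger names here are hypothetical):

```xml
<!-- Within <Appenders>: a dedicated rolling file for Modbus messages -->
<RollingFile name="Modbus" fileName="/var/log/solarnode/modbus.log"
    filePattern="/var/log/solarnode/modbus-%i.log">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p %c{1} - %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="1 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="3"/>
</RollingFile>

<!-- Within <Loggers>: route Modbus messages only to that file -->
<Logger name="net.solarnetwork.node.io.modbus" level="debug" additivity="false">
  <AppenderRef ref="Modbus"/>
</Logger>
```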
The various <AppenderRef> elements configure the appender name to write the messages to.
The various additivity="false" attributes disable appender additivity which means the log
message will only be written to one appender, instead of being written to all
turns on buffered logging, which means log messages are buffered in RAM
before being flushed to disk. This is more forgiving to the disk, at the expense of a delay before
the messages appear.
MQTT wire logging means the raw MQTT packets sent and received over MQTT connections will be logged
in an easy-to-read but very verbose format. For the MQTT wire logging to be enabled, it must be
activated with a special configuration file. Create the
/etc/solarnode/services/net.solarnetwork.common.mqtt.netty.cfg file with this content:
MQTT wire logs use a namespace prefix net.solarnetwork.mqtt. followed by the connection's host
name or IP address and port. For example SolarIn messages would use
net.solarnetwork.mqtt.queue.solarnetwork.net:8883 and SolarFlux messages would use
SolarNode will attempt to automatically configure networking access from a local DHCP server. For
many deployments the local network router is the DHCP server. SolarNode will identify itself with
the name solarnode, so in many cases you can reach the SolarNode setup app at http://solarnode/.
Your local network router is very likely to have a record of SolarNode's network connection. Log
into the router's management UI and look for a device named solarnode.
If your SolarNode supports connecting a keyboard and screen, you can log into the SolarNode command line
console and run ip -br addr to print out a brief summary of the current networking configuration:
$ ip -br addr
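The output is a one-line-per-device summary, something like the following (the addresses shown are illustrative):

```
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.1.50/24
wlan0            DOWN
```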
Tip
You can get more details by running ip addr (without the -br argument).
If your device will use WiFi for network access, you will need to configure the network name and credentials to use.
You can do that by creating a wpa_supplicant.conf file on the SolarNodeOS media (typically an SD card). For Raspberry Pi media, insert the SD card into your computer and the appropriate drive will be mounted for you.
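A minimal wpa_supplicant.conf sketch (the network name and password are placeholders, and the country code should match your location):

```
country=US
network={
    ssid="my-network-name"
    psk="my-network-password"
}
```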
SolarNode supports a concept called operational modes. Modes are simple names like quiet and
hyper that can be either active or inactive. Any number of modes can be active at a given
time. In theory both quiet and hyper could be active simultaneously. Modes can be named anything
. You can also specify exactly ! to match only when no mode is active.
Datum Filters also make use of operational modes, to toggle filters
on and off dynamically.
Operational modes can be activated with an associated expiration date. The mode will remain
active until the expiration date, at which time it will be automatically deactivated. A mode can
always be manually deactivated before its associated expiration date.
SolarNode supports placeholders in some setting values, such as datum data source IDs. These allow
you to define a set of parameters that can be consistently applied to many settings.
For example, imagine you manage many SolarNode devices across different buildings or sites. You'd
Placeholders are written using the form {name:default} where name is the placeholder name and
default is an optional default value to apply if no placeholder value exists for the given name.
If a default value is not needed, omit the colon so the placeholder becomes just {name}.
SolarNode will look for placeholder values defined in properties files stored in the
conf/placeholders.d directory by default. In SolarNodeOS this is the
/etc/solarnode/placeholders.d directory.
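For example, a hypothetical /etc/solarnode/placeholders.d/site.properties file could define values like:

```
site = wellington
building = b1
```

A setting such as /{site}/{building}/meter would then resolve to /wellington/b1/meter.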
SolarNode also supports storing placeholder values as Settings using the key
placeholder. The SolarUser /instruction/add API can be used with the
UpdateSetting topic to modify the placeholder values as needed. The type value is
SolarSSH is SolarNetwork's method of connecting to SolarNode devices over the internet even when
those devices are not directly reachable due to network firewalls or routing rules. It uses the
Secure Shell Protocol (SSH) to ensure your connection is private and secure.
Click the Connect button to initiate the SolarSSH connection process. You will be presented with
a dialog form to provide your SolarNodeOS system account
credentials. This is only necessary if you want to connect
Once connected, you can access the remote node's Setup App by clicking the Setup button in the
top-right corner of the window. This will open a new browser tab for the Setup App.
SolarSSH also supports a "direct" connection mode, that allows you to connect using standard ssh
client applications. This is a more advanced (and flexible) way of connecting to
your nodes, and even allows you to access other network services on the same network as the node
If you find yourself using SolarSSH connections frequently, a handy bash or zsh shell function
can help make the connection process easier to remember. Here's an example that gives you a
solarssh command that accepts a SolarNode ID argument, followed by any optional SSH arguments:
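A sketch of such a function (the connection endpoint, port, and username form shown are assumptions based on SolarSSH's direct connection mode, not the document's original example; verify them against the SolarSSH direct connection documentation):

```shell
# solarssh <node-id> [extra ssh args...]
# Assumes the SolarSSH "direct" username form user:nodeId and the
# ssh.solarnetwork.net:9022 endpoint -- verify against the SolarSSH docs.
solarssh() {
  local node="$1"
  shift
  ssh -p 9022 "solar:${node}@ssh.solarnetwork.net" "$@"
}
```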
To access the SolarNode Setup App, you can configure PuTTY to forward a port on your local machine to
localhost:8080 on the node. Once the SSH connection is established, you can open a browser to
http://localhost:PORT to access the SolarNode Setup App. You can use any available local port, for
Finally under the Session configuration category in PuTTY, configure the Host Name and Port to
connect to SolarNode. You can also provide a session name and click the Save button to save all
the settings you have configured, making it easy to load them in the future.
On the Session configuration category, click the Open button to establish the SolarSSH
connection. You might be prompted to confirm the identity of the ssh.solarnetwork.net server
first. Click the Accept button if this is the case.
Some SolarNode features require SolarNetwork Security Tokens to use as authentication credentials
for SolarNetwork services. Security Tokens are managed on the Security Tokens page in
SolarNetwork.
Data Security Tokens allow access to web services that query the data collected by your SolarNodes.
Click the "+" button in the Data Tokens section to generate a new security token. You will be shown
a form where you can give a name, description, and policy restrictions for the token.
The API Paths policy restricts the token to specific SolarNet API methods, based on their URL path. If this policy is
not included then all API methods are allowed.
The Minimum Aggregation policy restricts the token to a minimum data aggregation level. If this policy is not included,
or if the minimum level is set to None, data for any aggregation level is allowed.
The Node IDs policy restricts the token to specific node IDs. If this policy is not
included, then the token has access to all node IDs in your SolarNetwork account.
The Node Metadata policy restricts the token to specific portions of node-level metadata. If this policy is
not included then all node metadata is allowed.
The Refresh Allowed policy allows applications that are given a signing key, rather than the token's private password, to refresh the key as long as the token has not expired.
The Source IDs policy restricts the token to specific datum source IDs. If this policy is not
included, then the token has access to all source IDs in your SolarNetwork account.
The User Metadata policy restricts the token to specific portions of account-level metadata. If this policy is
not included then all user metadata is allowed.
SolarNode plugins support configurable properties, called settings. The SolarNode setup app allows
you to manage settings through simple web forms.
Settings can also be exported and imported in a CSV format, and can be applied when SolarNode starts
up with Auto Settings CSV files. Here is an example of a settings form in the
SolarNode setup app:
There are 3 settings represented in that screen shot:
Settings files are CSV (comma separated values) files, easily exported from spreadsheet applications
like Microsoft Excel or Google Sheets. The CSV must include a header row, which is skipped. All
other rows will be processed as settings.
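As a loosely sketched illustration only (the actual column names are defined by SolarNode's settings export format and should be checked against a real exported Settings File), a settings CSV might look like:

```
key,type,value,flags,modified
some.component.service,someSetting,42,0,2024-01-01 00:00:00
```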
Many plugins provide component factories which allow you to configure any number of instances of
that component. Each component instance is assigned a unique identifier when it is created. In
the SolarNode setup app, the component instance identifiers appear throughout the UI:
In the previous example CSV the Modbus I/O plugin allows you to
configure any number of Modbus connection components, each with their own specific settings. That is
an example of a component factory. The settings CSV will include a special row to indicate that such
SolarNode settings can also be configured through Auto Settings, applied when SolarNode starts up,
by placing Settings CSV files in the /etc/solarnode/auto-settings.d directory. These settings are
applied only if they don't already exist or the modified date in the settings file is newer than the
Choosing the Change Password menu item will take you to a form for changing your password. Fill in your current
password and then your new password, then click the Submit Password button.
Choosing the Change Username menu item will take you to a form for changing your SolarNode username. Fill in your current
password and then your new username, then click the Change Username button.
You can either restart or reboot SolarNode from the Restart SolarNode menu. A restart
means the SolarNode app will restart, while a reboot means the entire SolarNodeOS device will
shut down and boot up again (restarting SolarNode along the way).
You can perform a "factory reset" of SolarNode to remove all your custom settings, certificate,
login credentials, and so on. You also have the option to preserve some SolarNodeOS settings
like WiFi credentials if you like.
The Settings Backup & Restore section provides a way to manage Settings Files and Settings Resources, both of which are backups for the configured settings in SolarNode.
Warning
Settings Files and Settings Resources do not include the node's certificate, login credentials, or custom plugins. See the Full Backup & Restore section for managing "full" backups that do include those items.
The Export button allows you to download a Settings File with the currently active configuration.
The Import button allows you to upload a previously-downloaded Settings File.
The Settings Resource menu allows you to download specialized settings files, offered by some components in SolarNode. For example the Modbus Device Datum Source plugin offers a specialized CSV file format to make configuring those components easier.
The Auto backups area will have a list of links, each of which will let you download a Settings File that SolarNode automatically created. Each link shows you the date the settings backup was created.
The Full Backup & Restore section lets you manage SolarNode "full" backups. Each full backup contains a snapshot of the settings you have configured, the node's certificate, login credentials, custom plugins, and more.
The Backup Service shows a list of the available Backup Services. Each service has its own settings that must be configured for the service to operate. After changing any of the selected service's settings, click the Save Settings button to save those changes.
The Backup button allows you to create a new backup.
The Backups menu allows you to download or restore any available backup.
The Import button allows you to upload a previously downloaded backup file.
The File System Backup Service is the default Backup Service provided by SolarNode. It saves the backup onto the node itself. In order to be able to restore your settings if the node is damaged or lost, you must download a copy of a backup using the Download button, and save the file to a safe place.
Warning
If you do not download a copy of a backup, you run the risk of losing your settings and node certificate, making it impossible to restore the node in the event of a catastrophic hardware failure.
The configurable settings of the File System Backup Service are:
Backup Directory: The folder (on the node) where the backups will be saved.
Copies: The number of backup copies to keep, before deleting the oldest backup.
The S3 Backup Service creates cloud-based backups in AWS S3 (or any compatible provider). You must configure the credentials and S3 location details to use before any backups can be created.
The Components page lists all the configurable multi-instance components available on your SolarNode. Multi-instance means you can configure any number of a given component, each with their own settings.
For example imagine you want to collect data from a power meter, solar inverter, and weather station, all of which use the Modbus protocol. To do that you would configure three instances of the Modbus Device component, one for each device.
Use the Manage button for any listed component to add or remove instances of that component.
An instance count badge appears next to any component with at least one instance configured.
The component management page is shown when you click the Manage button for a multi-instance
component. Each component instance's settings are independent, allowing you to integrate with
multiple copies of a device or service.
The component management screen allows you to add, update, and remove component instances.
Add new component instances by clicking the Add new X button in the top-right, where X
is the name of the component you are managing. You will be given the opportunity to assign a unique
identifier to the new component instance:
When creating a new component instance you can provide a short name to identify it with.
When you add more than one component instance, the identifiers appear as clickable buttons that allow
you to switch between the setting forms for each component.
Component instance buttons let you switch between each component instance.
Each setting will include a button that will show you a brief description of that setting.
Click for brief setting information.
After making any change, an Active value label will appear, showing the currently active value for that setting.
After making changes to any component instance's settings, click the Save All Changes button in the top-left to commit those changes.
Save All Changes works across all component instances
You can safely switch between and make changes on multiple component instance settings before clicking the Save All Changes button: your changes across all instances will be saved.
The Remove all button in the top-right of the page allows you to remove all component instances, including any customized settings on those instances.
Warning
The Remove all action will delete all your customized settings for all the component instances you are managing. When finished it will be as if you never configured this component before.
Remove all instances with the "Remove all" button.
You will be asked to confirm removing all instances:
Datum Filters are services that manipulate datum generated by SolarNode plugins before they
are uploaded to SolarNet. See the general Datum Filters section
for more information about how datum filters work and what they are used for.
Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is
created, it is automatically active and will be applied to datum. This differs from User Datum
Filters, which must be explicitly added to a service to be used, either
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum
are processed in the order they are added to the queue. Datum Filters are applied to each datum,
with each filter's result passed to the next available filter until all filters have been applied.
User Datum Filters are not applied automatically: they must be explicitly added to a service to
be used, either directly or indirectly with a Datum Filter Chain.
This differs from Global Datum Filters which are automatically applied to
The Settings section in SolarNode Setup is where you can configure all available SolarNode settings.
The SolarNode UI will show the list of active Operational Modes on the
Settings > Operational Modes page. Click the + button to activate modes, and the
button to deactivate an active mode.
SolarNode includes a Command Console page where troubleshooting commands from supporting plugins are
displayed. The page shows a list of available command topics and lets you toggle the inclusion of
each topic's commands at the bottom of the page.
The Modbus TCP Connection and Modbus Serial Connection
components support publishing mbpoll commands under a modbus command topic. The mbpoll
utility is included in SolarNodeOS; if not already installed you can install it by logging in to the
SolarFlux is the name of a real-time cloud-based service for datum using a
publish/subscribe integration model. SolarNode supports publishing datum to SolarFlux and your own
applications can subscribe to receive datum messages as they are published.
SolarFlux is based on MQTT. To integrate with SolarFlux you use an MQTT client application or library.
See the SolarFlux Integration Guide for more information.
Each datum message is published by default as a CBOR encoded map, which is essentially a binary
JSON object, to an MQTT topic based on the datum's source ID.
The map keys are the datum property names. You can configure a Datum Encoder to encode datum
The EventAdmin Appender is supported, and log events are turned into a datum
stream and published to SolarFlux. The log timestamps are used as the datum timestamps.
The topic assigned to log events is log/ with the log name appended. Period characters (.) in
the log name are replaced with slash characters (/). For example, a log name
net.solarnetwork.node.datum.modbus.ModbusDatumDataSource will be turned into the topic
log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource.
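The name-to-topic mapping described above is a simple character substitution, sketched here in shell:

```shell
# Convert a logger name into its SolarFlux log topic by
# replacing every '.' with '/' and prefixing "log/".
name="net.solarnetwork.node.datum.modbus.ModbusDatumDataSource"
topic="log/$(echo "$name" | tr '.' '/')"
echo "$topic"
# prints: log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource
```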
The SolarFlux Upload Service ships with default settings that work out-of-the-box without any
configuration. There are many settings you can change to better suit your needs, however.
By default if the connection to the SolarFlux server is down for any reason, all messages that would
normally be published to the server will be discarded. This is suitable for most applications that
rely on SolarFlux to view real-time status updates only, and SolarNode uploads datum to SolarNet for
SolarNodeOS includes date and time management functions through the timedatectl
command. Run timedatectl status to view information about the current date and time settings.
SolarNodeOS uses the systemd-timesyncd service to synchronize the node's clock
with internet time servers. Normally no configuration is necessary. You can check the status of the network
time synchronization with timedatectl like:
The NTP servers that SolarNodeOS uses are configured in the /etc/systemd/timesyncd.conf
file. The default configuration uses a pool of Debian servers, which should be suitable for most nodes.
If you would like to change the configuration, edit the timesyncd.conf file and change the NTP= line,
In order to manually set the date and time, NTP time synchronization must be disabled with timedatectl set-ntp false.
Then you can run timedatectl set-time to set the date:
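For example (these commands require administrative permission, and the date value shown is illustrative):

```
sudo timedatectl set-ntp false
sudo timedatectl set-time "2024-07-01 12:30:00"
```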
SolarNode runs on SolarNodeOS, a Debian Linux-based operating system. If you are already
familiar with Debian Linux, or one of the other Linux distributions built from Debian like
Ubuntu Linux, you will find it pretty easy to get around in SolarNodeOS.
SolarNodeOS ships with a solar user account that you can use to log into the operating system.
The default password is solar but may have been changed by a system administrator.
Some commands require administrative permission. The solar user can execute arbitrary commands
with administrative permission by prefixing the command with sudo. For example the reboot command will
reboot SolarNodeOS, but requires administrative permission.
SolarNodeOS uses the systemd-networkd service to manage network devices and their settings.
A network device relates to a physical network hardware device or a software networking component, as recognized
and named by the operating system. For example, the first available ethernet device is typically named eth0 and
the first available WiFi device wlan0.
Network configuration is stored in .network files in the /etc/systemd/network
directory. SolarNodeOS comes with default support for ethernet and WiFi network devices.
The default 10-eth.network file configures the default ethernet network eth0 to use DHCP to automatically
obtain a network address, routing information, and DNS servers to use.
SolarNodeOS networks are configured to use DHCP by default. If you need to re-configure a network to
use DHCP, change the configuration to look like this:
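A DHCP configuration sketch in systemd-networkd's .network format (the Name= value assumes the eth0 device):

```
[Match]
Name=eth0

[Network]
DHCP=yes
```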
If you need to use a static network address, instead of DHCP, edit the network configuration file
(for example, the 10-eth.network file for the ethernet network), and change it to look like this:
Ethernet network with static address configuration
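A static-address sketch in the same .network format (the addresses shown are examples only and must be adapted to your network):

```
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1
```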
The default 20-wlan.network file configures the default WiFi network wlan0 to use DHCP to
automatically obtain a network address, routing information, and DNS servers to use. To configure
the WiFi network SolarNode should connect to, run this command:
Package
Maintenance section for information about installing packages.
For initial setup of the WiFi settings on a SolarNode it can be helpful for SolarNode to create
its own WiFi network, as an access point. The sn-wifi-autoap@wlan0 service can be used for this.
When enabled, it will monitor the WiFi network status, and when the WiFi connection fails for any
You can edit the /etc/nftables.conf file to add additional open IP ports as needed. A good place
to insert new rules is after the lines that open ports 80 and 8080:
SolarNodeOS supports a wide variety of software packages. You can install new packages as
well as apply package updates as they become available. The apt command performs these tasks.
For SolarNodeOS to know what packages, or package updates, are available, you need to periodically
update the available package information. This is done with the apt update command:
Update package information
sudo apt update
administrative privileges.
It will prompt you for your user account password (typically the solar user).
Use the apt search command to search for packages. By default this will match package names and
their descriptions. You can search just for package names by including a --names-only argument.
Search for packages
# search for "name" across package names and descriptions
# multiple search terms are logically "and"-ed together
apt search name1 name2
You can remove packages with the apt remove command. That command will preserve any system
configuration associated with the package(s); if you would like to also remove that you can
use the apt purge command.
You can modify the environment variables passed to the SolarNode service, as well as modify
the Java runtime options used. You may want to do this, for example, to turn on Java
remote debugging support or to give the SolarNode process more memory.