5. Home
The Home page provides you with some links to resources and shows live datum-collecting activity.

As datum are collected on the node, they will appear in the Datum Properties section:
SolarNode Handbook
This handbook provides guides and reference documentation about SolarNode, the distributed computing part of SolarNetwork.
SolarNode is the Swiss Army knife for IoT monitoring and control. It is deployed on inexpensive computers in homes, buildings, vehicles, and even EV chargers, connected to any number of sensors, meters, building automation systems, and more. There are several SolarNode icons in the image below. Can you spot them all?
User Guide

For information on getting and using SolarNode, see the User Guide.

Developer Guide

For information on working on the SolarNode codebase, such as writing a plugin, see the Developer Guide.

This section of the handbook is geared towards developers working with the SolarNode codebase to develop a plugin.

SolarNode source

The core SolarNode platform code is available on GitHub.

Getting started

See the SolarNode Development Guide to set up your own development environment for writing SolarNode plugins.

SolarNode debugging

You can enable Java remote debugging for SolarNode on a node device for SolarNode plugin development or troubleshooting by modifying the SolarNode service environment. Once enabled, you can use SSH port forwarding to enable Java remote debugging in your Java IDE of choice.
To enable Java remote debugging, copy the /etc/solarnode/env.conf.example file to /etc/solarnode/env.conf. The example already includes this support, using port 9142 for the debugging port. Then restart the solarnode service:
```sh
$ cp /etc/solarnode/env.conf.example /etc/solarnode/env.conf
$ sn-restart
```
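For background, JVM remote debugging is conventionally enabled with the standard JDWP agent option. The fragment below illustrates what such an environment setting generally looks like — the variable name and exact contents here are an assumption for illustration, not the literal contents of SolarNode's env.conf.example, so check that file for the actual settings:

```sh
# Hypothetical illustration: the standard JVM JDWP agent option, configured to
# listen for a debugger on port 9142 without suspending startup.
JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=9142"
```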
Then you can use ssh from your development machine to forward a local port to the node's 9142 port, and then have your favorite IDE establish a remote debugging connection on your local port. For example, on a Linux or macOS machine you could forward port 8000 to a node's port 9142 like this:
```sh
$ ssh -L8000:localhost:9142 solar@solarnode
```
Once that ssh connection is established, your IDE can be used to connect to localhost:8000 for a remote Java debugging session.
The SolarNode platform has been designed to be highly modular and dynamic, by using a plugin-based architecture. The plugin system SolarNode uses is based on the OSGi specification, where plugins are implemented as OSGi bundles. SolarNode can be thought of as a collection of OSGi bundles that, when combined and deployed together in an OSGi framework like Eclipse Equinox, form the complete SolarNode platform.
To summarize: everything in SolarNode is a plugin!
OSGi bundles and Eclipse plug-ins
Each OSGi bundle in SolarNode comes configured as an Eclipse IDE (or simply Eclipse) plug-in project. Eclipse refers to OSGi bundles as "plug-ins" and its OSGi development tools are collectively known as the Plug-in Development Environment, or PDE for short. We use the terms bundle and plug-in and plugin somewhat interchangeably in the SolarNode project. Although Eclipse is not actually required for SolarNode development, it is very convenient.
Practically speaking a plugin, which is an OSGi bundle, is simply a Java JAR file that includes the Java code implementing your plugin and some OSGi metadata in its Manifest. For example, here are the contents of the net.solarnetwork.common.jdt plugin JAR:
```
META-INF/MANIFEST.MF
net/solarnetwork/common/jdt/Activator.class
net/solarnetwork/common/jdt/ClassLoaderNameEnvironment.class
net/solarnetwork/common/jdt/CollectingCompilerRequestor.class
net/solarnetwork/common/jdt/CompilerUtils.class
net/solarnetwork/common/jdt/JdtJavaCompiler.class
net/solarnetwork/common/jdt/MapClassLoader.class
net/solarnetwork/common/jdt/ResourceCompilationUnit.class
```
Services

Central to the plugin architecture SolarNode uses is the concept of a service. In SolarNode a service is defined by a Java interface. A plugin can advertise a service to the SolarNode runtime. Plugins can lookup a service in the SolarNode runtime and then invoke the methods defined on it.
The advertising/lookup framework SolarNode uses is provided by OSGi. OSGi provides several ways to manage services. In SolarNode the most common is to use Blueprint XML documents to both publish services (advertise) and acquire references to services (lookup).
Gemini Blueprint Compendium

The Gemini Blueprint implementation provides some useful extensions that SolarNode makes frequent use of. To use the extensions you need to declare the Gemini Blueprint Compendium namespace in your Blueprint XML file, like this:
Gemini Blueprint Compendium XML declaration:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:osgix="http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium
        http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
```
This example declares the Gemini Blueprint Compendium XML namespace prefix osgix and a related Spring Beans namespace prefix beans. You will see those used throughout SolarNode.
Managed Properties

Managed Properties provide a way to use the Configuration Admin service to manage user-configurable service properties. Conceptually it is like linking a class to a set of dynamic runtime Settings: Configuration Admin provides change event and persistence APIs for the settings, and the Managed Properties applies those settings to the linked service.
Imagine you have a service class MyService with a configurable property level. We can make that property a managed, persistable setting by adding a <osgix:managed-properties> element to our Blueprint XML, like this:
```java
package com.example;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import net.solarnetwork.node.service.support.BaseIdentifiable;
import net.solarnetwork.settings.SettingSpecifier;
import net.solarnetwork.settings.SettingSpecifierProvider;
import net.solarnetwork.settings.SettingsChangeObserver;
import net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;

/**
 * My super-duper service.
 *
 * @author matt
 * @version 1.0
 */
public class MyService extends BaseIdentifiable
        implements SettingsChangeObserver, SettingSpecifierProvider {

    private int level;

    @Override
    public String getSettingUid() {
        return "com.example.MyService"; // (1)!
    }

    @Override
    public List<SettingSpecifier> getSettingSpecifiers() {
        return Collections.singletonList(
                new BasicTextFieldSettingSpecifier("level", String.valueOf(0)));
    }

    @Override
    public void configurationChanged(Map<String, Object> properties) {
        // the settings have changed; do something
    }

    public int getLevel() {
        return level;
    }

    public void setLevel(int level) {
        this.level = level;
    }

}
```
```
title = Super-duper Service
desc = This service does it all.

level.key = Level
level.desc = This one goes to 11.
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:osgix="http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium
        http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <service>
        <interfaces>
            <value>net.solarnetwork.settings.SettingSpecifierProvider</value>
        </interfaces>
        <bean class="com.example.MyService"><!-- (1)! -->
            <osgix:managed-properties
                persistent-id="com.example.MyService"
                autowire-on-update="true"
                update-method="configurationChanged"/>
            <property name="messageSource">
                <bean class="org.springframework.context.support.ResourceBundleMessageSource">
                    <property name="basenames" value="com.example.MyService"/>
                </bean>
            </property>
        </bean>
    </service>

</blueprint>
```
1. Nest the <osgix:managed-properties> element within the actual service <bean> element you want to apply the managed settings on. The persistent-id attribute value matches the getSettingUid() value in MyService.java.
2. The autowire-on-update attribute toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set it to false and provide an update-method if you want to handle changes yourself.
3. The update-method attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.

When this plugin is deployed in SolarNode, the component will appear on the main Settings page and offer a configurable Level setting, like this:
Managed Service Factory

The Managed Service Factory service provides a way to use the Configuration Admin service to manage multiple copies of a user-configurable service's properties. Conceptually it is like linking a class to a set of dynamic runtime Settings, but you can create as many independent copies as you like. Configuration Admin provides change event and persistence APIs for the settings, and the Managed Service Factory applies those settings to each linked service instance.
Imagine you have a service class ManagedService with a configurable property level. We can make that property a factory of managed, persistable settings by adding a <osgix:managed-service-factory> element to our Blueprint XML, like this:
```java
package com.example;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import net.solarnetwork.node.service.support.BaseIdentifiable;
import net.solarnetwork.settings.SettingSpecifier;
import net.solarnetwork.settings.SettingSpecifierProvider;
import net.solarnetwork.settings.SettingsChangeObserver;
import net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;

/**
 * My super-duper managed service.
 *
 * @author matt
 * @version 1.0
 */
public class ManagedService extends BaseIdentifiable
        implements SettingsChangeObserver, SettingSpecifierProvider {

    private int level;

    @Override
    public String getSettingUid() {
        return "com.example.ManagedService"; // (1)!
    }

    @Override
    public List<SettingSpecifier> getSettingSpecifiers() {
        return Collections.singletonList(
                new BasicTextFieldSettingSpecifier("level", String.valueOf(0)));
    }

    @Override
    public void configurationChanged(Map<String, Object> properties) {
        // the settings have changed; do something
    }

    public int getLevel() {
        return level;
    }

    public void setLevel(int level) {
        this.level = level;
    }

}
```
```
title = Super-duper Managed Service
desc = This managed service does it all.

level.key = Level
level.desc = This one goes to 11.
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:osgix="http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium
        http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basenames" value="com.example.ManagedService"/>
    </bean>

    <service interface="net.solarnetwork.settings.SettingSpecifierProviderFactory"><!-- (1)! -->
        <bean class="net.solarnetwork.settings.support.BasicSettingSpecifierProviderFactory">
            <property name="displayName" value="Super-duper Managed Service"/>
            <property name="factoryUid" value="com.example.ManagedService"/><!-- (2)! -->
            <property name="messageSource" ref="messageSource"/>
        </bean>
    </service>

    <osgix:managed-service-factory
        factory-pid="com.example.ManagedService"
        autowire-on-update="true"
        update-method="configurationChanged"><!-- (3)! -->
        <osgix:interfaces>
            <beans:value>net.solarnetwork.settings.SettingSpecifierProvider</beans:value>
        </osgix:interfaces>
        <osgix:service-properties>
            <beans:entry key="settingPid" value="com.example.ManagedService"/>
        </osgix:service-properties>
        <bean class="com.example.ManagedService">
            <property name="messageSource" ref="messageSource"/>
        </bean>
    </osgix:managed-service-factory>

</blueprint>
```
1. The SettingSpecifierProviderFactory service is what makes the managed service factory appear as a component in the SolarNode Settings UI.
2. The factoryUid defines the Configuration Admin factory PID and the Settings UID.
3. Nest the <osgix:managed-service-factory> element in your Blueprint XML, with a nested <bean> "template" within it. The template bean will be instantiated for each service instance instantiated by the Managed Service Factory. The factory-pid attribute value matches the getSettingUid() value in ManagedService.java and the factoryUid declared in #2. The autowire-on-update attribute toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set it to false and provide an update-method if you want to handle changes yourself. The update-method attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.

When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page like this:
After clicking on the Manage button next to this component, the Settings UI allows you to create any number of instances of the component, each with their own setting values. Here is a screen shot showing two instances having been created:
Blueprint

SolarNode supports the OSGi Blueprint Container Specification so plugins can declare their service dependencies and register their services by way of an XML file deployed with the plugin. If you are familiar with the Spring Framework's XML configuration, you will find Blueprint very similar. SolarNode uses the Eclipse Gemini implementation of the Blueprint specification, which is directly derived from Spring Framework.
Note
This guide will not document the full Blueprint XML syntax. Rather, it will attempt to showcase the most common parts used in SolarNode. Refer to the Blueprint Container Specification for full details of the specification.
Example

Imagine you are working on a plugin and have a com.example.Greeter interface you would like to register as a service for other plugins to use, and an implementation of that service in com.example.HelloGreeter that relies on the Placeholder Service provided by SolarNode:
```java
package com.example;

public interface Greeter {

    /**
     * Greet something with a given name.
     *
     * @param name the name to greet
     * @return the greeting
     */
    String greet(String name);

}
```
```java
package com.example;

import net.solarnetwork.node.service.PlaceholderService;

public class HelloGreeter implements Greeter {

    private final PlaceholderService placeholderService;

    public HelloGreeter(PlaceholderService placeholderService) {
        super();
        this.placeholderService = placeholderService;
    }

    @Override
    public String greet(String name) {
        return placeholderService.resolvePlaceholders(
                String.format("Hello %s, from {myName}.", name),
                null);
    }
}
```
Assuming the PlaceholderService will resolve {myName} to Office Node, we would expect the greet() method to run like this:
```java
Greeter greeter = resolveGreeterService();
String result = greeter.greet("Joe");
// result is "Hello Joe, from Office Node."
```
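The expected behavior can be sketched with a plain-Java stand-in for the placeholder resolution — the GreeterSketch class and its tiny resolver below are hypothetical illustrations, not the real PlaceholderService API:

```java
import java.util.Map;

/** Hypothetical stand-in showing how HelloGreeter's output is assembled. */
public class GreeterSketch {

    // Assumed placeholder values a node might be configured with.
    private final Map<String, String> placeholders = Map.of("myName", "Office Node");

    /** Very small placeholder resolver; the real PlaceholderService is richer. */
    public String resolvePlaceholders(String template) {
        String result = template;
        for (Map.Entry<String, String> e : placeholders.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public String greet(String name) {
        return resolvePlaceholders(String.format("Hello %s, from {myName}.", name));
    }
}
```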
In the plugin we then need to:

1. Declare a reference to the net.solarnetwork.node.service.PlaceholderService to pass to the HelloGreeter(PlaceholderService) constructor
2. Register the HelloGreeter component as a com.example.Greeter service in the SolarNode platform

Here is an example Blueprint XML document that does both:
Blueprint XML example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

    <!-- Declare a reference (lookup) to the PlaceholderService -->
    <reference id="placeholderService"
        interface="net.solarnetwork.node.service.PlaceholderService"/>

    <service interface="com.example.Greeter">
        <bean class="com.example.HelloGreeter">
            <argument ref="placeholderService"/>
        </bean>
    </service>

</blueprint>
```
Blueprint XML Resources

Blueprint XML documents are added to a plugin's OSGI-INF/blueprint classpath location. A plugin can provide any number of Blueprint XML documents there, but often a single file is sufficient and a common convention in SolarNode is to name it module.xml.
A minimal Blueprint XML file is structured like this:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

    <!-- Plugin components configured here -->

</blueprint>
```
Service References

To make use of services registered by SolarNode plugins, you declare a reference to that service so you may refer to it elsewhere within the Blueprint XML. For example, imagine you wanted to use the Placeholder Service in your component. You would obtain a reference to that like this:
```xml
<reference id="placeholderService"
    interface="net.solarnetwork.node.service.PlaceholderService"/>
```
The id attribute allows you to refer to this service elsewhere in your Blueprint XML, while interface declares the fully-qualified Java interface of the service you want to use.
Components

Components in Blueprint are Java classes you would like instantiated when your plugin starts. They are declared using a <bean> element in Blueprint XML. You can assign each component a unique identifier using an id attribute, and then you can refer to that component in other components.
Imagine an example component class com.example.MyComponent:
```java
package com.example;

import net.solarnetwork.node.service.PlaceholderService;

public class MyComponent {

    private final PlaceholderService placeholderService;
    private int minimum;

    public MyComponent(PlaceholderService placeholderService) {
        super();
        this.placeholderService = placeholderService;
    }

    public String go() {
        return PlaceholderService.resolvePlaceholders(placeholderService,
                "{building}/temp", null);
    }

    public int getMinimum() {
        return minimum;
    }

    public void setMinimum(int minimum) {
        this.minimum = minimum;
    }
}
```
Here is how that component could be declared in Blueprint:
```xml
<reference id="placeholderService"
    interface="net.solarnetwork.node.service.PlaceholderService"/>

<bean id="myComponent" class="com.example.MyComponent">
    <argument ref="placeholderService"/>
    <property name="minimum" value="10"/>
</bean>
```
Constructor Arguments

If your component requires any constructor arguments, they can be specified with nested <argument> elements in Blueprint. The <argument> value can be specified as a reference to another component using a ref attribute whose value is the id of that component, or as a literal value using a value attribute.
For example:
```xml
<bean id="myComponent" class="com.example.MyComponent">
    <argument ref="placeholderService"/>
    <argument value="10"/>
</bean>
```
Property Accessors

You can configure mutable class properties on a component with nested <property name=""> elements in Blueprint. A mutable property is a Java setter method. For example, an int property minimum would be associated with a Java setter method public void setMinimum(int value).
The <property> value can be specified as a reference to another component using a ref attribute whose value is the id of that component, or as a literal value using a value attribute.
For example:
```xml
<bean id="myComponent" class="com.example.MyComponent">
    <argument ref="placeholderService"/>
    <property name="minimum" value="10"/>
</bean>
```
Start/Stop Hooks

Blueprint can invoke a method on your component when it has finished instantiating and configuring the object (when the plugin starts), and another when it destroys the instance (when the plugin is stopped). You simply provide the name of the method you would like Blueprint to call in the init-method and destroy-method attributes of the <bean> element. For example:
```xml
<bean id="myComponent" class="com.example.MyComponent"
    init-method="startup"
    destroy-method="shutdown"/>
```
Service Registration

You can make any component available to other plugins by registering the component with a <service> element that declares what interface(s) your component provides. Once registered, other plugins can make use of your component, for example by declaring a <reference> to your component class in their Blueprint XML.
Note
You can only register Java interfaces as services, not classes.
For example, imagine a com.example.Startable interface like this:
```java
package com.example;

public interface Startable {

    /**
     * Start!
     *
     * @return the result
     */
    String go();
}
```
We could implement that interface in the MyComponent class, like this:
```java
package com.example;

public class MyComponent implements Startable {

    @Override
    public String go() {
        return "Gone!";
    }
}
```
We can register MyComponent as a Startable service using a <service> element like this in Blueprint:
```xml
<service interface="com.example.Startable">
    <!-- The service implementation is nested directly within -->
    <bean class="com.example.MyComponent"/>
</service>
```
```xml
<!-- The service implementation is referenced indirectly... -->
<service ref="myComponent" interface="com.example.Startable"/>

<!-- ... to a bean with a matching id attribute -->
<bean id="myComponent" class="com.example.MyComponent"/>
```
Multiple Service Interfaces

You can advertise any number of service interfaces that your component supports, by nesting an <interfaces> element within the <service> element, in place of the interface attribute. For example:
```xml
<service ref="myComponent">
    <interfaces>
        <value>com.example.Startable</value>
        <value>com.example.Stopable</value>
    </interfaces>
</service>
```
Export service packages

For a registered service to be of any use to another plugin, the package the service is defined in must be exported by the plugin hosting that package. That is because the plugin wishing to add a reference to the service will need to import the package in order to use it.
For example, the plugin that hosts the com.example.service.MyService service would need a manifest file that includes an Export-Package attribute similar to:
```
Export-Package: com.example.service;version="1.0.0"
```
Configuration Admin

TODO
Life cycle

Plugins in SolarNode can be added to and removed from the platform at any time without restarting the SolarNode process, because of the Life Cycle process OSGi manages. The life cycle of a plugin consists of a set of states and OSGi will transition a plugin's state over the course of the plugin's life.
The available plugin states are:

| State | Description |
| --- | --- |
| INSTALLED | The plugin has been successfully added to the OSGi framework. |
| RESOLVED | All package dependencies that the bundle needs are available. This state indicates that the plugin is either ready to be started or has stopped. |
| STARTING | The plugin is being started by the OSGi framework, but it has not finished starting yet. |
| ACTIVE | The plugin has been successfully started and is running. |
| STOPPING | The plugin is being stopped by the OSGi framework, but it has not finished stopping yet. |
| UNINSTALLED | The plugin has been removed by the OSGi framework. It cannot change to another state. |

The possible changes in state can be visualized in the following state-change diagram:
Faisal.akeel, Public domain, via Wikimedia Commons
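The life-cycle rules can be modeled as a small Java enum whose transition map mirrors the state-change diagram. This is a simplified, hypothetical illustration of the rules — not an OSGi API; the real framework exposes states as int constants on org.osgi.framework.Bundle:

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

/** Simplified sketch of the OSGi bundle life-cycle states and transitions. */
public enum BundleState {
    INSTALLED, RESOLVED, STARTING, ACTIVE, STOPPING, UNINSTALLED;

    // Allowed next states, per the state-change diagram (simplified).
    private static final Map<BundleState, Set<BundleState>> NEXT = Map.of(
            INSTALLED, EnumSet.of(RESOLVED, UNINSTALLED),
            RESOLVED, EnumSet.of(STARTING, INSTALLED, UNINSTALLED),
            STARTING, EnumSet.of(ACTIVE),
            ACTIVE, EnumSet.of(STOPPING),
            STOPPING, EnumSet.of(RESOLVED),
            UNINSTALLED, EnumSet.noneOf(BundleState.class));

    public boolean canTransitionTo(BundleState next) {
        return NEXT.get(this).contains(next);
    }
}
```

Note how UNINSTALLED is terminal: it has no outgoing transitions, matching the table above.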
Activator

A plugin can opt in to receiving callbacks for the start/stop state transitions by providing an org.osgi.framework.BundleActivator implementation and declaring that class in the Bundle-Activator manifest attribute. This can be useful when a plugin needs to initialize some resources when the plugin is started, and then release those resources when the plugin is stopped.
```java
public interface BundleActivator {

    /**
     * Called when this bundle is started so the Framework can perform the
     * bundle-specific activities necessary to start this bundle.
     *
     * @param context The execution context of the bundle being started.
     */
    public void start(BundleContext context) throws Exception;

    /**
     * Called when this bundle is stopped so the Framework can perform the
     * bundle-specific activities necessary to stop the bundle.
     *
     * @param context The execution context of the bundle being stopped.
     */
    public void stop(BundleContext context) throws Exception;
}
```
```java
package com.example.activator;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    @Override
    public void start(BundleContext bundleContext) throws Exception {
        // initialize resources here
    }

    @Override
    public void stop(BundleContext bundleContext) throws Exception {
        // clean up resources here
    }
}
```
```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Example Activator
Bundle-SymbolicName: com.example.activator
Bundle-Version: 1.0.0
Bundle-Activator: com.example.activator.Activator
Import-Package: org.osgi.framework;version="[1.3,2.0)"
```
Tip
Often the component life cycle hooks available in Blueprint are sufficient and no BundleActivator is necessary.
Manifest

As SolarNode plugins are OSGi bundles, which are Java JAR files, every plugin automatically includes a META-INF/MANIFEST.MF file as defined in the Java JAR File Specification. The MANIFEST.MF file is where OSGi metadata is included, turning the JAR into an OSGi bundle (plugin).
Here is an example snippet from the SolarNode net.solarnetwork.common.jdt plugin:
Example plugin MANIFEST.MF:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Java Compiler Service (JDT)
Bundle-SymbolicName: net.solarnetwork.common.jdt
Bundle-Description: Java compiler using Eclipse JDT.
Bundle-Version: 3.0.0
Bundle-Vendor: SolarNetwork
Bundle-RequiredExecutionEnvironment: JavaSE-1.8
Bundle-Activator: net.solarnetwork.common.jdt.Activator
Export-Package:
 net.solarnetwork.common.jdt;version="2.0.0"
Import-Package:
 net.solarnetwork.service;version="[1.0,2.0)",
 org.eclipse.jdt.core.compiler,
 org.eclipse.jdt.internal.compiler,
 org.osgi.framework;version="[1.5,2.0)",
 org.slf4j;version="[1.7,2.0)",
 org.springframework.context;version="[5.3,6.0)",
 org.springframework.core.io;version="[5.3,6.0)",
 org.springframework.util;version="[5.3,6.0)"
```
The rest of this document will describe this structure in more detail.
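Because a plugin manifest is a standard JAR manifest, its attributes can be read with the JDK's own java.util.jar.Manifest parser. A minimal sketch (the ManifestDemo class name is an assumption for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.jar.Manifest;

/** Sketch: read an OSGi attribute out of MANIFEST.MF text with the JDK parser. */
public class ManifestDemo {

    public static String symbolicName(String manifestText) {
        try {
            // Manifest parses the standard "Name: value" main-attributes section
            Manifest mf = new Manifest(new ByteArrayInputStream(
                    manifestText.getBytes(StandardCharsets.UTF_8)));
            return mf.getMainAttributes().getValue("Bundle-SymbolicName");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same getValue() call works for any of the attributes described below, such as Bundle-Version or Import-Package.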
Versioning

In OSGi, plugins are always versioned and Java packages may be versioned. Versions follow Semantic Versioning rules, generally using this syntax:
```
major.minor.patch
```
In the manifest example you can see the plugin version 3.0.0 declared in the Bundle-Version attribute:
```
Bundle-Version: 3.0.0
```
The example also declares (exports) a net.solarnetwork.common.jdt package for other plugins to import (use) as version 2.0.0, in the Export-Package attribute:
```
Export-Package:
 net.solarnetwork.common.jdt;version="2.0.0"
```
The example also uses (imports) a versioned package net.solarnetwork.service, using a version range greater than or equal to 1.0 and less than 2.0, and an unversioned package org.eclipse.jdt.core.compiler, in the Import-Package attribute:
Import-Package:\nnet.solarnetwork.service;version=\"[1.0,2.0)\",\norg.eclipse.jdt.core.compiler,\n
Tip
Some plugins, and core Java system packages, do not declare package versions. You should declare package versions in your own plugins.
"},{"location":"developers/osgi/manifest/#version-ranges","title":"Version ranges","text":"Some OSGi version attributes allow version ranges to be declared, such as the Import-Package
attribute. A version range is a comma-delimited lower,upper
specifier. Square brackets are used to represent inclusive values and round brackets represent exclusive values. A value can be omitted to represent an unbounded value. Here are some examples:
[1.0,2.0)
1.0.0 \u2264 x < 2.0.0 Greater than or equal to 1.0.0
and less than 2.0.0
(1,3)
1.0.0 < x < 3.0.0 Greater than 1.0.0
and less than 3.0.0
[1.3.2,)
1.3.2 \u2264 x Greater than or equal to 1.3.2
1.3.2
1.3.2 \u2264 x Greater than or equal to 1.3.2
(shorthand notation) Implied unbounded range
An inclusive lower, unbounded upper range can be specified using a shorthand notation of just the lower bound, like 1.3.2
.
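The bracket semantics described above can be sketched in plain Java. The `VersionRanges` class below is a hypothetical helper written only to illustrate the rules; real OSGi code should use the framework's own `org.osgi.framework.VersionRange` class instead.

```java
// Hypothetical helper illustrating OSGi version-range semantics;
// the real implementation is org.osgi.framework.VersionRange.
public class VersionRanges {

	/** Parse up to three numeric segments; missing segments default to 0, so "1.0" == "1.0.0". */
	static int[] parse(String v) {
		int[] parts = new int[3];
		String[] s = v.split("\\.");
		for ( int i = 0; i < s.length && i < 3; i++ ) {
			parts[i] = Integer.parseInt(s[i]);
		}
		return parts;
	}

	/** Compare two dotted version strings numerically, e.g. "1.3.2" vs "2.0". */
	static int compare(String a, String b) {
		int[] va = parse(a), vb = parse(b);
		for ( int i = 0; i < 3; i++ ) {
			if ( va[i] != vb[i] ) {
				return Integer.compare(va[i], vb[i]);
			}
		}
		return 0;
	}

	/** Check a version against a range like "[1.0,2.0)", "[1.3.2,)", or shorthand "1.3.2". */
	public static boolean includes(String range, String version) {
		if ( !range.startsWith("[") && !range.startsWith("(") ) {
			// shorthand notation: inclusive lower bound, unbounded upper
			return compare(version, range) >= 0;
		}
		boolean lowerInclusive = range.startsWith("[");
		boolean upperInclusive = range.endsWith("]");
		String[] bounds = range.substring(1, range.length() - 1).split(",", -1);
		int lower = compare(version, bounds[0]);
		if ( lowerInclusive ? lower < 0 : lower <= 0 ) {
			return false;
		}
		if ( bounds[1].isEmpty() ) {
			return true; // unbounded upper, e.g. "[1.3.2,)"
		}
		int upper = compare(version, bounds[1]);
		return upperInclusive ? upper <= 0 : upper < 0;
	}

	public static void main(String[] args) {
		System.out.println(includes("[1.0,2.0)", "1.7.2")); // true
		System.out.println(includes("[1.0,2.0)", "2.0.0")); // false: exclusive upper bound
		System.out.println(includes("(1,3)", "1.0.0"));     // false: exclusive lower bound
	}
}
```

Note this sketch ignores version qualifiers (such as `1.0.0.RELEASE`), which OSGi does support.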
Each plugin must provide the following attributes:
Attribute Example DescriptionBundle-ManifestVersion
2 declares the OSGi bundle manifest version; always 2
Bundle-Name
Awesome Data Source a concise human-readable name for the plugin Bundle-SymbolicName
com.example.awesome a machine-readable, universally unique identifier for the plugin Bundle-Version
1.0.0 the plugin version Bundle-RequiredExecutionEnvironment
JavaSE-1.8 a required OSGi execution environment"},{"location":"developers/osgi/manifest/#recommended-attributes","title":"Recommended attributes","text":"It is recommended that each plugin provide the following attributes:
Attribute Example DescriptionBundle-Description
An awesome data source that collects awesome data. a longer human-readable description of the plugin Bundle-Vendor
ACME Corp the name of the entity or organisation that authored the plugin"},{"location":"developers/osgi/manifest/#common-attributes","title":"Common attributes","text":"Other common manifest attributes are:
Attribute Example DescriptionBundle-Activator
com.example.awesome.Activator a fully-qualified Java class name that implements the org.osgi.framework.BundleActivator
interface, to handle plugin lifecycle events; see Activator for more information Export-Package
net.solarnetwork.common.jdt;version=\"2.0.0\" a package export list Import-Package
net.solarnetwork.service;version=\"[1.0,2.0)\" a package dependency list"},{"location":"developers/osgi/manifest/#package-dependencies","title":"Package dependencies","text":"A plugin must declare the Java packages it directly uses in an Import-Package
attribute. This attribute accepts a comma-delimited list of package specifications that take the basic form of:
PACKAGE;version=\"VERSION\"\n
For example here is how the net.solarnetwork.service
package, versioned between 1.0
and 2.0
, would be declared:
Import-Package: net.solarnetwork.service;version=\"[1.0,2.0)\"\n
Direct package use means your plugin has code that imports a class from a given package. Classes in an imported package may import other packages indirectly; you do not need to import those packages as well. For example if you have code like this:
import net.solarnetwork.service.OptionalService;\n
Then you will need to import the net.solarnetwork.service
package.
Note
The SolarNode platform automatically imports core Java packages like java.*
so you do not need to declare those.
Also note that in some scenarios a package used by a class in an imported package becomes a direct dependency. For example, when you extend a class from an imported package and that class imports other packages, those other packages may become direct dependencies that you also need to import.
"},{"location":"developers/osgi/manifest/#child-package-dependencies","title":"Child package dependencies","text":"If you import a package in your plugin, any child packages that may exist are not imported as well. You must import every individual package you need to use in your plugin.
For example to use both net.solarnetwork.service
and net.solarnetwork.service.support
you would have an Import-Package
attribute like this:
Import-Package:\nnet.solarnetwork.service;version=\"[1.0,2.0)\",\nnet.solarnetwork.service.support;version=\"[1.1,2.0)\"\n
"},{"location":"developers/osgi/manifest/#package-exports","title":"Package exports","text":"A plugin can export any package it provides, making the resources within that package available to other plugins to import and use. Declare exported packages with an Export-Package
attribute. This attribute takes a comma-delimited list of versioned package specifications. Note that version ranges are not supported: you must declare the exact version of the package you are exporting. For example:
Export-Package: com.example.service;version=\"1.0.0\"\n
Note
Exported packages should not be confused with services. Exported packages give other plugins access to the classes and any other resources within those packages, but do not provide services to the platform. You can use Blueprint to register services. Keep in mind that any service a plugin registers must exist within an exported package to be of any use.
"},{"location":"developers/services/backup-manager/","title":"Backup Manager","text":"The net.solarnetwork.node.backup.BackupManager
API provides SolarNode with a modular backup system composed of Backup Services that provide storage for backup data and Backup Resource Providers that contribute data to be backed up and support restoring backed up data.
The Backup Manager coordinates the creation and restoration of backups, delegating most of its functionality to the active Backup Service. The active Backup Service can be controlled through configuration.
The Backup Manager also supports exporting and importing Backup Archives, which are just .zip
archives using a defined folder structure to preserve all backup resources within a single backup.
This design of the SolarNode backup system makes it easy for SolarNode plugins to contribute resources to backups, without needing to know where or how the backup data is ultimately stored.
What goes in a Backup?
In SolarNode a Backup will contain all the critical settings that are unique to that node, such as:
The Backup Manager can be configured under the net.solarnetwork.node.backup.DefaultBackupManager
configuration namespace:
backupRestoreDelaySeconds
15 The number of seconds to delay restoring a backup, when a backup has previously been marked for restoration. This delay gives the platform time to boot up and register the backup resource providers and other services required to perform the restore. preferredBackupServiceKey
net.solarnetwork.node.backup.FileSystemBackupService The key of the preferred (active) Backup Service to use."},{"location":"developers/services/backup-manager/#backup","title":"Backup","text":"The net.solarnetwork.node.backup.Backup
API defines a unique backup, created by a Backup Service. Backups are identified by a unique key assigned by the Backup Service that creates them.
A Backup
does not itself provide access to any of the resources associated with the backup. Instead, the getBackupResources()
method of BackupService
returns them.
The Backup Manager supports exporting and importing specially formed .zip
archives that contain a complete Backup. These archives are a convenient way to transfer settings from one node to another, and can be used to restore SolarNode on a new device.
The net.solarnetwork.node.backup.BackupResource
API defines a unique item within a Backup. A Backup Resource could be a file, a database table, or anything that can be serialized to a stream of bytes. Backup Resources are both provided by, and restored with, Backup Resource Providers so it is up to the Provider implementation to know how to generate and then restore the Resources it manages.
The net.solarnetwork.node.backup.BackupResourceProvider
API defines a service that can both generate and restore Backup Resources. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
When a Backup is created, all Backup Resource Provider services registered in SolarNode will be asked to contribute their Backup Resources, using the getBackupResources()
method.
When a Backup is restored, Backup Resources will be passed to their associated Provider with the restoreBackupResource(BackupResource)
method.
The net.solarnetwork.node.backup.BackupService
API defines the bulk of the SolarNode backup system. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
To create a Backup, use the performBackup(Iterable<BackupResource>)
method, passing in the collection of Backup Resources to include.
To list the available Backups, use the getAvailableBackups(Backup)
method.
To view a single Backup, use the backupForKey(String)
method.
To list the resources in a Backup, use the getBackupResources(Backup)
method.
SolarNode provides the net.solarnetwork.node.backup.FileSystemBackupService
default Backup Service implementation that saves Backup Archives to the node's own file system.
The net.solarnetwork.node.backup.s3
plugin provides the net.solarnetwork.node.backup.s3.S3BackupService
Backup Service implementation that saves all Backup data to AWS S3.
A plugin can publish a net.solarnetwork.service.CloseableService
and SolarNode will invoke the closeService()
method on it when that service is destroyed. This can be useful in some situations, to make sure resources are freed when a service is no longer needed.
Blueprint does provide the destroy-method
stop hook that can be used in many situations, however Blueprint does not allow this in all cases. For example a <bean>
nested within a <service>
element does not allow a destroy-method
:
<service interface=\"com.example.MyService\">\n<!-- destroy-method not allowed here: -->\n<bean class=\"com.example.MyComponent\"/>\n</service>\n
If MyComponent
also implemented CloseableService
then we can achieve the desired stop hook like this:
<service>\n<interfaces>\n<value>com.example.MyService</value>\n<value>net.solarnetwork.service.CloseableService</value>\n</interfaces>\n<bean class=\"com.example.MyComponent\"/>\n</service>\n
Note
Note that the above example CloseableService
is not strictly needed, as the same effect could be achieved by un-nesting the <bean>
from the <service>
element, like this:
<bean id=\"myComponent\" class=\"com.example.MyComponent\" destroy-method=\"close\"/>\n<service ref=\"myComponent\" interface=\"com.example.MyService\"/>\n
There are situations where un-nesting is not possible, which is where CloseableService
can be helpful.
The DatumDataSourcePollManagedJob
class is a Job Service implementation that can be used to let users schedule the generation of datum from a Datum Data Source. Typically this is configured as a Managed Service Factory so users can configure any number of job instances, each with their own settings.
Here is a typical example of a DatumDataSourcePollManagedJob
, in a fictional MyDatumDataSource
:
package com.example;\n\nimport java.time.Instant;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.Map;\nimport net.solarnetwork.domain.datum.DatumSamples;\nimport net.solarnetwork.node.domain.datum.EnergyDatum;\nimport net.solarnetwork.node.domain.datum.NodeDatum;\nimport net.solarnetwork.node.domain.datum.SimpleEnergyDatum;\nimport net.solarnetwork.node.service.DatumDataSource;\nimport net.solarnetwork.node.service.support.DatumDataSourceSupport;\nimport net.solarnetwork.settings.SettingSpecifier;\nimport net.solarnetwork.settings.SettingSpecifierProvider;\nimport net.solarnetwork.settings.SettingsChangeObserver;\nimport net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;\n\n/**\n * Super-duper datum data source.\n *\n * @author matt\n * @version 1.0\n */\npublic class MyDatumDataSource extends DatumDataSourceSupport\nimplements DatumDataSource, SettingSpecifierProvider, SettingsChangeObserver {\n\nprivate String sourceId;\nprivate int level;\n\n@Override\npublic Class<? 
extends NodeDatum> getDatumType() {\nreturn EnergyDatum.class;\n}\n\n@Override\npublic EnergyDatum readCurrentDatum() {\nfinal String sourceId = resolvePlaceholders(this.sourceId);\nif ( sourceId == null || sourceId.isEmpty() ) {\nreturn null;\n}\nSimpleEnergyDatum d = new SimpleEnergyDatum(sourceId, Instant.now(), new DatumSamples());\nd.setWatts(level);\nreturn d;\n}\n\n@Override\npublic void configurationChanged(Map<String, Object> properties) {\n// the settings have changed; do something\n}\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.MyDatumDataSource\";\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Arrays.asList(new BasicTextFieldSettingSpecifier(\"sourceId\", null),\nnew BasicTextFieldSettingSpecifier(\"level\", String.valueOf(0)));\n}\n\npublic String getSourceId() {\nreturn sourceId;\n}\n\npublic void setSourceId(String sourceId) {\nthis.sourceId = sourceId;\n}\n\npublic int getLevel() {\nreturn level;\n}\n\npublic void setLevel(int level) {\nthis.level = level;\n}\n\n}\n
title = Super-duper Datum Data Source\ndesc = This managed datum data source does it all.\n\nschedule.key = Schedule\nschedule.desc = The schedule to execute the job at. \\\nCan be either a number representing a frequency in <b>milliseconds</b> \\\nor a <a href=\"{0}\">cron expression</a>, for example <code>0 * * * * *</code>.\n\nsourceId.key = Source ID\nsourceId.desc = The source ID to use.\n\nlevel.key = Level\nlevel.desc = This one goes to 11.\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxmlns:osgix=\"http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\"\nxmlns:beans=\"http://www.springframework.org/schema/beans\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd\n http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans.xsd\">\n\n<!-- Service References -->\n\n<bean id=\"datumMetadataService\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.DatumMetadataService\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n\n<bean id=\"datumQueue\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.DatumQueue\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n\n<bean id=\"placeholderService\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.PlaceholderService\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n\n<bean id=\"messageSource\" class=\"org.springframework.context.support.ResourceBundleMessageSource\">\n<property name=\"basenames\" value=\"com.example.MyDatumDataSource\"/>\n</bean>\n\n<bean id=\"jobMessageSource\" class=\"net.solarnetwork.support.PrefixedMessageSource\">\n<property name=\"prefix\" value=\"datumDataSource.\"/>\n<property name=\"delegate\" 
ref=\"messageSource\"/>\n</bean>\n\n<!-- Managed Service Factory for Datum Data Source -->\n\n<service interface=\"net.solarnetwork.settings.SettingSpecifierProviderFactory\">\n<bean class=\"net.solarnetwork.settings.support.BasicSettingSpecifierProviderFactory\">\n<property name=\"displayName\" value=\"Super-duper Datum Data Source\"/>\n<property name=\"factoryUid\" value=\"com.example.MyDatumDataSource\"/><!-- (1)! -->\n<property name=\"messageSource\" ref=\"messageSource\"/>\n</bean>\n</service>\n\n<osgix:managed-service-factory factory-pid=\"com.example.MyDatumDataSource\"\nautowire-on-update=\"true\" update-method=\"configurationChanged\">\n<osgix:interfaces>\n<beans:value>net.solarnetwork.node.job.ManagedJob</beans:value>\n</osgix:interfaces>\n<bean class=\"net.solarnetwork.node.job.SimpleManagedJob\"\ninit-method=\"serviceDidStartup\" destroy-method=\"serviceDidShutdown\">\n<argument>\n<bean class=\"net.solarnetwork.node.job.DatumDataSourcePollManagedJob\">\n<property name=\"datumMetadataService\" ref=\"datumMetadataService\"/>\n<property name=\"datumQueue\" ref=\"datumQueue\"/>\n<property name=\"datumDataSource\">\n<bean class=\"com.example.MyDatumDataSource\"><!-- (2)! -->\n<property name=\"datumMetadataService\" ref=\"datumMetadataService\"/>\n<property name=\"messageSource\" ref=\"jobMessageSource\"/>\n<property name=\"placeholderService\" ref=\"placeholderService\"/>\n</bean>\n</property>\n</bean>\n</argument>\n<argument value=\"0 * * * * ?\"/>\n<property name=\"serviceProviderConfigurations\"><!-- (3)! 
-->\n<map>\n<entry key=\"datumDataSource\">\n<bean class=\"net.solarnetwork.node.job.SimpleServiceProviderConfiguration\">\n<property name=\"interfaces\">\n<list>\n<value>net.solarnetwork.node.service.DatumDataSource</value>\n</list>\n</property>\n<property name=\"properties\">\n<map>\n<entry key=\"datumClassName\" value=\"net.solarnetwork.node.domain.datum.EnergyDatum\"/>\n</map>\n</property>\n</bean>\n</entry>\n</map>\n</property>\n</bean>\n</osgix:managed-service-factory>\n\n</blueprint>\n
factoryUid
is the same value as the getSettingsUid()
value in MyDatumDataSource.java
ManagedJob
that the Managed Service Factory registers.When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page and then the component settings UI will look like this:
"},{"location":"developers/services/datum-data-source/","title":"Datum Data Source","text":"The DatumDataSource
API defines the primary way for plugins to generate datum instances from devices or services integrated with SolarNode, through a request-based API. The MultiDatumDataSource
API is closely related, and allows a plugin to generate multiple datum when requested.
package net.solarnetwork.node.service;\n\nimport net.solarnetwork.node.domain.datum.NodeDatum;\nimport net.solarnetwork.service.Identifiable;\n\n/**\n * API for collecting {@link NodeDatum} objects from some device.\n */\npublic interface DatumDataSource extends Identifiable, DeviceInfoProvider {\n\n/**\n * Get the class supported by this DataSource.\n *\n * @return class\n */\nClass<? extends NodeDatum> getDatumType();\n\n/**\n * Read the current value from the data source, returning as an unpersisted\n * {@link NodeDatum} object.\n *\n * @return Datum\n */\nNodeDatum readCurrentDatum();\n\n}\n
package net.solarnetwork.node.service;\n\nimport java.util.Collection;\nimport net.solarnetwork.node.domain.datum.NodeDatum;\nimport net.solarnetwork.service.Identifiable;\n\n/**\n * API for collecting multiple {@link NodeDatum} objects from some device.\n */\npublic interface MultiDatumDataSource extends Identifiable, DeviceInfoProvider {\n\n/**\n * Get the class supported by this DataSource.\n *\n * @return class\n */\nClass<? extends NodeDatum> getMultiDatumType();\n\n/**\n * Read multiple values from the data source, returning as a collection of\n * unpersisted {@link NodeDatum} objects.\n *\n * @return Datum\n */\nCollection<NodeDatum> readMultipleDatum();\n\n}\n
The Datum Data Source Poll Job provides a way to let users schedule the polling for datum from a data source.
"},{"location":"developers/services/datum-db/","title":"Datum Database","text":"TODO
"},{"location":"developers/services/datum-queue/","title":"Datum Queue","text":"SolarNode has a DatumQueue
service that acts as a central facility for processing all NodeDatum
captured by all data source plugins deployed in the SolarNode runtime. The queue can be configured with various filters that can augment, modify, or discard the datum. The queue buffers the datum for a short amount of time and then processes them sequentially in order of time, oldest to newest.
Datum data sources that use the Datum Data Source Poll Job are polled for datum on a recurring schedule and those datum are then posted to and stored in SolarNetwork. Data sources can also offer datum directly to the DatumQueue
if they emit datum based on external events. When offering datum directly, the datum can be tagged as transient and they will then still be processed by the queue but will not be posted/stored in SolarNetwork.
/**\n * Offer a new datum to the queue, optionally persisting.\n *\n * @param datum\n * the datum to offer\n * @param persist\n * {@literal true} to persist, or {@literal false} to only pass to\n * consumers\n * @return {@literal true} if the datum was accepted\n */\nboolean offer(NodeDatum datum, boolean persist);\n
"},{"location":"developers/services/datum-queue/#queue-observer","title":"Queue observer","text":"Plugins can also register observers on the DatumQueue
that are notified of each datum that gets processed. The addConsumer()
and removeConsumer()
methods allow you to register/deregister observers:
/**\n * Register a consumer to receive processed datum.\n *\n * @param consumer\n * the consumer to register\n */\nvoid addConsumer(Consumer<NodeDatum> consumer);\n\n/**\n * De-register a previously registered consumer.\n *\n * @param consumer\n * the consumer to remove\n */\nvoid removeConsumer(Consumer<NodeDatum> consumer);\n
Each observer will receive all datum, including transient datum. An example plugin that makes use of this feature is the SolarFlux Upload Service, which posts a copy of each datum to a MQTT server.
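The observer registration pattern described above can be sketched with a plain `java.util.function.Consumer` list. The `MiniQueue` class below is a simplified stand-in for the real `DatumQueue` (with `String` standing in for `NodeDatum`, and persistence omitted), written only to illustrate the add/remove/offer flow:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Simplified stand-in for the DatumQueue observer pattern; not the real API.
// "String" stands in for NodeDatum here.
public class MiniQueue {

	private final List<Consumer<String>> consumers = new CopyOnWriteArrayList<>();

	public void addConsumer(Consumer<String> consumer) {
		consumers.add(consumer);
	}

	public void removeConsumer(Consumer<String> consumer) {
		consumers.remove(consumer);
	}

	/** Offer a datum; every registered consumer sees it, even transient ones. */
	public boolean offer(String datum, boolean persist) {
		for ( Consumer<String> c : consumers ) {
			c.accept(datum);
		}
		// persistence (when persist == true) is omitted in this sketch
		return true;
	}

	public static void main(String[] args) {
		MiniQueue q = new MiniQueue();
		StringBuilder seen = new StringBuilder();
		Consumer<String> observer = seen::append;
		q.addConsumer(observer);
		q.offer("datum-1", true);
		q.offer("datum-2", false); // transient, but observers still see it
		q.removeConsumer(observer);
		q.offer("datum-3", true);  // no longer observed
		System.out.println(seen);  // datum-1datum-2
	}
}
```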
Here is a screen shot of the datum queue settings available in the SolarNode UI:
"},{"location":"developers/services/job-scheduler/","title":"Job Scheduler","text":"SolarNode provides a ManagedJobScheduler service that can automatically execute jobs exported by plugins that have user-defined schedules.
The Job Scheduler uses the Task Scheduler
The Job Scheduler service uses the Task Scheduler internally, which means the number of jobs that can execute simultaneously will be limited by its thread pool configuration.
"},{"location":"developers/services/job-scheduler/#managed-jobs","title":"Managed Jobs","text":"Any plugin simply needs to register a ManagedJob service for the Job Scheduler to automatically schedule and execute the job. The schedule is provided by the getSchedule()
method, which can return a cron expression or a plain number representing a millisecond period.
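The two schedule forms can be told apart with a simple numeric check. This is a hypothetical sketch of that distinction, not the actual SolarNode parsing code:

```java
// Hypothetical sketch: a plain number is a millisecond period,
// anything else is treated as a cron expression.
public class ScheduleKind {

	public static String kindOf(String schedule) {
		try {
			long ms = Long.parseLong(schedule.trim());
			return "period of " + ms + " milliseconds";
		} catch (NumberFormatException e) {
			return "cron expression";
		}
	}

	public static void main(String[] args) {
		System.out.println(kindOf("60000"));       // period of 60000 milliseconds
		System.out.println(kindOf("0 * * * * *")); // cron expression
	}
}
```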
The net.solarnetwork.node.job.SimpleManagedJob
class implements ManagedJob
and can be used in most situations. It delegates the actual work to a net.solarnetwork.node.job.JobService
API, discussed in the next section.
The ManagedJob
API delegates the actual task work to a JobService
API. The executeJobService()
method will be invoked when the job executes.
Let's imagine you have a com.example.Job
class that you would like to allow users to schedule. Your class would implement the JobService
interface, and then you would provide a localized messages properties file and configure the service using OSGi Blueprint.
package com.example;\n\nimport java.util.Collections;\nimport java.util.List;\nimport net.solarnetwork.node.job.JobService;\nimport net.solarnetwork.node.service.support.BaseIdentifiable;\nimport net.solarnetwork.settings.SettingSpecifier;\n\n/**\n * My super-duper job.\n */\npublic class Job extends BaseIdentifiable implements JobService {\n@Override\npublic String getSettingUid() {\nreturn \"com.example.job\"; // (1)!\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Collections.emptyList(); // (2)!\n}\n\n@Override\npublic void executeJobService() throws Exception {\n// do great stuff here!\n}\n}\n
SimpleManagedJob
class we'll configure in Blueprint XML will automatically add a schedule
setting to configure the job schedule.title = Super-duper Job\ndesc = This job does it all.\n\nschedule.key = Schedule\nschedule.desc = The schedule to execute the job at. \\\nCan be either a number representing a frequency in <b>milliseconds</b> \\\nor a <a href=\"{0}\">cron expression</a>, for example <code>0 * * * * *</code>.\n
<service interface=\"net.solarnetwork.node.job.ManagedJob\"><!-- (1)! -->\n<service-properties>\n<entry key=\"service.pid\" value=\"com.example.job\"/>\n</service-properties>\n<bean class=\"net.solarnetwork.node.job.SimpleManagedJob\"><!-- (2)! -->\n<argument>\n<bean class=\"com.example.Job\">\n<property name=\"uid\" value=\"com.example.job\"/><!-- (3)! -->\n<property name=\"messageSource\">\n<bean class=\"org.springframework.context.support.ResourceBundleMessageSource\">\n<property name=\"basenames\" value=\"com.example.Job\"/>\n</bean>\n</property>\n</bean>\n</argument>\n<property name=\"schedule\" value=\"0 * * * * *\"/>\n</bean>\n</service>\n
ManagedJob
service with the SolarNode runtime.SimpleManagedJob
class is a handy ManagedJob
implementation. It adds a schedule
setting to any settings returned by the JobService
.uid
value should match the service.pid
used earlier, which matches the value returned by the getSettingUid()
method in the Job
class.When this plugin is deployed in SolarNode, the component will appear on the main Settings page and offer a configurable Schedule setting, like this:
"},{"location":"developers/services/placeholder-service/","title":"Placeholder Service","text":"The Placeholder Service API provides components a way to resolve variables in strings, known as placeholders, whose values are managed outside the component itself. For example a datum data source plugin could use the Placeholder Service to support resolving placeholders in a configurable Source ID property.
SolarNode provides a Placeholder Service implementation that resolves both dynamic placeholders from the Settings Database (using the setting namespace placeholder
), and static placeholders from a configurable file or directory location.
Call the resolvePlaceholders(s, parameters)
method to resolve all placeholders on the String s
. The parameters
argument can be used to provide additional placeholder values, or you can just pass null
to rely solely on the placeholders available in the service already.
Here is an imaginary class that is constructed with an optional PlaceholderService
, and then when the go()
method is called uses that to resolve placeholders in the string {building}/temp
and return the result:
package com.example;\n\nimport net.solarnetwork.node.service.PlaceholderService;\nimport net.solarnetwork.service.OptionalService;\n\npublic class MyComponent {\n\nprivate final OptionalService<PlaceholderService> placeholderService;\n\npublic MyComponent(OptionalService<PlaceholderService> placeholderService) {\nsuper();\nthis.placeholderService = placeholderService;\n}\n\npublic String go() {\nreturn PlaceholderService.resolvePlaceholders(placeholderService,\n\"{building}/temp\", null);\n}\n}\n
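As a rough illustration of what resolution does, simple `{name}` tokens are substituted from the available values. The `Placeholders` class below is a minimal hypothetical sketch of that substitution, not the actual SolarNode `PlaceholderService` implementation (which, as described above, also resolves values from the Settings Database and static property files):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of {name} placeholder substitution; not the real
// PlaceholderService, which resolves values from the Settings Database
// and static property files.
public class Placeholders {

	private static final Pattern TOKEN = Pattern.compile("\\{(\\w+)\\}");

	public static String resolve(String s, Map<String, ?> parameters) {
		Matcher m = TOKEN.matcher(s);
		StringBuffer out = new StringBuffer();
		while ( m.find() ) {
			Object val = parameters.get(m.group(1));
			// leave the token as-is if no value is available
			String replacement = (val != null ? val.toString() : m.group(0));
			m.appendReplacement(out, Matcher.quoteReplacement(replacement));
		}
		m.appendTail(out);
		return out.toString();
	}

	public static void main(String[] args) {
		System.out.println(resolve("{building}/temp", Map.of("building", "warehouse")));
		// warehouse/temp
	}
}
```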
"},{"location":"developers/services/placeholder-service/#blueprint","title":"Blueprint","text":"To use the Placeholder Service in your component, add either an Optional Service or explicit reference to your plugin's Blueprint XML file like this (depending on what your plugin requires):
Optional ServiceExplicit Reference<bean id=\"placeholderService\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.PlaceholderService\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n
<reference id=\"placeholderService\" interface=\"net.solarnetwork.node.service.PlaceholderService\"/>\n
Then inject that service into your component's <bean>
, for example:
<bean id=\"myComponent\" class=\"com.example.MyComponent\">\n<argument ref=\"placeholderService\"/>\n</bean>\n
"},{"location":"developers/services/placeholder-service/#configuration","title":"Configuration","text":"The Placeholder Service supports the following configuration properties in the net.solarnetwork.node.core
namespace:
placeholders.dir
${CONF_DIR}/placeholders.d Path to a single properties file or to a directory of properties files to load as static placeholder parameter values when SolarNode starts up."},{"location":"developers/services/settings-db/","title":"Settings Database","text":""},{"location":"developers/services/settings-service/","title":"Settings Service","text":"TODO
"},{"location":"developers/services/sql-database/","title":"SQL Database","text":"The SolarNode runtime provides a local SQL database that is used to hold application settings, data sampled from devices, or anything really. Some data is designed to live only in this local store (such as settings) while other data eventually gets pushed up into the SolarNet cloud. This document describes the most common aspects of the local database.
"},{"location":"developers/services/sql-database/#database-implementation","title":"Database implementation","text":"The database is provided by either the H2 or Apache Derby embedded SQL database engine.
Note
In SolarNodeOS the solarnode-app-db-h2 and solarnode-app-db-derby packages provide the H2 and Derby database implementations. Most modern SolarNode deployments use H2.
Typically the database is configured to run entirely within RAM on devices that support it, and the RAM copy is periodically synced to non-volatile media so if the device restarts the persisted copy of the database can be loaded back into RAM. This pattern works well because:
A standard JDBC stack is available and normal SQL queries are used to access the database. The Hikari JDBC connection pool provides a javax.sql.DataSource
for direct JDBC access. The pool is configured by factory configuration files in the net.solarnetwork.jdbc.pool.hikari
namespace. See the net.solarnetwork.jdbc.pool.hikari-solarnode.cfg as an example.
To make use of the DataSource
from a plugin using OSGi Blueprint you can declare a reference like this:
<reference id=\"dataSource\" interface=\"javax.sql.DataSource\" filter=\"(db=node)\" />\n
The net.solarnetwork.node.dao.jdbc bundle publishes some other JDBC services for plugins to use, such as:
org.springframework.jdbc.core.JdbcOperations
for slightly higher-level JDBC accessorg.springframework.transaction.PlatformTransactionManager
for JDBC transaction supportTo make use of these from a plugin using OSGi Blueprint you can declare references to these APIs like this:
<reference id=\"jdbcOps\" interface=\"org.springframework.jdbc.core.JdbcOperations\"\nfilter=\"(db=node)\" />\n\n<reference id=\"txManager\" interface=\"org.springframework.transaction.PlatformTransactionManager\"\nfilter=\"(db=node)\" />\n
"},{"location":"developers/services/sql-database/#high-level-access-data-access-object-dao","title":"High level access: Data Access Object (DAO)","text":"The SolarNode runtime also provides some Data Access Object (DAO) services that make storing some typical data easier:
net.solarnetwork.node.dao.SettingDao
for access to the Settings Databasenet.solarnetwork.node.dao.DatumDao
for access to the Datum DatabaseTo make use of these from a plugin using OSGi Blueprint you can declare references to these APIs like this:
<reference id=\"settingDao\" interface=\"net.solarnetwork.node.dao.SettingDao\"/>\n\n<reference id=\"datumDao\" interface=\"net.solarnetwork.node.dao.DatumDao\"/>\n
"},{"location":"developers/services/task-executor/","title":"Task Executor","text":"To support asynchronous task execution, SolarNode makes several thread-pool based services available to plugins:
java.util.concurrent.Executor
service for standard Runnable
task execution
TaskExecutor
service for Runnable
task execution
AsyncTaskExecutor
service for both Runnable
and Callable
task execution
AsyncListenableTaskExecutor
service for both Runnable
and Callable
task execution that supports the org.springframework.util.concurrent.ListenableFuture
API
Need to schedule tasks?
See the Task Scheduler page for information on scheduling simple tasks, or the Job Scheduler page for information on scheduling managed jobs.
To make use of any of these services from a plugin using OSGi Blueprint you can declare a reference to them like this:
<reference id=\"taskExecutor\" interface=\"java.util.concurrent.Executor\"\nfilter=\"(function=node)\"/>\n\n<reference id=\"taskExecutor\" interface=\"org.springframework.core.task.TaskExecutor\"\nfilter=\"(function=node)\"/>\n\n<reference id=\"taskExecutor\" interface=\"org.springframework.core.task.AsyncTaskExecutor\"\nfilter=\"(function=node)\"/>\n\n<reference id=\"taskExecutor\" interface=\"org.springframework.core.task.AsyncListenableTaskExecutor\"\nfilter=\"(function=node)\"/>\n
"},{"location":"developers/services/task-executor/#thread-pool-configuration","title":"Thread pool configuration","text":"This thread pool is configured as a fixed-size pool with the number of threads set to the number of CPU cores detected at runtime, plus one. For example on a Raspberry Pi 4 there are 4 CPU cores so the thread pool would be configured with 5 threads.
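The documented sizing rule (CPU core count plus one) can be sketched in plain Java; this is an illustration of the rule only, not the actual SolarNode runtime code, and the class and method names here are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizeDemo {

    // The documented sizing rule: number of CPU cores detected at runtime, plus one
    static int threadPoolSize() {
        return Runtime.getRuntime().availableProcessors() + 1;
    }

    public static void main(String[] args) {
        int size = threadPoolSize();
        // A fixed-size pool with that many threads, as the documentation describes
        ExecutorService pool = Executors.newFixedThreadPool(size);
        System.out.println("Pool threads: " + size);
        pool.shutdown();
    }
}
```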
"},{"location":"developers/services/task-scheduler/","title":"Task Scheduler","text":"To support asynchronous task scheduling, SolarNode provides a Spring TaskScheduler service to plugins.
The Job Scheduler
For user-configurable scheduled tasks, check out the Job Scheduler service.
To make use of any of this service from a plugin using OSGi Blueprint you can declare a reference like this:
<reference id=\"taskScheduler\" interface=\"org.springframework.scheduling.TaskScheduler\"\nfilter=\"(function=node)\"/>\n
"},{"location":"developers/services/task-scheduler/#configuration","title":"Configuration","text":"The Task Scheduler supports the following configuration properties in the net.solarnetwork.node.core
namespace:
jobScheduler.poolSize
10 The number of threads to maintain in the job scheduler, and thus the maximum number of jobs that can run simultaneously. Must be set to 1 or higher. scheduler.startupDelay
180 A delay in seconds after creating the job scheduler to start triggering jobs. This can be useful to give the application time to completely initialize before starting to run jobs. For example, to change the thread pool size to 20 and shorten the startup delay to 30 seconds, create an /etc/solarnode/services/net.solarnetwork.node.core.cfg
file with the following content:
jobScheduler.poolSize = 20\nscheduler.startupDelay = 30\n
"},{"location":"developers/settings/","title":"Settings","text":"SolarNode provides a way for plugin components to describe their user-configurable properties, called settings, to the platform. SolarNode provides a web-based UI that makes it easy for users to configure those components using a web browser. For example, here is a screen shot of the SolarNode UI showing a form for the settings of a Database Backup component:
The mechanism for components to describe themselves in this way is called the Settings API. Classes that wish to participate in this system publish metadata about their configurable properties through the Settings Provider API, and then SolarNode generates a UI form based on that metadata. Each form field in the previous example image is a Setting Specifier.
The process is similar to the built-in Settings app on iOS: iOS applications can publish configurable property definitions and the Settings app displays a UI that allows users to modify those properties.
"},{"location":"developers/settings/factory/","title":"Factory Service","text":""},{"location":"developers/settings/provider/","title":"Settings Provider","text":"The net.solarnetwork.settings.SettingSpecifierProvider
interface defines the way a class can declare themselves as a user-configurable component. The main elements of this API are:
public interface SettingSpecifierProvider {\n\n/**\n * Get a unique, application-wide setting ID.\n *\n * @return unique ID\n */\nString getSettingUid();\n\n/**\n * Get a non-localized display name.\n *\n * @return non-localized display name\n */\nString getDisplayName();\n\n/**\n * Get a list of {@link SettingSpecifier} instances.\n *\n * @return list of {@link SettingSpecifier}\n */\nList<SettingSpecifier> getSettingSpecifiers();\n\n}\n
The getSettingUid()
method defines a unique ID for the configurable component. By convention the class or package name of the component (or a derivative of it) is used as the ID.
The getSettingSpecifiers()
method returns a list of all the configurable properties of the component, as a list of Setting Specifier instances.
private String username;\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>(1);\n\n// expose a \"username\" setting with a default value of \"admin\"\nresults.add(new BasicTextFieldSettingSpecifier(\"username\", \"admin\"));\n\nreturn results;\n}\n\n// settings are updated at runtime via standard setter methods\npublic void setUsername(String username) {\nthis.username = username;\n}\n
Setting values are treated as strings within the Settings API, but the methods associated with settings can accept any primitive or standard number type like int
or Integer
as well.
private BigDecimal num;\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>(1);\n\nresults.add(new BasicTextFieldSettingSpecifier(\"num\", null));\n\nreturn results;\n}\n\n// settings will be coerced from strings into basic types automatically\npublic void setNum(BigDecimal num) {\nthis.num = num;\n}\n
"},{"location":"developers/settings/provider/#proxy-setting-accessors","title":"Proxy setting accessors","text":"Sometimes you might like to expose a simple string setting but internally treat the string as a more complex type. For example a Map
could be configured using a simple delimited string like key1 = val1, key2 = val2
. For situations like this you can publish a proxy setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
private Map<String, String> map;\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>(1);\n\n// expose a \"mapping\" proxy setting for the map field\nresults.add(new BasicTextFieldSettingSpecifier(\"mapping\", null));\n\nreturn results;\n}\n\npublic void setMapping(String mapping) {\nthis.map = StringUtils.commaDelimitedStringToMap(mapping);\n}\n
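The parsing that a proxy setting accessor performs can be sketched in plain Java. This is a hypothetical stand-in for the `StringUtils.commaDelimitedStringToMap` helper named above, only to illustrate the en/decoding idea; the real helper's behavior may differ in details.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MappingSettingDemo {

    // Hypothetical stand-in for StringUtils.commaDelimitedStringToMap:
    // parses a delimited string like "key1 = val1, key2 = val2" into an
    // ordered Map, trimming whitespace around keys and values
    static Map<String, String> parseMapping(String mapping) {
        Map<String, String> result = new LinkedHashMap<>();
        if (mapping == null || mapping.isBlank()) {
            return result;
        }
        for (String pair : mapping.split(",")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                result.put(kv[0].trim(), kv[1].trim());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> m = parseMapping("key1 = val1, key2 = val2");
        System.out.println(m); // {key1=val1, key2=val2}
    }
}
```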
"},{"location":"developers/settings/resource-handler/","title":"Setting Resource Handler","text":"The net.solarnetwork.node.settings.SettingResourceHandler
API defines a way for a component to import and export files uploaded to SolarNode from external sources.
A component could support importing a file using the File setting, for example to configure the component from a configuration file in a format like CSV, JSON, or XML. Similarly, a component could support exporting a file, generating a configuration file in one of those formats from its current settings. For example, the Modbus Device Datum Source does exactly these things: importing and exporting a custom CSV file to make configuring the component easier.
"},{"location":"developers/settings/resource-handler/#importing","title":"Importing","text":"The main part of the SettingResourceHandler
API for importing files looks like this:
public interface SettingResourceHandler {\n\n/**\n * Get a unique, application-wide setting ID.\n *\n * <p>\n * This ID must be unique across all setting resource handlers registered\n * within the system. Generally the implementation will also be a\n * {@link net.solarnetwork.settings.SettingSpecifierProvider} for the same\n * ID.\n * </p>\n *\n * @return unique ID\n */\nString getSettingUid();\n\n/**\n * Apply settings for a specific key from a resource.\n *\n * @param settingKey\n * the setting key, generally a\n * {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}\n * value\n * @param resources\n * the resources with the settings to apply\n * @return any setting values that should be persisted as a result of\n * applying the given resources (never {@literal null})\n * @throws IOException\n * if any IO error occurs\n */\nSettingsUpdates applySettingResources(String settingKey, Iterable<Resource> resources)\nthrows IOException;\n\n}\n
The getSettingUid()
method overlaps with the Settings Provider API, and as the comments note it is typical for a Settings Provider that publishes settings like File or Text Area to also implement SettingResourceHandler
.
The settingKey
passed to the applySettingResources()
method identifies the resource(s) being uploaded, as a single Setting Resource Handler might support multiple resources. For example a Settings Provider might publish multiple File settings, or File and Text Area settings. The settingKey
is used to differentiate between each one.
Imagine a component that publishes a File setting. A typical implementation of that component would look like this (this example omits some methods for brevity):
public class MyComponent implements SettingSpecifierProvider,\nSettingResourceHandler {\n\nprivate static final Logger log\n= LoggerFactory.getLogger(MyComponent.class);\n\n/** The resource key to identify the File setting resource. */\npublic static final String RESOURCE_KEY_DOCUMENT = \"document\";\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.mycomponent\";\n}\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>();\n\n// publish a File setting tied to the RESOURCE_KEY_DOCUMENT key,\n// allowing only text files to be accepted\nresults.add(new BasicFileSettingSpecifier(RESOURCE_KEY_DOCUMENT, null,\nnew LinkedHashSet<>(asList(\".txt\", \"text/*\")), false));\n\nreturn results;\n}\n\n@Override\npublic SettingsUpdates applySettingResources(String settingKey,\nIterable<Resource> resources) throws IOException {\nif ( resources == null ) {\nreturn null;\n}\nif ( RESOURCE_KEY_DOCUMENT.equals(settingKey) ) {\nfor ( Resource r : resources ) {\n// here we would do something useful with the resource... like\n// read into a string and log it\nString s = FileCopyUtils.copyToString(new InputStreamReader(\nr.getInputStream(), StandardCharsets.UTF_8));\n\nlog.info(\"Got {} resource content: {}\", settingKey, s);\n\nbreak; // only accept one file\n}\n}\nreturn null;\n}\n\n}\n
"},{"location":"developers/settings/resource-handler/#exporting","title":"Exporting","text":"The part of the Setting Resource Handler API that supports exporting setting resources looks like this:
/**\n * Get a list of supported setting keys for the\n * {@link #currentSettingResources(String)} method.\n *\n * @return the set of supported keys\n */\ndefault Collection<String> supportedCurrentResourceSettingKeys() {\nreturn Collections.emptyList();\n}\n\n/**\n * Get the current setting resources for a specific key.\n *\n * @param settingKey\n * the setting key, generally a\n * {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}\n * value\n * @return the resources, never {@literal null}\n */\nIterable<Resource> currentSettingResources(String settingKey);\n
The supportedCurrentResourceSettingKeys()
method returns a set of resource keys the component supports for exporting. The currentSettingResources()
method returns the resources to export for a given key.
The SolarNode UI shows a form menu with all the available resources for all components that support the SettingResourceHandler
API, and lets the user download them:
Here is an example of a component that supports exporting a CSV file resource based on the component's current configuration:
public class MyComponent implements SettingSpecifierProvider,\nSettingResourceHandler {\n\n/** The setting resource key for a CSV configuration file. */\npublic static final String RESOURCE_KEY_CSV_CONFIG = \"csvConfig\";\n\nprivate int max = 1;\nprivate boolean enabled = true;\n\n@Override\npublic Collection<String> supportedCurrentResourceSettingKeys() {\nreturn Collections.singletonList(RESOURCE_KEY_CSV_CONFIG);\n}\n\n@Override\npublic Iterable<Resource> currentSettingResources(String settingKey) {\nif ( !RESOURCE_KEY_CSV_CONFIG.equals(settingKey) ) {\nreturn null;\n}\n\nStringBuilder buf = new StringBuilder();\nbuf.append(\"max,enabled\\r\\n\");\nbuf.append(max).append(',').append(enabled).append(\"\\r\\n\");\n\nreturn Collections.singleton(new ByteArrayResource(\nbuf.toString().getBytes(UTF_8), \"My Component CSV Config\") {\n\n@Override\npublic String getFilename() {\nreturn \"my-component-config.csv\";\n}\n\n});\n}\n}\n
"},{"location":"developers/settings/singleton/","title":"Singleton Service","text":""},{"location":"developers/settings/specifier/","title":"Setting Specifier","text":"The net.solarnetwork.settings.SettingSpecifier
API defines metadata for a single configurable property in the Settings API. The API looks like this:
public interface SettingSpecifier {\n\n/**\n * A unique identifier for the type of setting specifier this represents.\n *\n * <p>\n * Generally this will be a fully-qualified interface name.\n * </p>\n *\n * @return the type\n */\nString getType();\n\n/**\n * Localizable text to display with the setting's content.\n *\n * @return the title\n */\nString getTitle();\n\n}\n
This interface is very simple, and is extended by more specialized interfaces that form more useful setting types.
Note
A SettingSpecifier
instance is often referred to simply as a setting.
Here is a view of the class hierarchy that builds off of this interface:
Note
The SettingSpecifier
API defines metadata about a configurable property, but not methods to view or change that property's value. The Settings Service provides methods for managing setting values.
The Settings Playpen plugin demonstrates most of the available setting types, and is a great way to see how the settings can be used.
"},{"location":"developers/settings/specifier/#text-field","title":"Text Field","text":"The TextFieldSettingSpecifier
defines a simple string-based configurable property and is the most common setting type. The setting defines a key
that maps to a setter method on its associated component class. In the SolarNode UI a text field is rendered as an HTML form text input, like this:
The net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier
class provides the standard implementation of this API. A standard text field setting is created like this:
new BasicTextFieldSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\");\n\n// or without any default value\nnew BasicTextFieldSettingSpecifier(\"myProperty\", null);\n
Tip
Setting values are generally treated as strings within the Settings API, however other basic data types such as integers and numbers can be used as well. You can also publish a \"proxy\" setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
For example a Map<String, String>
setting could be published as a text field setting that en/decodes the Map
into a delimited string value, for example name=Test, color=red
.
The BasicTextFieldSettingSpecifier
can also be used for \"secure\" text fields where the field's content is obscured from view. In the SolarNode UI a secure text field is rendered as an HTML password form input like this:
A standard secure text field setting is created by passing a third true
argument, like this:
new BasicTextFieldSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\", true);\n\n// or without any default value\nnew BasicTextFieldSettingSpecifier(\"myProperty\", null, true);\n
"},{"location":"developers/settings/specifier/#title","title":"Title","text":"The TitleSettingSpecifier
defines a simple read-only string-based configurable property. The setting defines a key
that maps to a setter method on its associated component class. In the SolarNode UI the default value is rendered as plain text, like this:
The net.solarnetwork.settings.support.BasicTitleSettingSpecifier
class provides the standard implementation of this API. A standard title setting is created like this:
new BasicTitleSettingSpecifier(\"status\", \"Status is good.\", true);\n
"},{"location":"developers/settings/specifier/#html-title","title":"HTML Title","text":"The TitleSettingSpecifier
supports HTML markup. In the SolarNode UI the default value is rendered directly into HTML, like this:
// pass `true` as the 4th argument to enable HTML markup in the status value\nnew BasicTitleSettingSpecifier(\"status\", \"Status is <b>good</b>.\", true, true);\n
"},{"location":"developers/settings/specifier/#text-area","title":"Text Area","text":"The TextAreaSettingSpecifier
defines a simple string-based configurable property for a larger text value, loaded as an external file using the SettingResourceHandler API. In the SolarNode UI a text area is rendered as an HTML form text area with an associated button to upload the content, like this:
The net.solarnetwork.settings.support.BasicTextAreaSettingSpecifier
class provides the standard implementation of this API. A standard text field setting is created like this:
new BasicTextAreaSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\");\n\n// or without any default value\nnew BasicTextAreaSettingSpecifier(\"myProperty\", null);\n
"},{"location":"developers/settings/specifier/#direct-text-area","title":"Direct Text Area","text":"The BasicTextAreaSettingSpecifier
can also be used for \"direct\" text areas where the field's content is not uploaded as an external file. In the SolarNode UI a direct text area is rendered as an HTML form text area, like this:
A standard direct text area setting is created by passing a third true
argument, like this:
new BasicTextAreaSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\", true);\n\n// or without any default value\nnew BasicTextAreaSettingSpecifier(\"myProperty\", null, true);\n
"},{"location":"developers/settings/specifier/#toggle","title":"Toggle","text":"The ToggleSettingSpecifier
defines a boolean configurable property. In the SolarNode UI a toggle setting is rendered as an HTML form button, like this:
The net.solarnetwork.settings.support.BasicToggleSettingSpecifier
class provides the standard implementation of this API. A standard toggle setting is created like this:
new BasicToggleSettingSpecifier(\"enabled\", false); // default \"off\"\n\nnew BasicToggleSettingSpecifier(\"enabled\", true); // default \"on\"\n
"},{"location":"developers/settings/specifier/#slider","title":"Slider","text":"The SliderSettingSpecifier
defines a number-based configuration property with minimum and maximum values enforced, and a step limit. In the SolarNode UI a slider is rendered as an HTML widget, like this:
The net.solarnetwork.settings.support.BasicSliderSettingSpecifier
class provides the standard implementation of this API. A standard Slider setting is created like this:
// no default value, range between 0-11 in 0.5 increments\nnew BasicSliderSettingSpecifier(\"volume\", null, 0.0, 11.0, 0.5);\n\n// default value 5.0, range between 0-11 in 0.5 increments\nnew BasicSliderSettingSpecifier(\"volume\", 5.0, 0.0, 11.0, 0.5);\n
"},{"location":"developers/settings/specifier/#radio-group","title":"Radio Group","text":"The RadioGroupSettingSpecifier
defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode UI a radio group is rendered as a set of HTML radio input form fields, like this:
The net.solarnetwork.settings.support.BasicRadioGroupSettingSpecifier
class provides the standard implementation of this API. A standard RadioGroup setting is created like this:
String[] vals = new String[] {\"a\", \"b\", \"c\"};\nString[] labels = new String[] {\"One\", \"Two\", \"Three\"};\nMap<String, String> radioValues = new LinkedHashMap<>(3);\nfor ( int i = 0; i < vals.length; i++ ) {\nradioValues.put(vals[i], labels[i]);\n}\nBasicRadioGroupSettingSpecifier radio =\nnew BasicRadioGroupSettingSpecifier(\"option\", vals[0]);\nradio.setValueTitles(radioValues);\n
"},{"location":"developers/settings/specifier/#multi-value","title":"Multi-value","text":"The MultiValueSettingSpecifier
defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode UI a multi-value setting is rendered as an HTML select form field, like this:
The net.solarnetwork.settings.support.BasicMultiValueSettingSpecifier
class provides the standard implementation of this API. A standard MultiValue setting is created like this:
String[] vals = new String[] {\"a\", \"b\", \"c\"};\nString[] labels = new String[] {\"Option 1\", \"Option 2\", \"Option 3\"};\nMap<String, String> menuValues = new LinkedHashMap<>(3);\nfor ( int i = 0; i < vals.length; i++ ) {\nmenuValues.put(vals[i], labels[i]);\n}\nBasicMultiValueSettingSpecifier menu = new BasicMultiValueSettingSpecifier(\"option\",\nvals[0]);\nmenu.setValueTitles(menuValues);\n
"},{"location":"developers/settings/specifier/#file","title":"File","text":"The FileSettingSpecifier
defines a file-based resource property, loaded as an external file using the SettingResourceHandler API. In the SolarNode UI a file setting is rendered as an HTML file input, like this:
The net.solarnetwork.node.settings.support.BasicFileSettingSpecifier
class provides the standard implementation of this API. A standard file setting is created like this:
// a single file only, no default content\nnew BasicFileSettingSpecifier(\"document\", null,\nnew LinkedHashSet<>(Arrays.asList(\".txt\", \"text/*\")), false);\n\n// multiple files allowed, no default content\nnew BasicFileSettingSpecifier(\"document-list\", null,\nnew LinkedHashSet<>(Arrays.asList(\".txt\", \"text/*\")), true);\n
"},{"location":"developers/settings/specifier/#dynamic-list","title":"Dynamic List","text":"A Dynamic List setting allows the user to manage a list of homogeneous items, adding or subtracting items as desired. The items can be literals like strings, or arbitrary objects that define their own settings. In the SolarNode UI a dynamic list setting is rendered as a pair of HTML buttons to remove and add items, like this:
A Dynamic List is often backed by a Java Collection
or array in the associated component. In addition a special size-adjusting accessor method is required, named after the setter method with Count
appended. SolarNode will use this accessor to request a specific size for the dynamic list.
private String[] names = new String[0];\n\npublic String[] getNames() {\nreturn names;\n}\n\npublic void setNames(String[] names) {\nthis.names = names;\n}\n\npublic int getNamesCount() {\nString[] l = getNames();\nreturn (l == null ? 0 : l.length);\n}\n\npublic void setNamesCount(int count) {\nsetNames(ArrayUtils.arrayOfLength(\ngetNames(), count, String.class, String::new));\n}\n
private List<String> names = new ArrayList<>();\n\npublic List<String> getNames() {\nreturn names;\n}\n\npublic void setNames(List<String> names) {\nthis.names = names;\n}\n\npublic int getNamesCount() {\nList<String> l = getNames();\nreturn (l == null ? 0 : l.size());\n}\n\npublic void setNamesCount(int count) {\nif ( count < 0 ) {\ncount = 0;\n}\nList<String> l = getNames();\nint lCount = (l == null ? 0 : l.size());\nwhile ( lCount > count ) {\nl.remove(l.size() - 1);\nlCount--;\n}\nif ( l == null && count > 0 ) {\nl = new ArrayList<>();\nsetNames(l);\n}\nwhile ( lCount < count ) {\nl.add(\"\");\nlCount++;\n}\n}\n
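The array-resizing helper used in the array-backed example above can be sketched in plain Java. This is a hypothetical stand-in for the `ArrayUtils.arrayOfLength` helper, only to illustrate what the size-adjusting accessor relies on: copying to the requested length and filling any new slots from a factory.

```java
import java.util.Arrays;
import java.util.function.Supplier;

public class ArrayResizeDemo {

    // Hypothetical stand-in for ArrayUtils.arrayOfLength: returns a copy of
    // the source array at the requested length, filling any newly added
    // slots with values produced by the factory
    static String[] arrayOfLength(String[] src, int count, Supplier<String> factory) {
        if (count < 0) {
            count = 0;
        }
        int srcLen = (src == null ? 0 : src.length);
        String[] result = Arrays.copyOf(src == null ? new String[0] : src, count);
        for (int i = srcLen; i < count; i++) {
            result[i] = factory.get();
        }
        return result;
    }

    public static void main(String[] args) {
        String[] names = { "a" };
        names = arrayOfLength(names, 3, () -> "");
        System.out.println(Arrays.toString(names)); // [a, , ]
    }
}
```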
The SettingUtils.dynamicListSettingSpecifier()
method simplifies the creation of a GroupSettingSpecifier
that represents a dynamic list (the examples in the following sections demonstrate this).
A simple Dynamic List is a dynamic list of string or number values.
private String[] names = new String[0];\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>();\n\n// turn a list of strings into a Group of TextField settings\nGroupSettingSpecifier namesList = SettingUtils.dynamicListSettingSpecifier(\n\"names\", asList(names), (String value, int index, String key) ->\nsingletonList(new BasicTextFieldSettingSpecifier(key, null)));\nresults.add(namesList);\n\nreturn results;\n}\n
"},{"location":"developers/settings/specifier/#complex-dynamic-list","title":"Complex Dynamic List","text":"A complex Dynamic List is a dynamic list of arbitrary object values. The main difference in terms of the necessary settings structure required, compared to a Simple Dynamic List, is that a group-of-groups is used.
Complex data class
public class Person {\nprivate String firstName;\nprivate String lastName;\n\n// generate list of settings for a Person, nested under some prefix\npublic List<SettingSpecifier> settings(String prefix) {\nList<SettingSpecifier> results = new ArrayList<>(2);\nresults.add(new BasicTextFieldSettingSpecifier(prefix + \"firstName\", null));\nresults.add(new BasicTextFieldSettingSpecifier(prefix + \"lastName\", null));\nreturn results;\n}\n\npublic void setFirstName(String firstName) {\nthis.firstName = firstName;\n}\n\npublic void setLastName(String lastName) {\nthis.lastName = lastName;\n}\n}\n
Dynamic List setting
private Person[] people = new Person[0];\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>();\n\n// turn a list of People into a Group of Group settings\nGroupSettingSpecifier peopleList = SettingUtils.dynamicListSettingSpecifier(\n\"people\", asList(people), (Person value, int index, String key) ->\nsingletonList(new BasicGroupSettingSpecifier(\nvalue.settings(key + \".\"))));\nresults.add(peopleList);\n\nreturn results;\n}\n
"},{"location":"users/","title":"User Guide","text":"This section of the handbook is geared towards users who will be deploying and managing one or more SolarNode devices.
See the Getting Started page to learn how to:
See the Setup App section to learn how to configure SolarNode.
"},{"location":"users/configuration/","title":"Configuration","text":"Some SolarNode components can be configured from properties files. This type of configuration is meant to be changed just once, when a SolarNode is first deployed, to alter some default configuration value.
Not to be confused with Settings
This type of configuration differs from the Settings that the Settings page in the Setup App provides a UI for managing. This configuration might be created by system administrators when creating a custom SolarNodeOS image for their needs, while Settings are meant to be managed by end users.
Configuration properties files are read from the /etc/solarnode/services
directory and named like NAMESPACE.cfg
, where NAMESPACE
represents a configuration namespace.
Configuration location
The /etc/solarnode/services
location is the default location in SolarNodeOS. It might be another location in other SolarNode deployments.
Imagine a component uses the configuration namespace com.example.service
and supports a configurable property named max-threads
that accepts an integer value you would like to configure as 4
. You would create a com.example.service.cfg
file like:
max-threads = 4\n
"},{"location":"users/datum/","title":"Datum","text":"In SolarNetwork a datum is the fundamental time-stamped data structure collected by SolarNodes and stored in SolarNet. It is a collection of properties associated with a specific information source at a specific time.
Example plain language description of a datum
the temperature and humidity collected from my weather station at 1 Jan 2023 11:00 UTC
In this example datum description, we have all the comopnents of a datum:
Datum component Description node the (implied) node that collected the data properties temperature and humidity source my weather station time 1 Jan 2023 11:00 UTCA datum stream is the collection of datum from a single node for a single source over time.
A datum object is modeled as a flexible structure with the following core elements:
Element Type DescriptionnodeId
number A unique ID assigned to nodes by SolarNetwork. sourceId
string A node-unique identifier that defines a single stream of data from a specific source, up to 64 characters long. Certain characters are not allowed, see below. created
date A time stamp of when the datum was collected, or the date the datum is associated with. samples
datum samples The collected properties. A datum is uniquely identified by the three combined properties (nodeId
, sourceId
, created
).
Source IDs are user-defined strings used to distinguish between different information sources within a single node. For example, a node might collect data from an energy meter on source ID Meter
and a solar inverter on Solar
. SolarNetwork does not place any restrictions on source ID values, other than a 64-character limit. However, there is are some conventions used within SolarNetwork that are useful to follow, especially for larger deployment of nodes with many source IDs:
Meter1
is better than Schneider ION6200 Meter - Main Building
./S1/B1/M1
could imply the first meter in the first building on the first site.+
and #
characters should not be used. This is actually a constraint in the MQTT protocol used in parts of SolarNetwork, where the MQTT topic name includes the source ID. These characters are MQTT topic filter wildcards, and cannot be used in topic names.The path-like structure becomes useful in places where wildcard patterns are used, like security policies or datum queries. It is generally worthwhile spending some time planning on a source ID taxonomy to use when starting a new project with SolarNetwork.
"},{"location":"users/datum/#datum-samples","title":"Datum samples","text":"The properties included in a datum object are known as datum samples. The samples are modeled as a collection of named properties, for example the temperature and humidity properties in the earlier example datum could be represented like this:
Example representation of datum samples from a weather station source{\n\"temperature\" : 21.5,\n\"humidity\" : 68\n}\n
Another datum samples acquired from a power meter might look like this:
Example representation of datum samples from a power meter source{\n\"watts\" : 2150,\n\"wattHours\" : 6834834349,\n\"mode\" : \"auto\"\n}\n
"},{"location":"users/datum/#datum-property-classifications","title":"Datum property classifications","text":"The datum samples are actually further organized into three classifications:
Classification Key Description instantaneousi
a single reading, observation, or measurement that does not accumulate over time accumulating a
a reading that accumulates over time, like a meter or odometer status s
non-numeric data, like staus codes or error messages These classifications help SolarNetwork understand how to aggregate the datum samples over time. When SolarNode uploads a datum to SolarNetwork, the sample will include the classification of each property. The previous example would thus more accurately be represented like this:
Example representation of datum samples with classifications{\n\"i\": {\n\"watts\" : 2150 // (1)!\n},\n\"a\": {\n\"wattHours\" : 6834834349 // (2)!\n},\n\"s\": {\n\"mode\" : \"auto\" // (3)!\n}\n}\n
watts
is an instantaneous measurement of power that does not accumulatewattHours
is an accumulating measurement of the accrual of energy over timemode
is a status message that is not a numberNote
Sometimes these classifications will be hidden from you. For example SolarNetwork hides them when returning datum data from some SolarNetwork API methods. You might come across them in some SolarNode plugins that allow configuring dynamic sample properties to collect, when SolarNode does not implicitly know which classification to use. Some SolarNetwork APIs do return or require fully classified sample objects; the documentation for those services will make that clear.
"},{"location":"users/expressions/","title":"Expressions","text":"Many SolarNode components support a general \"expressions\" framework that can be used to calculate values using a scripting language. SolarNode comes with the Spel scripting language by default, so this guide describes that language.
A common use case for expressions is to derive datum property values out of the raw property values captured from a device. In the SolarNode Setup App a typical datum data source component might present a configurable list of expression settings like this:
In this example, each time the data source captures a datum from the device it is communicating with it will add a new watts
property by multiplying the captured amps
and volts
property values. In essence the expression is like this code:
watts = amps \u00d7 volts\n
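The derived-property idea can be illustrated with a minimal Python sketch (the dictionary-based datum and the `add_derived_property` helper are illustrative only, not part of the SolarNode API):

```python
def add_derived_property(datum):
    """Add a derived 'watts' property computed from 'amps' and 'volts',
    mirroring the expression watts = amps * volts."""
    if "amps" in datum and "volts" in datum:
        datum["watts"] = datum["amps"] * datum["volts"]
    return datum

sample = {"amps": 4.2, "volts": 240.0}
print(add_derived_property(sample)["watts"])  # 1008.0
```
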
"},{"location":"users/expressions/#datum-expressions","title":"Datum Expressions","text":"Many SolarNode expressions are evaluated in the context of a datum, typically one captured from a device SolarNode is collecting data from. In this context, the expression supports accessing datum properties directly as expression variables, and some helpful functions are provided.
"},{"location":"users/expressions/#datum-property-variables","title":"Datum property variables","text":"All datum properties with simple names can be referred to directly as variables. Here simple just means a name that is also a legal variable name. The property classifications do not matter in this context: the expression will look for properties in all classifications.
For example, given a datum like this:
Example datum representation in JSON{\n\"i\": {\n\"watts\" : 123\n},\n\"a\": {\n\"wattHours\" : 987654321\n},\n\"s\": {\n\"mode\" : \"auto\"\n}\n}\n
The expression can use the variables watts
, wattHours
, and mode
.
A datum expression will also provide the following variables:
Property Type Descriptiondatum
Datum
A Datum
object, in case you need direct access to the functions provided there. meta
DatumMetadataOperations
Get datum metadata for the current source ID. parameters
Map<String,Object>
Simple map-based access to all parameters passed to the expression. The available parameters depend on the context of the expression evaluation, but often include things like placeholder values or parameters generated by previously evaluated expressions. These values are also available directly as variables; this is rarely needed but can be helpful for accessing dynamically-calculated property names or properties with names that are not legal variable names. props
Map<String,Object>
Simple map based access to all properties in datum
. As datum properties are also available directly as variables, this is rarely needed but can be helpful for accessing dynamically-calculated property names or properties with names that are not legal variable names. sourceId
String
The source ID of the current datum."},{"location":"users/expressions/#functions","title":"Functions","text":"Some functions are provided to help with datum-related expressions.
"},{"location":"users/expressions/#bit-functions","title":"Bit functions","text":"The following functions help with bitwise integer manipulation operations:
Function Arguments Result Descriptionand(n1,n2)
Number
, Number
Number
Bitwise and, i.e. (n1 & n2)
andNot(n1,n2)
Number
, Number
Number
Bitwise and-not, i.e. (n1 & ~n2)
narrow(n,s)
Number
, Number
Number
Return n
as a reduced-size but equivalent number of a minimum power-of-two byte size s
narrow8(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 8-bits narrow16(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 16-bits narrow32(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 32-bits narrow64(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 64-bits not(n)
Number
Number
Bitwise not, i.e. (~n)
or(n1,n2)
Number
, Number
Number
Bitwise or, i.e. (n1 | n2)
shiftLeft(n,c)
Number
, Number
Number
Bitwise shift left, i.e. (n << c)
shiftRight(n,c)
Number
, Number
Number
Bitwise shift right, i.e. (n >> c)
testBit(n,i)
Number
, Number
boolean
Test if bit i
is set in integer n
, i.e. ((n & (1 << i)) != 0)
xor(n1,n2)
Number
, Number
Number
Bitwise xor, i.e. (n1 ^ n2)
Tip
All number arguments will be converted to BigInteger
values for the bitwise operations, and BigInteger
values are returned.
The following functions deal with datum streams. The latest()
and offset()
functions give you access to recently-captured datum from any SolarNode source, so you can refer to any datum stream being generated in SolarNode. They return another datum expression root object, which means you have access to all the variables and functions documented on this page with them as well.
hasLatest(source)
String
boolean
Returns true
if a datum with source ID source
is available via the latest(source)
function. hasLatestMatching(pattern)
String
Collection<DatumExpressionRoot>
Returns true
if latestMatching(pattern)
returns a non-empty collection. hasLatestOtherMatching(pattern)
String
Collection<DatumExpressionRoot>
Returns true
if latestOthersMatching(pattern)
returns a non-empty collection. hasMeta()
boolean
Returns true
if metadata for the current source ID is available. hasMeta(source)
String
boolean
Returns true
if datumMeta(source)
would return a non-null value. hasOffset(offset)
int
boolean
Returns true
if a datum is available via the offset(offset)
function. hasOffset(source,offset)
String
, int
boolean
Returns true
if a datum with source ID source
is available via the offset(source,int)
function. latest(source)
String
DatumExpressionRoot
Provides access to the latest available datum matching the given source ID, or null
if not available. This is a shortcut for calling offset(source,0)
. latestMatching(pattern)
String
Collection<DatumExpressionRoot>
Return a collection of the latest available datum matching a given source ID wildcard pattern. latestOthersMatching(pattern)
String
Collection<DatumExpressionRoot>
Return a collection of the latest available datum matching a given source ID wildcard pattern, excluding the current datum if its source ID happens to match the pattern. meta(source)
String
DatumMetadataOperations
Get datum metadata for a specific source ID. metaMatching(pattern)
String
Collection<DatumMetadataOperations>
Find datum metadata for sources matching a given source ID wildcard pattern. offset(offset)
int
DatumExpressionRoot
Provides access to a datum from the same stream as the current datum, offset by offset
in time, or null
if not available. Offset 1
means the datum just before this datum, and so on. offset(source,offset)
String
, int
DatumExpressionRoot
Provides access to an offset from the latest available datum matching the given source ID, or null
if not available. Offset 0
represents the \"latest\" datum, 1
the one before that, and so on. SolarNode only maintains a limited history for each source, so do not rely on more than a few datum to be available via this method. This history is also cleared when SolarNode restarts. selfAndLatestMatching(pattern)
String
Collection<DatumExpressionRoot>
Return a collection of the latest available datum matching a given source ID wildcard pattern, including the current datum.\u00a0The current datum will always be the first datum returned."},{"location":"users/expressions/#math-functions","title":"Math functions","text":"Expressions support basic math operators like +
for addition and *
for multiplication. The following functions help with other math operations:
avg(collection)
Collection<Number>
Number
Calculate the average (mean) of a collection of numbers. Useful when combined with the group(pattern)
function. ceil(n)
Number
Number
Round a number larger, to the nearest integer. ceil(n,significance)
Number
, Number
Number
Round a number larger, to the nearest integer multiple of significance
. down(n)
Number
Number
Round numbers towards zero, to the nearest integer. down(n,significance)
Number
, Number
Number
Round numbers towards zero, to the nearest integer multiple of significance
. floor(n)
Number
Number
Round a number smaller, to the nearest integer. floor(n,significance)
Number
, Number
Number
Round a number smaller, to the nearest integer multiple of significance
. max(collection)
Collection<Number>
Number
Return the largest value from a set of numbers. max(n1,n2)
Number
, Number
Number
Return the larger of two numbers. min(collection)
Collection<Number>
Number
Return the smallest value from a set of numbers. min(n1,n2)
Number
, Number
Number
Return the smaller of two numbers. mround(n,significance)
Number
, Number
Number
Round a number to the nearest integer multiple of significance
. round(n)
Number
Number
Round a number to the nearest integer. round(n,digits)
Number
, Number
Number
Round a number to the nearest number with digits
decimal digits. roundDown(n,digits)
Number
, Number
Number
Round a number towards zero to the nearest number with digits
decimal digits. roundUp(n,digits)
Number
, Number
Number
Round a number away from zero to the nearest number with digits
decimal digits. sum(collection)
Collection<Number>
Number
Calculate the sum of a collection of numbers. Useful when combined with the group(pattern)
function. up(n)
Number
Number
Round numbers away from zero, to the nearest integer. up(n,significance)
Number
, Number
Number
Round numbers away from zero, to the nearest integer multiple of significance
."},{"location":"users/expressions/#node-metadata-functions","title":"Node metadata functions","text":"All the Datum Metadata functions like metadataAtPath(path)
can be invoked directly, operating on the node's own metadata instead of a datum stream's metadata.
The following functions deal with general SolarNode operations:
Function Arguments Result DescriptionisOpMode(mode)
String
boolean
Returns true
if the mode
operational mode is active."},{"location":"users/expressions/#property-functions","title":"Property functions","text":"The following functions help with expression properties (variables):
Function Arguments Result Descriptionhas(name)
String
boolean
Returns true
if a property named name
is defined. Can be used to prevent expression errors on datum property variables that are missing. group(pattern)
String
Collection<Number>
Creates a collection out of numbered properties whose name
matches the given regular expression pattern
."},{"location":"users/expressions/#expression-examples","title":"Expression examples","text":"Let's assume a captured datum like this, expressed as JSON:
{\n\"i\" : {\n\"amps\" : 4.2,\n\"volts\" : 240.0\n},\n\"a\" : {\n\"reading\" : 38009138\n},\n\"s\" : {\n\"state\" : \"Ok\"\n}\n}\n
Then here are some example Spel expressions and the results they would produce:
Expression Result Commentstate
Ok
Returns the state
status property directly, which is Ok
. datum.s['state']
Ok
Returns the state
status property explicitly. props['state']
Ok
Same result as datum.s['state']
but using the short-cut props
accessor. amps * volts
1008.0
Returns the result of multiplying the amps
and volts
properties together: 4.2 \u00d7 240.0 = 1008.0
."},{"location":"users/expressions/#datum-stream-history","title":"Datum stream history","text":"Building on the previous example datum, let's assume an earlier datum for the same source ID had been collected with these properties (the classifications have been omitted for brevity):
{\n\"amps\" : 3.1,\n\"volts\" : 241.0,\n\"reading\" : 38009130,\n\"state\" : \"Ok\"\n}\n
Then here are some example expressions and the results they would produce given the original datum example:
Expression Result CommenthasOffset(1)
true
Returns true
because of the earlier datum that is available. hasOffset(2)
false
Returns false
because only one earlier datum is available. amps - offset(1).amps
1.1
Computes the difference between the current and previous amps
properties, which is 4.2 - 3.1 = 1.1
."},{"location":"users/expressions/#other-datum-stream-history","title":"Other datum stream history","text":"Other datum stream histories collected by SolarNode can also be accessed via the offset(source,offset)
function. Let's assume SolarNode is collecting a datum stream for the source ID solar
, and had amassed the following history, in newest-to-oldest order:
[\n{\"amps\" : 6.0, \"volts\" : 240.0 },\n{\"amps\" : 5.9, \"volts\" : 239.9 }\n]\n
Then here are some example expressions and the results they would produce given the original datum example:
Expression Result CommenthasLatest('solar')
true
Returns true
because a datum for source solar
is available. hasOffset('solar',2)
false
Returns false
because only one earlier datum from the latest with source solar
is available. (amps * volts) - (latest('solar').amps * latest('solar').volts)
432.0
Computes the difference in energy between the latest solar
datum and the current datum, which is (6.0 \u00d7 240.0) - (4.2 \u00d7 240.0) = 432.0
. If we add another datum stream for the source ID solar1
like this:
[\n{\"amps\" : 1.0, \"volts\" : 240.0 }\n]\n
If we also add another datum stream for the source ID solar2
like this:
[\n{\"amps\" : 3.0, \"volts\" : 240.0 }\n]\n
Then here are some example expressions and the results they would produce given the previous datum examples:
Expression Result Commentsum(latestMatching('solar*').?[amps>1].![amps * volts])
2160
Returns the sum power of the latest solar
and solar2
datum. The solar1
power is omitted because its amps
property is not greater than 1
, so we end up with (6 * 240) + (3 * 240) = 2160
."},{"location":"users/expressions/#datum-metadata","title":"Datum metadata","text":"Some functions return DatumMetadataOperations
objects. These objects provide metadata for things like a specific source ID on SolarNode.
The properties available on datum metadata objects are:
Property Type Descriptionempty
boolean
Is true
if the metadata does not contain any values. info
Map<String,Object>
Simple map based access to the general metadata (e.g. the keys of the m
metadata map). infoKeys
Set<String>
The set of general metadata keys available (e.g. the keys of the m
metadata map). propertyInfoKeys
Set<String>
The set of property metadata keys available (e.g. the keys of the pm
metadata map). tags
Set<String>
A set of tags associated with the metadata."},{"location":"users/expressions/#datum-metadata-general-info-functions","title":"Datum metadata general info functions","text":"The following functions available on datum metadata objects support access to the general metadata (e.g. the m
metadata map):
getInfo(key)
String
Object
Get the general metadata value for a specific key. getInfoNumber(key)
String
Number
Get a general metadata value for a specific key as a Number
. Other more specific number value functions are also available such as getInfoInteger(key)
or getInfoBigDecimal(key)
. getInfoString(key)
String
String
Get a general metadata value for a specific key as a String
. hasInfo(key)
String
boolean
Returns true
if a non-null general metadata value exists for the given key."},{"location":"users/expressions/#datum-metadata-property-info-functions","title":"Datum metadata property info functions","text":"The following functions available on datum metadata objects support access to the property metadata (e.g. the pm
metadata map):
getPropertyInfo(prop)
String
Map<String,Object>
Get the property metadata for a specific property. getInfoNumber(prop,key)
String
, String
Number
Get a property metadata value for a specific property and key as a Number
. Other more specific number value functions are also available such as getInfoInteger(prop,key)
or getInfoBigDecimal(prop,key)
. getInfoString(prop,key)
String
, String
String
Get a property metadata value for a specific property and key as a String
. hasInfo(prop,key)
String
, String
String
Returns true
if a non-null property metadata value exists for the given property and key."},{"location":"users/expressions/#datum-metadata-global-functions","title":"Datum metadata global functions","text":"The following functions available on datum metadata objects support access to both general and property metadata:
Function Arguments Result DescriptiondiffersFrom(metadata)
DatumMetadataOperations
boolean
Returns true
if the given metadata has any different values than the receiver. hasTag(tag)
String
boolean
Returns true
if the given tag is available. metadataAtPath(path)
String
Object
Get the metadata value at a metadata key path. hasMetadataAtPath(path)
String
boolean
Returns true
if metadataAtPath(path)
would return a non-null value."},{"location":"users/getting-started/","title":"Getting Started","text":"This section describes how to get SolarNode running on a device. You will need to configure your device as a SolarNode and associate your SolarNode with SolarNetwork.
Tip
You might find it helpful to read through this entire guide before jumping in. There are screen shots and tips provided to help you along the way.
"},{"location":"users/getting-started/#get-your-device-ready-to-use","title":"Get your device ready to use","text":"SolarNode can run on a variety of devices. To get started using SolarNode, you must download the appropriate SolarNodeOS image for your device. SolarNodeOS is a complete operating system tailor made for SolarNode. Choose the SolarNodeOS image for the device you want to run SolarNode on and then copy that image to your device media (typically an SD card).
"},{"location":"users/getting-started/#choose-your-device","title":"Choose your device","text":"Raspberry PiOrange PiSomething ElseThe Raspberry Pi is the best supported option for general SolarNode deployments. Models 3 or later, Compute Module 3 or later, and Zero 2 W or later are supported. Use a tool like Etcher or Raspberry Pi Imager to copy the image to an SD card (minimum size is 2 GB, 4 GB recommended).
Download SolarNodeOS for Raspberry Pi
The Orange Pi models Zero and Zero Plus are supported. Use a tool like Etcher to copy the image to an SD card (minimum size is 1 GB, 4 GB recommended).
Download SolarNodeOS for Orange Pi
Looking for SolarNodeOS for a device not listed here? Reach out to us through email or Slack to see if we can help!
"},{"location":"users/getting-started/#configure-your-network","title":"Configure your network","text":"SolarNode needs a network connection. If your device has an ethernet port, that is the most reliable way to get started: just plug in your ethernet cable and off you go!
If you want to use WiFi, or would like more detailed information about SolarNode's networking options, see the Networking sections.
"},{"location":"users/getting-started/#power-it-on","title":"Power it on","text":"Insert your SD card (or other device media) into your device, and power it on. While it starts up, proceed with the next steps.
"},{"location":"users/getting-started/#associate-your-solarnode-with-solarnetwork","title":"Associate your SolarNode with SolarNetwork","text":"Every SolarNode must be associated (registered) with a SolarNetwork account. To associate a SolarNode, you must:
If you do not already have a SolarNetwork account, register for one and then log in.
"},{"location":"users/getting-started/#generate-a-solarnode-invitation","title":"Generate a SolarNode invitation","text":"Click on the My Nodes link. You will see an Invite New SolarNode button, like this:
Click the Invite New SolarNode button, then fill in and submit the form that appears and select your time zone by clicking on the world map:
The generated SolarNode invitation will appear next.
Select and copy the entire invitation. You will need to paste that into the SolarNode setup screen in the next section.
"},{"location":"users/getting-started/#accept-the-invitation-on-solarnode","title":"Accept the invitation on SolarNode","text":"Open the SolarNode Setup app in your browser. The URL to use might be http://solarnode/ or it might be an IP address like http://192.168.1.123
. See the Networking section for more information. You will be greeted with an invitation acceptance form into which you can paste the invitation you generated in SolarNetwork. The acceptance process goes through the following steps:
First you submit the invitation in the acceptance form.
Next you preview the invitation details.
Note
The expected SolarNetwork Service value shown in this step will be in.solarnetwork.net
.
Finally, confirm the invitation. This step contacts SolarNetwork and completes the association process.
Warning
Ensure you provide a Certificate Password on this step, so SolarNetwork can generate a security certificate for your SolarNode.
When these steps are completed, SolarNetwork will have assigned your SolarNode a unique identifier known as your Node ID. A random SolarNode login password will also have been generated; you are given the opportunity to easily change that if you prefer.
"},{"location":"users/getting-started/#next-steps","title":"Next steps","text":"Learn more about the SolarNode Setup app.
"},{"location":"users/logging/","title":"Logging","text":"Logging in SolarNode is configured in the /etc/solarnode/log4j2.xml
file, which is in the log4j configuration format. The default configuration in SolarNodeOS sets the overall verbosity to INFO
and logs to a temporary storage area /run/solarnode/log/solarnode.log
.
Log messages have the following general properties:
Component Example Description Timestamp2022-03-15 09:05:37,029
The date/time the message was generated. Note the format of the timestamp depends on the logging configuration; the SolarNode default is shown in this example. Level INFO
The severity/verbosity of the message (as determined by the developer). This is an enumeration, and from least-to-most severe: TRACE
, DEBUG
, INFO
, WARN
, ERROR
. The level of a given logger allows messages with that level or higher to be logged, while lower levels are skipped. The default SolarNode configuration sets the overall level to INFO
, so TRACE
and DEBUG
messages are not logged. Logger ModbusDatumDataSource
A category or namespace associated with the message. Most commonly these equate to Java class names, but can be any value and is determined by the developer. Periods in the logger name act as a delimiter, forming a hierarchy that can be tuned to log at different levels. For example, given the default INFO
level, configuring the net.solarnetwork.node.io.modbus
logger to DEBUG
would turn on debug-level logging for all loggers in the Modbus IO namespace. Note that the default SolarNode configuration logs just a fixed number of the last characters of the logger name. This can be changed in the configuration to log more (or all) of the name, as desired. Message Error reading from device.
The message itself, determined by the developer. Exception Some messages include an exception stack trace, which shows the runtime call tree where the exception occurred."},{"location":"users/logging/#logger-namespaces","title":"Logger namespaces","text":"The Logger component outlined in the previous section allows a lot of flexibility to configure what gets logged in SolarNode. Setting the level on a given namespace impacts that namespace as well as all namespaces beneath it, meaning all other loggers that share the same namespace prefix.
For example, imagine the following two loggers exist in SolarNode:
net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork
net.solarnetwork.node.io.modbus.util.ModbusUtils
Given the default configuration sets the default level to INFO
, we can turn on DEBUG
logging for both of these by adding a <Logger>
line like the following within the <Loggers>
element:
<Logger name=\"net.solarnetwork.node.io.modbus\" level=\"debug\"/>\n
That turns on DEBUG
for both loggers because they are both children of the net.solarnetwork.node.io.modbus
namespace. We could turn on TRACE
logging for one of them like this:
<Logger name=\"net.solarnetwork.node.io.modbus\" level=\"debug\"/>\n<Logger name=\"net.solarnetwork.node.io.modbus.serial\" level=\"trace\"/>\n
That would also turn on TRACE
for any other loggers in the net.solarnetwork.node.io.modbus.serial
namespace. You can limit the configuration all the way down to a full logger name if you like, for example:
<Logger name=\"net.solarnetwork.node.io.modbus\" level=\"debug\"/>\n<Logger name=\"net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork\" level=\"trace\"/>\n
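The hierarchy rules above can be sketched in Python: a logger's effective level comes from the most specific configured namespace prefix, falling back to the root level (a simplified illustration of how log4j resolves levels, not log4j itself; the `SomeService` logger name is made up):

```python
def effective_level(logger, configured, root_level="INFO"):
    """Walk up the dot-delimited logger name until a configured level is found."""
    name = logger
    while name:
        if name in configured:
            return configured[name]
        name = name.rpartition(".")[0]  # drop the last namespace component
    return root_level

config = {
    "net.solarnetwork.node.io.modbus": "DEBUG",
    "net.solarnetwork.node.io.modbus.serial": "TRACE",
}
print(effective_level("net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork", config))  # TRACE
print(effective_level("net.solarnetwork.node.io.modbus.util.ModbusUtils", config))            # DEBUG
print(effective_level("net.solarnetwork.node.runtime.SomeService", config))                   # INFO (hypothetical logger)
```
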
"},{"location":"users/logging/#logging-ui","title":"Logging UI","text":"The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file. See the Setup App / Settings / Logging page for more information.
"},{"location":"users/logging/#storage-constraints","title":"Storage constraints","text":"The default SolarNode configuration automatically rotates log files based on size, and limits the number of historic log files kept around, so that its associated storage space is not filled up. When a log file reaches the file limit, it is renamed to include a -i.log
suffix, where i
is an offset from the current log. The default configuration sets the maximum log size to 1 MB and limits the number of historic files to 3.
You can also adjust how much history is saved by tweaking the <SizeBasedTriggeringPolicy>
and <DefaultRolloverStrategy>
configuration. For example to change to a limit of 9 historic files of at most 5 MB each, the configuration would look like this:
<Policies>\n<SizeBasedTriggeringPolicy size=\"5 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n
"},{"location":"users/logging/#persistent-logging","title":"Persistent logging","text":"By default SolarNode logs to temporary (RAM) storage that is discarded when the node reboots. The configuration can be changed so that logs are written directly to persistent storage if you would like to have the logs persisted across reboots, or would like to preserve more log history than can be stored in the temporary storage area.
To make this change, update the <RollingFile>
element's fileName
and/or filePattern
attributes to point to a persistent filesystem. SolarNode already has write permission to the /var/lib/solarnode/var
directory, so an easy location to use is /var/lib/solarnode/var/log
, like this:
<RollingFile name=\"File\"\nimmediateFlush=\"false\"\nfileName=\"/var/lib/solarnode/var/log/solarnode.log\"\nfilePattern=\"/var/lib/solarnode/var/log/solarnode-%i.log\">\n
Warning
This configuration can add a lot of stress to the node's storage medium, and may shorten its useful life. Consumer-grade SD cards in particular can fail quickly if SolarNode is writing a lot of information, such as verbose logging. Use this configuration with caution.
"},{"location":"users/logging/#logging-example-split-across-multiple-files","title":"Logging example: split across multiple files","text":"Sometimes it can be useful to turn on verbose logging for some area of SolarNode, but have those messages go to a different file so they don't clog up the main solarnode.log
file. This can be done by configuring additional appender configurations.
The following example logging configuration creates the following log files:
/var/log/solarnode/solarnode.log
- the main log/var/log/solarnode/filter.log
- filter logging/var/log/solarnode/mqtt-solarin.log
- MQTT wire logging to SolarIn/var/log/solarnode/mqtt-solarflux.log
- MQTT wire logging to SolarFluxFirst you must create the /var/log/solarnode
directory and give SolarNode permission to write there:
sudo mkdir /var/log/solarnode\nsudo chgrp solar /var/log/solarnode\nsudo chmod g+w /var/log/solarnode\n
Then edit the /etc/solarnode/log4j2.xml
file to hold the following (adjust according to your needs):
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Configuration status=\"WARN\">\n<Appenders>\n<RollingFile name=\"File\"\nimmediateFlush=\"true\"\nfileName=\"/var/log/solarnode/solarnode.log\"\nfilePattern=\"/var/log/solarnode/solarnode-%i.log\"><!-- (1)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"5 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n<RollingFile name=\"Filter\"\nimmediateFlush=\"false\"\nfileName=\"/var/log/solarnode/filter.log\"\nfilePattern=\"/var/log/solarnode/filter-%i.log\"><!-- (2)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"10 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n<RollingFile name=\"MQTT\"\nimmediateFlush=\"false\"\nfileName=\"/var/log/solarnode/mqtt.log\"\nfilePattern=\"/var/log/solarnode/mqtt-%i.log\"><!-- (3)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"10 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n<RollingFile name=\"Flux\"\nimmediateFlush=\"false\"\nfileName=\"/var/log/solarnode/flux.log\"\nfilePattern=\"/var/log/solarnode/flux-%i.log\"><!-- (4)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"10 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n</Appenders>\n<Loggers>\n<Logger name=\"org.eclipse.gemini.blueprint.blueprint.container.support\" level=\"warn\"/>\n<Logger name=\"org.eclipse.gemini.blueprint.context.support\" level=\"warn\"/>\n<Logger name=\"org.eclipse.gemini.blueprint.service.importer.support\" level=\"warn\"/>\n<Logger name=\"org.springframework.beans.factory\" level=\"warn\"/>\n\n<Logger name=\"net.solarnetwork.node.datum.filter\" level=\"trace\" additivity=\"false\">\n<AppenderRef ref=\"Filter\"/><!-- (5)! 
-->\n</Logger>\n\n<Logger name=\"net.solarnetwork.mqtt.queue\" level=\"trace\" additivity=\"false\">\n<AppenderRef ref=\"MQTT\"/>\n</Logger>\n\n<Logger name=\"net.solarnetwork.mqtt.influx\" level=\"trace\" additivity=\"false\">\n<AppenderRef ref=\"Flux\"/>\n</Logger>\n\n<Root level=\"info\">\n<AppenderRef ref=\"File\"/><!-- (6)! -->\n</Root>\n</Loggers>\n</Configuration>\n
File
appender is the \"main\" application log where most logs should go.Filter
appender is where we want net.solarnetwork.node.datum.filter
messages to go.MQTT
appender is where we want net.solarnetwork.mqtt.queue
messages to go.Flux
appender is where we want net.solarnetwork.mqtt.influx
messages to go.additivity=\"false\"
and add the <AppenderRef>
element that references the specific appender name we want the log messages to go to. The additivity=false
attribute means the log messages will only go to the Filter
appender, instead of also going to the root-level File
appender.Filter
, MQTT
, and Flux
appenders above.The various <AppenderRef>
elements configure the appender name to write the messages to.
The various additivity=\"false\"
attributes disable appender additivity which means the log message will only be written to one appender, instead of being written to all configured appenders in the hierarchy (for example the root-level appender).
The immediateFlush=\"false\"
turns on buffered logging, which means log messages are buffered in RAM before being flushed to disk. This is more forgiving to the disk, at the expense of a delay before the messages appear.
MQTT wire logging means the raw MQTT packets send and received over MQTT connections will be logged in an easy-to-read but very verbose format. For the MQTT wire logging to be enabled, it must be activated with a special configuration file. Create the /etc/solarnode/services/net.solarnetwork.common.mqtt.netty.cfg
file with this content:
wireLogging = true\n
"},{"location":"users/logging/#mqtt-wire-log-namespace","title":"MQTT wire log namespace","text":"MQTT wire logs use a namespace prefix net.solarnetwork.mqtt.
followed by the connection's host name or IP address and port. For example SolarIn messages would use net.solarnetwork.mqtt.queue.solarnetwork.net:8883
and SolarFlux messages would use net.solarnetwork.mqtt.influx.solarnetwork.net:8884
.
SolarNode will attempt to automatically configure networking access from a local DHCP server. For many deployments the local network router is the DHCP server. SolarNode will identify itself with the name solarnode
, so in many cases you can reach the SolarNode setup app at http://solarnode/.
To find what network address SolarNode is using, you have a few options:
"},{"location":"users/networking/#consult-your-network-router","title":"Consult your network router","text":"Your local network router is very likely to have a record of SolarNode's network connection. Log into the router's management UI and look for a device named solarnode
.
If your SolarNode supports connecting a keyboard and screen, you can log into the SolarNode command line console and run ip -br addr
to print out a brief summary of the current networking configuration:
$ ip -br addr\n\nlo UNKNOWN 127.0.0.1/8 ::1/128\neth0 UP 192.168.0.254/24 fe80::e65f:1ff:fed1:893c/64\nwlan0 DOWN\n
In the previous output, SolarNode has an ethernet device eth0
with a network address 192.168.0.254
and a WiFi device wlan0
that is not connected. You could reach that SolarNode at http://192.168.0.254/
.
Tip
You can get more details by running ip addr
(without the -br
argument).
If your device will use WiFi for network access, you will need to configure the network name and credentials to use. You can do that by creating a wpa_supplicant.conf
file on the SolarNodeOS media (typically an SD card). For Raspberry Pi media, you can insert the SD card into your computer and it will mount the appropriate drive for you.
Once mounted use your favorite text editor to create a wpa_supplicant.conf
file with content like this:
country=nz\nnetwork={\n ssid=\"wifi network name here\"\n psk=\"wifi password here\"\n}\n
Change the country=nz
to match your own country code.
SolarNode supports a concept called operational modes. Modes are simple names like quiet
and hyper
that can be either active or inactive. Any number of modes can be active at a given time. In theory both quiet
and hyper
could be active simultaneously. Modes can be named anything you like.
Modes can be used by SolarNode components to alter their behavior dynamically. For example a data source component might stop collecting data from a set of data sources if the quiet
mode is active, or start collecting data at an increased frequency if hyper
is active. Some components might require specific names, which are described in their documentation. Components that allow configuring a required operational mode setting can also invert the requirement by adding a !
prefix to the mode name, for example !hyper
can be thought of as \"when hyper
is not active\". You can also specify exactly !
to match only when no mode is active.
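The matching rules above can be sketched in a few lines of Python (a hypothetical helper for illustration only, not part of SolarNode):

```python
def mode_requirement_met(required: str, active_modes: set) -> bool:
    """Evaluate an operational-mode requirement against the set of active modes.

    Rules as described in the text above:
    - "!"      matches only when no mode is active
    - "!name"  matches when the named mode is NOT active
    - "name"   matches when the named mode is active
    """
    if required == "!":
        return not active_modes
    if required.startswith("!"):
        return required[1:] not in active_modes
    return required in active_modes

print(mode_requirement_met("hyper", {"quiet", "hyper"}))  # True
print(mode_requirement_met("!hyper", {"quiet"}))          # True
print(mode_requirement_met("!", set()))                   # True
```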
Datum Filters also make use of operational modes, to toggle filters on and off dynamically.
"},{"location":"users/op-modes/#automatic-expiration","title":"Automatic expiration","text":"Operational modes can be activated with an associated expiration date. The mode will remain active until the expiration date, at which time it will be automatically deactivated. A mode can always be manually deactivated before its associated expiration date.
"},{"location":"users/op-modes/#operational-modes-management-api","title":"Operational Modes management API","text":"The SolarUser Instruction API can be used to toggle operational modes on and off. The EnableOperationalModes
instruction activates modes and DisableOperationalModes
deactivates them.
SolarNode supports placeholders in some setting values, such as datum data source IDs. These allow you to define a set of parameters that can be consistently applied to many settings.
For example, imagine you manage many SolarNode devices across different buildings or sites. You'd like to follow a naming convention for your datum data source ID values that include a code for the building the node is deployed in, along the lines of /BUILDING/DEVICE
. You could define a placeholder building
and then configure the source IDs like /{building}/device
. On each node you'd define the building
placeholder with a building-specific value, so at runtime the nodes would resolve actual source ID values with those names replacing the {building}
placeholder, for example /OFFICE1/meter
.
Placeholders are written using the form {name:default}
where name
is the placeholder name and default
is an optional default value to apply if no placeholder value exists for the given name. If a default value is not needed, omit the colon
so the placeholder becomes just {name}
.
For example, imagine a set of placeholder values like
Name Value building OFFICE1 room BREAK Here are some example settings with placeholders and what they would resolve to:
Input Resolved value/{building}/meter
/OFFICE1/meter
/{building}/{room}/temp
/OFFICE1/BREAK/temp
/{building}/{floor:1}/{room}
/OFFICE1/1/BREAK
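The resolution rules shown in the table above can be sketched with a small regular-expression substitution (an illustrative helper, not SolarNode's actual implementation):

```python
import re

# Matches {name} or {name:default}
_PLACEHOLDER = re.compile(r"\{([^}:]+)(?::([^}]*))?\}")

def resolve_placeholders(text: str, values: dict) -> str:
    """Replace {name} and {name:default} tokens using the supplied values."""
    def repl(match):
        name, default = match.group(1), match.group(2)
        if name in values:
            return values[name]
        # Fall back to the default if given, otherwise leave the token as-is
        return default if default is not None else match.group(0)
    return _PLACEHOLDER.sub(repl, text)

values = {"building": "OFFICE1", "room": "BREAK"}
print(resolve_placeholders("/{building}/meter", values))             # /OFFICE1/meter
print(resolve_placeholders("/{building}/{floor:1}/{room}", values))  # /OFFICE1/1/BREAK
```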
"},{"location":"users/placeholders/#static-placeholder-configuration","title":"Static placeholder configuration","text":"SolarNode will look for placeholder values defined in properties files stored in the conf/placeholders.d
directory by default. In SolarNodeOS this is the /etc/solarnode/placeholders.d
directory.
Warning
These files are only loaded once, when SolarNode starts up. If you make changes to any of them then SolarNode must be restarted.
The properties file names must have a .properties
extension and follow Java properties file syntax. Put simply, each file contains lines like
name = value\n
where name
is the placeholder name and value
is its associated value. The example set of placeholder values shown previously could be defined in a /etc/solarnode/placeholders.d/mynode.properties
file with this content:
building = OFFICE1\nroom = BREAK\n
"},{"location":"users/placeholders/#dynamic-placeholder-configuration","title":"Dynamic placeholder configuration","text":"SolarNode also supports storing placeholder values as Settings using the key placeholder
. The SolarUser /instruction/add API can be used with the UpdateSetting topic to modify the placeholder values as needed. The type
value is the placeholder name and the value
the placeholder value. Placeholders defined this way have priority over any similarly-named placeholders defined statically. Changes take effect as soon as SolarNode receives and processes the instruction.
Warning
Once a placeholder value is set via the UpdateSetting
instruction, the same value defined as a static placeholder will be overridden and changes to the static value will be ignored.
For example, to set the floor
placeholder to 2
on node 123, you could make a POST
request to /solaruser/api/v1/sec/instr/add/UpdateSetting
with the following JSON body:
{\n\"nodeId\": 123,\n\"params\":{\n\"key\": \"placeholder\",\n\"type\": \"floor\",\n\"value\": \"2\"\n}\n}\n
Multiple settings can be updated as well, using a different syntax. Here's a request that sets both floor
to 2
and room
to MEET
:
{\"nodeId\":123,\"parameters\":[\n{\"name\":\"key\", \"value\":\"placeholder\"},\n{\"name\":\"type\", \"value\":\"floor\"},\n{\"name\":\"value\", \"value\":\"2\"},\n{\"name\":\"key\", \"value\":\"placeholder\"},\n{\"name\":\"type\", \"value\":\"room\"},\n{\"name\":\"value\", \"value\":\"MEET\"}\n]}\n
"},{"location":"users/remote-access/","title":"Remote Access","text":"SolarSSH is SolarNetwork's method of connecting to SolarNode devices over the internet even when those devices are not directly reachable due to network firewalls or routing rules. It uses the Secure Shell Protocol (SSH) to ensure your connection is private and secure.
SolarSSH does not maintain permanently open SSH connections to SolarNode devices. Instead, connections are established on demand, when you need them. This lets you connect to a SolarNode when you need to perform maintenance, without requiring SolarNode to keep an SSH connection to SolarSSH open at all times.
In order to use SolarSSH, you will need a User Security Token to use for authentication.
"},{"location":"users/remote-access/#browser-connection","title":"Browser Connection","text":"You can use SolarSSH right in your browser to connect to any of your nodes.
The SolarSSH browser app
"},{"location":"users/remote-access/#choose-your-node-id","title":"Choose your node ID","text":"Click on the node ID in the page title to change what node you want to connect to.
Changing the SolarSSH node ID
Bookmark a SolarSSH page for your node ID
You can append a ?nodeId=X
to the SolarSSH browser URL https://go.solarnetwork.net/solarssh/, where X
is a node ID, to make the app start with that node ID directly. For example to start with node 123, you could bookmark the URL https://go.solarnetwork.net/solarssh/?nodeId=123.
Fill in User Security Token credentials for authentication. The node ID you are connecting to must be owned by the same account as the security token.
"},{"location":"users/remote-access/#connect","title":"Connect","text":"Click the Connect button to initiate the SolarSSH connection process. You will be presented with a dialog form to provide your SolarNodeOS system account credentials. This is only necessary if you want to connect to the SolarNodeOS system command line. If you only need to access the SolarNode Setup App, you can click the Skip button to skip this step. Otherwise, click the Login button to log into the system command line.
SolarNodeOS system account credentials form
SolarSSH will then establish the connection to your node. If you provided SolarNodeOS system account credentials previously and clicked the Login button, you will end up with a system command prompt, like this:
SolarSSH logged-in system command prompt
"},{"location":"users/remote-access/#remote-setup-app","title":"Remote Setup App","text":"Once connected, you can access the remote node's Setup App by clicking the Setup button in the top-right corner of the window. This will open a new browser tab for the Setup App.
Accessing the SolarNode Setup App through a SolarSSH web connection
"},{"location":"users/remote-access/#direct-connection","title":"Direct connection","text":"SolarSSH also supports a \"direct\" connection mode that allows you to connect using standard ssh client applications. This is a more advanced (and flexible) way of connecting to your nodes: it also lets you access other network services on the same network as the node, and provides full SSH integration including port forwarding, scp
, and sftp
support.
Direct SolarSSH connections require using a SSH client that supports the SSH \"jump\" host feature. The \"jump\" server hosted by SolarNetwork Foundation is available at ssh.solarnetwork.net:9022
.
The \"jump\" connection user is formed by combining a node ID with a user security token, separated by a :
character. The general form of a SolarSSH direct connection \"jump\" host thus looks like this:
NODE:TOKEN@ssh.solarnetwork.net:9022\n
where NODE
is a SolarNode ID and TOKEN
is a SolarNetwork user security token.
The actual SolarNode user can be any OS user (typically solar
) and the hostname can be anything. A good practice for the hostname is to use one derived from the SolarNode ID, e.g. solarnode-123
.
Using OpenSSH a complete connection command to log in as a solar
user looks like this, passing the \"jump\" host via a -J
argument:
ssh -J 'NODE:TOKEN@ssh.solarnetwork.net:9022' solar@solarnode-NODE\n
Warning
SolarNetwork security tokens often contain characters that must be escaped with a \\
character for your shell to interpret them correctly. For example, a token like 9gPa9S;Ux1X3kK)YN6&g
might need to have the ;)&
characters escaped like 9gPa9S\\;Ux1X3kK\\)YN6\\&g
.
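Rather than escaping each character by hand, you can let Python's standard-library `shlex.quote` produce a shell-safe form of the whole "jump" host argument (the token below is the made-up example from the warning above, not a real credential):

```python
from shlex import quote

node_id = 123
token = "9gPa9S;Ux1X3kK)YN6&g"  # made-up example token from the text above

# Build the jump-host argument and quote it for the shell in one step
jump_host = f"{node_id}:{token}@ssh.solarnetwork.net:9022"
command = f"ssh -J {quote(jump_host)} solar@solarnode-{node_id}"
print(command)
# ssh -J '123:9gPa9S;Ux1X3kK)YN6&g@ssh.solarnetwork.net:9022' solar@solarnode-123
```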
You will first be prompted to enter a password, which must be the token secret. You might then be prompted for the SolarNode OS user's password. Here's an example screen shot:
Accessing the SolarNode system command line through a SolarSSH direct connection
"},{"location":"users/remote-access/#shell-shortcut-function","title":"Shell shortcut function","text":"If you find yourself using SolarSSH connections frequently, a handy bash
or zsh
shell function can help make the connection process easier to remember. Here's an example that gives you a solarssh
command that accepts a SolarNode ID argument, followed by any optional SSH arguments:
function solarssh () {\nlocal node_id=\"$1\"\nif [ -z \"$node_id\" ]; then\necho 'Must provide node ID , e.g. 123'\nelse\nshift\necho \"Enter SN token secret when first prompted for password. Enter node $node_id password second.\"\nssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \\\n-o LogLevel=ERROR -o NumberOfPasswordPrompts=1 \\\n-J \"$node_id\"'SN_TOKEN_HERE@ssh.solarnetwork.net:9022' \\\n$@ solar@solarnode-$node_id\nfi\n}\n
Just replace SN_TOKEN_HERE
with a user security token. After integrating this into your shell's configuration (e.g. ~/.bashrc
or ~/.zshrc
) then you could connect to node 123
like:
solarssh 123\n
"},{"location":"users/remote-access/#putty","title":"PuTTY","text":"PuTTY is a popular tool for Windows that supports SolarSSH connections. To connect to a SolarNode using PuTTY, you must:
Configure a connection proxy to ssh.solarnetwork.net:9022
, using a username like NODE_ID:TOKEN_ID
and the corresponding token secret as the password. Optionally configure an SSH tunnel to localhost:8080
to access the SolarNode Setup App. Configure the session host name as solarnode-NODE_ID
on port 22
Open the Connection > Proxy configuration category in PuTTY, and configure the following settings:
Setting Value Proxy type SSH to proxy and use port forwarding Proxy hostnamessh.solarnetwork.net
Port 9022
Username The desired node ID, followed by a :
, followed by a user security token ID, that is: NODE_ID:TOKEN_ID
Password The user security token secret.
Configuring PuTTY connection proxy settings
"},{"location":"users/remote-access/#putty-ssh-tunnel-configuration","title":"PuTTY SSH tunnel configuration","text":"To access the SolarNode Setup App, you can configure PuTTY to forward a port on your local machine to localhost:8080
on the node. Once the SSH connection is established, you can open a browser to http://localhost:PORT
to access the SolarNode Setup App. You can use any available local port, for example if you used port 8888
then you would open a browser to http://localhost:8888
to access the SolarNode Setup App.
Open the Connection > SSH > Tunnels configuration category in PuTTY, and configure the following settings:
Setting Value Source port A free port on your machine, for example8888
. Destination localhost:8080
Add You must click the Add button to add this tunnel. You can then add other tunnels as needed.
Configuring PuTTY connection SSH tunnel settings
"},{"location":"users/remote-access/#putty-session-configuration","title":"PuTTY session configuration","text":"Finally under the Session configuration category in PuTTY, configure the Host Name and Port to connect to SolarNode. You can also provide a session name and click the Save button to save all the settings you have configured, making it easy to load them in the future.
Setting Value Host Name Does not actually matter, but a name likesolarnode-NODE_ID
is helpful, where NODE_ID
is the ID of the node you are connecting to. Port 22
Configuring PuTTY session settings
"},{"location":"users/remote-access/#putty-open-connection","title":"PuTTY open connection","text":"On the Session configuration category, click the Open button to establish the SolarSSH connection. You might be prompted to confirm the identity of the ssh.solarnetwork.net
server first. Click the Accept button if this is the case.
PuTTY host verification alert
PuTTY will connect to SolarSSH and after a short while prompt you for the SolarNodeOS user you would like to connect to SolarNode with. Typically you would use the solar
account, so you would type solar
followed by Enter. You will then be prompted for that account's password, so type that in and type Enter again. You will then be presented with the SolarNodeOS shell prompt.
PuTTY node login
Assuming you configured a SSH tunnel on port 8888
to localhost:8080
, you can now open http://localhost:8888 to access the SolarNode Setup App.
Once connected to SolarSSH, access the SolarNode Setup App in your browser.
"},{"location":"users/security-tokens/","title":"Security Tokens","text":"Some SolarNode features require SolarNetwork Security Tokens to use as authentication credentials for SolarNetwork services. Security Tokens are managed on the Security Tokens page in SolarNetwork.
The Security Tokens page in SolarNetwork
"},{"location":"users/security-tokens/#user-tokens","title":"User Tokens","text":"User Security Tokens allow access to web services that perform functions directly on your behalf, for example issue an instruction to your SolarNode.
Click the \"+\" button in the User Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new User Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
A newly generated security token \u2014 make sure to save the token in a safe place
"},{"location":"users/security-tokens/#data-tokens","title":"Data Tokens","text":"Data Security Tokens allow access to web services that query the data collected by your SolarNodes.
Click the \"+\" button in the Data Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new Data Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
"},{"location":"users/security-tokens/#security-policy","title":"Security Policy","text":"Security tokens can be configured with a Security Policy that restricts the types of functions or data the token has permission to access.
Policy User Node Description API Paths Restrict the token to specific API methods. Expiry Make the token invalid after a specific date. Minimum Aggregation Restrict the data aggregation level allowed. Node IDs Restrict to specific node IDs. Refresh Allowed Allow a signing key to be refreshed while the token is still valid. Source IDs Restrict to specific datum source IDs. Node Metadata Restrict to specific node metadata. User Metadata Restrict to specific user metadata."},{"location":"users/security-tokens/#api-paths","title":"API Paths","text":"The API Paths policy restricts the token to specific SolarNet API methods, based on their URL path. If this policy is not included then all API methods are allowed.
"},{"location":"users/security-tokens/#expiry","title":"Expiry","text":"The Expiry policy makes the token invalid after a specific date. If this policy is not included, the token does not ever expire.
"},{"location":"users/security-tokens/#minimum-aggregation","title":"Minimum Aggregation","text":"The Minimum Aggregation policy restricts the token to a minimum data aggregation level. If this policy is not included, or if the minimum level is set to None, data for any aggregation level is allowed.
"},{"location":"users/security-tokens/#node-ids","title":"Node IDs","text":"The Node IDs policy restricts the token to specific node IDs. If this policy is not included, then the token has access to all node IDs in your SolarNetwork account.
"},{"location":"users/security-tokens/#node-metadata","title":"Node Metadata","text":"The Node Metadata policy restricts the token to specific portions of node-level metadata. If this policy is not included then all node metadata is allowed.
"},{"location":"users/security-tokens/#refresh-allowed","title":"Refresh Allowed","text":"The Refresh Allowed policy allows applications that are given a signing key, rather than the token's private password, to refresh the key as long as the token has not expired.
"},{"location":"users/security-tokens/#source-ids","title":"Source IDs","text":"The Source IDs policy restricts the token to specific datum source IDs. If this policy is not included, then the token has access to all source IDs in your SolarNetwork account.
"},{"location":"users/security-tokens/#user-metadata","title":"User Metadata","text":"The User Metadata policy restricts the token to specific portions of account-level metadata. If this policy is not included then all user metadata is allowed.
"},{"location":"users/settings/","title":"Settings Files","text":"SolarNode plugins support configurable properties, called settings. The SolarNode setup app allows you to manage settings through simple web forms.
Settings can also be exported and imported in a CSV format, and can be applied when SolarNode starts up with Auto Settings CSV files. Here is an example of a settings form in the SolarNode setup app:
There are 3 settings represented in that screen shot:
Tip
Nearly every form field you can edit in the SolarNode setup app represents a setting for a component in SolarNode.
In the SolarNode setup app the settings can be imported and exported from the Settings > Backups screen in the Settings Backup & Restore section:
"},{"location":"users/settings/#settings-csv-example","title":"Settings CSV example","text":"Here's an example snippet of a settings CSV file:
net.solarnetwork.node.io.modbus.1,serialParams.baudRate,19200,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.1,serialParams.parityString,even,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.1,serialParams.portName,/dev/cuaU0,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.FACTORY,1,1,0,2014-03-01 21:00:31\n
These settings all belong to a net.solarnetwork.node.io.modbus
component. The meaning of the CSV columns is discussed in the following section.
Settings files are CSV (comma separated values) files, easily exported from spreadsheet applications like Microsoft Excel or Google Sheets. The CSV must include a header row, which is skipped. All other rows will be processed as settings.
The Settings CSV format uses a quite general format and contains the following columns:
# Name Description 1 key A unique identifier for the service the setting applies to. 2 type A unique identifier for the setting with the service specified bykey
, typically using standard property syntax. 3 value The setting value. 4 flags An integer bitmask of flags associated with the setting. See the flags section for more info. 5 modified The date the setting was last modified, in yyyy-MM-dd HH:mm:ss
format. To understand the key
and type
values required for a given component requires consulting the documentation of the plugin that provides that component. You can get a pretty good picture of what the values are by exporting the settings after configuring a component in SolarNode. Typically the key
value will mirror a plugin's Java package name, and type
follows a JavaScript-like property accessor syntax representing a configurable property on the component.
The type
setting value usually defines a component property using a JavaScript-like syntax with these rules:
name
a property named name
Nested property name.subname
a nested property subname
on a parent property name
List property name[0]
the first element of an indexed list property named name
Map property name['key']
the key
element of the map property name
These rules can be combined into complex expressions, for example propIncludes[0].name
or delegate.connectionFactory.propertyFilters['UID']
.
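The accessor rules above amount to a simple tokenization, sketched here as a hypothetical helper (not SolarNode code) that splits a setting type expression into its path segments:

```python
import re

def parse_property_path(expr: str) -> list:
    """Split a JavaScript-like property accessor into path segments.

    Handles plain names ("name"), nested names ("a.b"), list indexes
    ("a[0]") and map keys ("a['key']"), per the rules described above.
    """
    segments = []
    for name, index, key in re.findall(r"([A-Za-z_]\w*)|\[(\d+)\]|\['([^']*)'\]", expr):
        if name:
            segments.append(name)          # plain or nested property name
        elif index:
            segments.append(int(index))    # list index
        else:
            segments.append(key)           # map key
    return segments

print(parse_property_path("propIncludes[0].name"))
# ['propIncludes', 0, 'name']
print(parse_property_path("delegate.connectionFactory.propertyFilters['UID']"))
# ['delegate', 'connectionFactory', 'propertyFilters', 'UID']
```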
Each setting has a set of flags that can be associated with it. The following table outlines the bit offset for each flag along with a description:
# Name Description 0 Ignore modification date If this flag is set then changes to the associated setting will not trigger a new auto backup. 1 Volatile If this flag is set then changes to the associated setting will not trigger an internal \"setting changed\" event to be broadcast.Note these are bit offsets, so the decimal value to ignore modification date is 1
, to mark as volatile is 2
, and for both is 3
.
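Because the flags are bit offsets, the decimal value stored in the CSV is the bitwise OR of the selected flags, as this short sketch shows:

```python
# Bit offsets from the flags table above
IGNORE_MODIFICATION_DATE = 1 << 0  # bit offset 0 -> decimal 1
VOLATILE = 1 << 1                  # bit offset 1 -> decimal 2

# Combining both flags gives the decimal value 3
flags = IGNORE_MODIFICATION_DATE | VOLATILE
print(flags)  # 3

# Testing whether a flag is set on a value read from the CSV
print(bool(flags & VOLATILE))  # True
```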
Many plugins provide component factories which allow you to configure any number of instances of that component. Each component instance is assigned a unique identifier when it is created. In the SolarNode setup app, the component instance identifiers appear throughout the UI:
In the previous example CSV the Modbus I/O plugin allows you to configure any number of Modbus connection components, each with their own specific settings. That is an example of a component factory. The settings CSV will include a special row to indicate that such a factory component should be activated, using a unique identifier, and then all the settings associated with that factory instance will have that unique identifier appended to its key
values.
Going back to that example CSV, this is the row that activates a Modbus I/O component instance with an identifier of 1
:
net.solarnetwork.node.io.modbus.FACTORY,1,1,0,2014-03-01 21:00:31\n
The syntax for the key
column is simply the service identifier followed by .FACTORY
. Then the type
and value
columns are both set to the same unique identifier. In this example that identifier is 1
. For all settings specific to a factory component, the key
column will be the service identifier followed by .IDENTIFIER
where IDENTIFIER
is the unique instance identifier.
Here is an example that shows two factory instances configured: Lighting
and HVAC
. Each has a different serialParams.portName
setting value configured:
net.solarnetwork.node.io.modbus.Lighting,serialParams.portName,/dev/cuaU0,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.HVAC,serialParams.portName,/dev/ttyUSB0,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.FACTORY,Lighting,Lighting,0,2014-03-01 21:00:31\nnet.solarnetwork.node.io.modbus.FACTORY,HVAC,HVAC,0,2014-03-01 21:00:31\n
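The grouping of factory rows and instance settings can be illustrated with a small sketch that parses the example rows above (a hypothetical helper assuming a header-less snippet; real exported files include a header row that must be skipped):

```python
import csv
from io import StringIO

# The example rows from the text above (no header row in this snippet)
SETTINGS_CSV = """\
net.solarnetwork.node.io.modbus.Lighting,serialParams.portName,/dev/cuaU0,0,2014-03-01 21:01:31
net.solarnetwork.node.io.modbus.HVAC,serialParams.portName,/dev/ttyUSB0,0,2014-03-01 21:01:31
net.solarnetwork.node.io.modbus.FACTORY,Lighting,Lighting,0,2014-03-01 21:00:31
net.solarnetwork.node.io.modbus.FACTORY,HVAC,HVAC,0,2014-03-01 21:00:31
"""

def factory_instances(csv_text: str) -> dict:
    """Group factory-instance settings by their instance identifier."""
    instances = {}
    for key, type_, value, flags, modified in csv.reader(StringIO(csv_text)):
        if key.endswith(".FACTORY"):
            # This row activates an instance; type_ is the identifier
            instances.setdefault(type_, {})
        else:
            # The identifier is the suffix appended to the service key
            service, _, ident = key.rpartition(".")
            instances.setdefault(ident, {})[type_] = value
    return instances

print(factory_instances(SETTINGS_CSV))
```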
"},{"location":"users/settings/#auto-settings","title":"Auto settings","text":"SolarNode settings can also be configured through Auto Settings, applied when SolarNode starts up, by placing Settings CSV files in the /etc/solarnode/auto-settings.d
directory. These settings are applied only if they don't already exist or the modified date in the settings file is newer than the date they were previously applied.
SolarFlux is the name of a real-time cloud-based service for datum using a publish/subscribe integration model. SolarNode supports publishing datum to SolarFlux and your own applications can subscribe to receive datum messages as they are published.
SolarFlux is based on MQTT. To integrate with SolarFlux you use a MQTT client application or library. See the SolarFlux Integration Guide for more information.
"},{"location":"users/solarflux/#solarflux-upload-service","title":"SolarFlux Upload Service","text":"SolarNode provides the SolarFlux Upload Service plugin that posts datum captured by SolarNode plugins to SolarFlux.
"},{"location":"users/solarflux/#mqtt-message-format","title":"MQTT message format","text":"Each datum message is published as a CBOR encoded map by default, to a MQTT topic based on the datum's source ID. This is essentially a JSON object. The map keys are the datum property names. You can configure a Datum Encoder to encode datum into a different format, by configuring a filter. For example, the Protobuf Datum Encoder supports encoding datum into Protobuf messages.
Messages are published with the MQTT retained
flag set by default, which means the most recently published datum is saved by SolarFlux. When an application subscribes to a topic it will immediately receive any retained message for that topic. In this way, SolarFlux will provide a \"most recent\" snapshot of all datum across all nodes and sources.
{\n\"_DatumType\": \"net.solarnetwork.node.domain.ACEnergyDatum\",\n\"_DatumTypes\": [\n\"net.solarnetwork.node.domain.ACEnergyDatum\",\n\"net.solarnetwork.node.domain.EnergyDatum\",\n\"net.solarnetwork.node.domain.Datum\",\n\"net.solarnetwork.node.domain.GeneralDatum\"\n],\n\"apparentPower\": 2797,\n\"created\": 1545167905344,\n\"current\": 11.800409317016602,\n\"phase\": \"PhaseB\",\n\"phaseVoltage\": 409.89337158203125,\n\"powerFactor\": 1.2999000549316406,\n\"reactivePower\": -1996,\n\"realPower\": 1958,\n\"sourceId\": \"Ph2\",\n\"voltage\": 236.9553680419922,\n\"watts\": 1958\n}\n
"},{"location":"users/solarflux/#mqtt-message-topics","title":"MQTT message topics","text":"The MQTT topic each datum is published to is derived from the node ID and datum source ID, according to this pattern:
node/N/datum/A/S\n
Pattern Element Description N
The node ID the datum was captured on A
An aggregation key; will be 0
for the \"raw\" datum captured in SolarNode S
The datum source ID; note that any leading /
in the source ID is stripped from the topic Example MQTT topicsnode/1/datum/0/Meter\nnode/2/datum/0/Building1/Room1/Light1\nnode/2/datum/0/Building1/Room1/Light2\n
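The topic pattern can be sketched as a tiny helper (illustrative only; note the leading / stripped from source IDs):

```python
def datum_topic(node_id: int, source_id: str, aggregation: str = "0") -> str:
    """Build a SolarFlux MQTT topic from a node ID and datum source ID.

    Follows the node/N/datum/A/S pattern described above; any leading
    "/" characters in the source ID are stripped from the topic.
    """
    return f"node/{node_id}/datum/{aggregation}/{source_id.lstrip('/')}"

print(datum_topic(1, "Meter"))                    # node/1/datum/0/Meter
print(datum_topic(2, "/Building1/Room1/Light1"))  # node/2/datum/0/Building1/Room1/Light1
```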
"},{"location":"users/solarflux/#log-datum-stream","title":"Log datum stream","text":"The EventAdmin
Appender is supported, and log events are turned into a datum stream and published to SolarFlux. The log timestamps are used as the datum timestamps.
The topic assigned to log events is log/
with the log name appended. Period characters (.
) in the log name are replaced with slash characters (/
). For example, a log name net.solarnetwork.node.datum.modbus.ModbusDatumDataSource
will be turned into the topic log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource
.
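That mapping is just a log/ prefix plus a dot-to-slash substitution, for example:

```python
def log_topic(logger_name: str) -> str:
    """Map a logger name to its SolarFlux log topic (illustrative helper)."""
    return "log/" + logger_name.replace(".", "/")

print(log_topic("net.solarnetwork.node.datum.modbus.ModbusDatumDataSource"))
# log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource
```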
The datum stream consists of the following properties:
Property Class. Type Descriptionlevel
s
String The log level name, e.g. TRACE
, DEBUG
, INFO
, WARN
, ERROR
, or FATAL
. priority
i
Integer The log level priority (lower values have more priority), e.g. 600
, 500
, 400
, 300
, 200
, or 100
. name
s
String The log name. msg
s
String The log message. exMsg
s
String An exception message, if an exception was included. exSt
s
String A newline-delimited list of stack trace element values, if an exception was included."},{"location":"users/solarflux/#settings","title":"Settings","text":"The SolarFlux Upload Service ships with default settings that work out-of-the-box without any configuration. There are many settings you can change to better suit your needs, however.
Each component configuration contains the following overall settings:
Setting Description Host The URI for the SolarFlux server to connect to. Normally this is influx.solarnetwork.net:8884
. Username The MQTT username to use. Normally this is solarnode
. Password The MQTT password to use. Normally this is not needed as the node's certificate is used for authentication. Exclude Properties A regular expression to match property names on all datum sources to exclude from publishing. Required Mode If configured, an operational mode that must be active for any data to be published. Maximum Republish If offline message persistence has been configured, then the maximum number of offline messages to publish in one go. See the offline persistence section for more information. Reliability The MQTT quality of service level to use. Normally the default of At most once is sufficient. Version The MQTT protocol version to use. Starting with version 5 MQTT topic aliases will be used if the server supports it, which can save a significant amount of network bandwidth when long source IDs are in use. Retained Toggle the MQTT retained message flag. When enabled the MQTT server will store the most recently published message on each topic so it is immediately available when clients connect. Wire Logging Toggle verbose logging on/off to support troubleshooting. The messages are logged to the net.solarnetwork.mqtt
topic at DEBUG
level. Filters Any number of datum filter configurations. For TLS-encrypted connections, SolarNode will make the node's own X.509 certificate available for client authentication.
"},{"location":"users/solarflux/#filter-settings","title":"Filter settings","text":"Each component can define any number of filters, which are used to manipulate the datum published to SolarFlux, such as:
The filter settings can be very useful to constrain how much data is sent to SolarFlux, for example on nodes using mobile internet connections where the cost of posting data is high.
A filter can configure a Datum Encoder to encode the MQTT message with, if you want to use a format other than the default CBOR encoding. This can be combined with a Source ID pattern to encode specific sources with specific encoders. For example when using the Protobuf Datum Encoder a single Protobuf message type is supported per encoder. If you want to encode different datum sources into different Protobuf messages, you would configure one encoder per message type, and then one filter per source ID with the corresponding encoder.
Note
All filters are applied in the order they are defined, and then the first filter with a Datum Encoder configured that matches the filter's Source ID pattern will be used to encode the datum. If no Datum Encoder is configured, the default CBOR encoding will be used.
Each filter configuration contains the following settings:
Setting Description Source ID A case-insensitive regular expression to match against datum source IDs. If defined, this filter will only be applied to datum with matching source ID values. If not defined, this filter will be applied to all datum. For example ^solar
would match any source ID starting with solar. Datum Filter The Service Name of a Datum Filter component to apply before encoding and posting datum. Required Mode If configured, an operational mode that must be active for this filter to be applied. Datum Encoder The Service Name of a Datum Encoder component to encode datum with. The encoder will be passed a java.util.Map
object with all the datum properties. If not configured then CBOR will be used. Limit Seconds The minimum number of seconds to limit datum that match the configured Source ID pattern. If datum are produced faster than this rate, they will be filtered out. Set to 0
or leave empty for no limit. Property Includes A list of case-insensitive regular expressions to match against datum property names. If configured, only properties that match one of these expressions will be included in the filtered output. For example ^watt
would match any property starting with watt. Property Excludes A list of case-insensitive regular expressions to match against datum property names. If configured, any property that matches one of these expressions will be excluded from the filtered output. For example ^temp
would match any property starting with temp. Exclusions are applied after property inclusions. Warning
The datum sourceId
and created
properties will be affected by the property include/exclude filters! If you define any include filters, you might want to add an include rule for ^created$
. You might like to have sourceId
removed to conserve bandwidth, given that value is part of the MQTT topic the datum is posted on and thus redundant.
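The include/exclude behaviour described above can be sketched in Python. This is illustrative only — SolarNode itself is a Java application, and the function and property names here are hypothetical, not part of the plugin's API:

```python
import re

def filter_properties(props, includes=None, excludes=None):
    """Sketch of the include/exclude rules: include patterns are applied
    first, then exclude patterns, all matched case-insensitively."""
    result = dict(props)
    if includes:
        pats = [re.compile(p, re.IGNORECASE) for p in includes]
        result = {k: v for k, v in result.items() if any(p.search(k) for p in pats)}
    if excludes:
        pats = [re.compile(p, re.IGNORECASE) for p in excludes]
        result = {k: v for k, v in result.items() if not any(p.search(k) for p in pats)}
    return result

# Hypothetical datum: keep watt* properties plus created, dropping the rest
# (including sourceId, which is redundant with the MQTT topic).
datum = {"sourceId": "/power/1", "created": 1700000000, "watts": 3213, "temp": 21.5}
print(filter_properties(datum, includes=["^watt", "^created$"]))
```

Note how an explicit `^created$` include rule is needed once any include patterns are configured, matching the warning above.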
By default if the connection to the SolarFlux server is down for any reason, all messages that would normally be published to the server will be discarded. This is suitable for most applications that rely on SolarFlux to view real-time status updates only, and SolarNode uploads datum to SolarNet for long-term persistence. For applications that rely on SolarFlux for more, it might be desirable to configure SolarNode to locally cache SolarFlux messages when the connection is down, and then publish those cached messages when the connection is restored. This can be accomplished by deploying the MQTT Persistence plugin.
When that plugin is available, all messages processed by this service will be saved locally when the MQTT connection is down, and then posted once the MQTT connection comes back up. Note the following points to consider:
false
.TODO
"},{"location":"users/datum-filters/","title":"Datum Filters","text":"Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. Datum Filters vary wildly in the functionality they provide; here are some examples of the things they can do:
Datum Filters do not create datum
It is helpful to remember that Datum Filters do not create datum, they only manipulate datum created elsewhere, typically by datum data sources.
There are four main places where datum filters can be applied:
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, each filter's result passed to the next available filter until all filters have been applied.
Conceptual diagram of the Datum Queue, processing datum along with filters manipulating them
At the end of processing, the datum is either
Most of the time datum are uploaded to SolarNet immediately after processing. If the network is down, or SolarNode is configured to only upload datum in batches, then datum are saved locally in SolarNode, and a periodic job will attempt to upload them later on, in batches.
See the Setup App Datum Queue section for information on how to configure the Datum Queue.
When to configure filters on the Datum Queue, as opposed to other places?
The Datum Queue is a great place to configure filters that must be processed at most once per datum, and do not depend on what time the datum is uploaded to SolarNet.
"},{"location":"users/datum-filters/#global-datum-filters","title":"Global Datum Filters","text":"Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either dircectly or indirectly with a Datum Filter Chain.
Note
Some filters support both Global and User based filter configuration, and often you can achieve the same overall result in multiple ways. Global filters are convenient for the subset of filters that support Global configuration, but for complex filtering often it can be easier to configure all filters as User filters, using the Global Datum Filter Chain as needed.
"},{"location":"users/datum-filters/#global-datum-filter-chain","title":"Global Datum Filter Chain","text":"The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
"},{"location":"users/datum-filters/#solarflux-datum-filters","title":"SolarFlux Datum Filters","text":"TODO
"},{"location":"users/datum-filters/chain/","title":"Filter Chain","text":"The Datum Filter Chain is a User Datum Filter that you configure with a list, or chain, of other User Datum Filters. When the Filter Chain executes, it executes each of the configured Datum Filters, in the order defined. This filter can be used like any other Datum Filter, allowing multiple filters to be applied in a defined order.
A Filter Chain acts like an ordered group of Datum Filters
Tip
Some services support configuring only a single Datum Filter setting. You can use a Filter Chain to apply multiple filters in those services.
"},{"location":"users/datum-filters/chain/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Available Filters A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to include that filter in the chain. Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Datum Filters The list of Service Name values of User Datum Filter components to apply to datum."},{"location":"users/datum-filters/control-updater/","title":"Control Updater Datum Filter","text":"The Control Updater Datum Filter provides a way to update controls with the result of an expression, optionally populating the expression result as a datum property.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/control-updater/#settings","title":"Settings","text":" The screen shot shows a filter that would toggle the /power/switch/1
control on/off based on the frequency
property in the /power/1
datum stream: on when the frequency is 50 or higher, off otherwise.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Control Configurations A list of control expression configurations. Each control configuration contains the following settings:
Setting Description Control ID The ID of the control to update with the expression result. Property The optional datum property to store the expression result in. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/control-updater/#expressions","title":"Expressions","text":"See the Expressions guide for general expressions reference. The root object is a DatumExpressionRoot
that lets you treat all datum properties, and filter parameters, as expression variables directly.
The Downsample Datum Filter provides a way to down-sample higher-frequency datum samples into lower-frequency (averaged) datum samples. The filter will collect a configurable number of samples and then generate a down-sampled sample where an average of each collected instantaneous property is included. In addition minimum and maximum values of each averaged property are added.
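The averaging and min/max behaviour can be sketched in Python. This is an illustration of the logic described above, not the plugin's actual Java implementation; all names are hypothetical:

```python
def downsample(samples, decimal_scale=3, min_tpl="%s_min", max_tpl="%s_max"):
    """Average a batch of instantaneous-property samples into one
    down-sampled sample, adding min/max properties per the templates."""
    keys = sorted({k for s in samples for k in s})
    out = {}
    for k in keys:
        vals = [s[k] for s in samples if k in s]
        out[k] = round(sum(vals) / len(vals), decimal_scale)
        out[min_tpl % k] = min(vals)
        out[max_tpl % k] = max(vals)
    return out

readings = [{"watts": 100}, {"watts": 105}, {"watts": 104}]
print(downsample(readings))  # {'watts': 103.0, 'watts_min': 100, 'watts_max': 105}
```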
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/downsample/#settings","title":"Settings","text":"Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Sample Count The number of samples to average over. Decimal Scale A maximum number of digits after the decimal point to round to. Set to 0
to round to whole numbers. Property Excludes A list of property names to exclude. Min Property Template A string format to use for computed minimum property values. Use %s
as the placeholder for the original property name, e.g. %s_min
. Max Property Template A string format to use for computed maximum property values. Use %s
as the placeholder for the original property name, e.g. %s_max
."},{"location":"users/datum-filters/expression/","title":"Expression Datum Filter","text":"The Expression Datum Filter provides a way to generate new properties by evaluating expressions against existing properties.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/expression/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to derive datum property values from other property values. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/expression/#expression-settings","title":"Expression settings","text":"Each expression configuration contains the following settings:
Setting Description Property The datum property to store the expression result in. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/expression/#expressions","title":"Expressions","text":"See the SolarNode Expressions guide for general expressions reference. The root object is a DatumExpressionRoot
that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
datum
Datum
A Datum
object, populated with data from all property and virtual meter configurations. props
Map<String,Object>
Simple Map based access to the properties in datum
, and transform parameters, to simplify expressions. The following methods are available:
Function Arguments Result Description has(name)
String
boolean
Returns true
if a property named name
is defined. hasLatest(source)
String
boolean
Returns true
if a datum with source ID source
is available via the latest(source)
function. latest(source)
String
DatumExpressionRoot
for the latest available datum matching the given source ID, or null
if not available."},{"location":"users/datum-filters/expression/#expression-examples","title":"Expression examples","text":"Assuming a datum sample with properties like the following:
Property Value current
7.6
voltage
240.1
status
Error
Then here are some example expressions and the results they would produce:
Expression Result Comment voltage * current
1824.76
Simple multiplication of two properties. props['voltage'] * props['current']
1824.76
Another way to write the previous expression. Can be useful if the property names contain non-alphanumeric characters, like spaces. has('frequency') ? 1 : null
null
Uses the ?:
if/then/else operator to evaluate to null
because the frequency
property is not available. When an expression evaluates to null
then no property will be added to the output samples. current > 7 or voltage > 245 ? 1 : null
1
Uses comparison and logic operators to evaluate to 1
because current
is greater than 7
. voltage * current * (hasLatest('battery') ? 1.0 - latest('battery')['soc'] : 1)
364.952
Assuming a battery
datum with a soc
property value of 0.8
then the expression resolves to 7.6 * 240.1 * (1.0 - 0.8)
."},{"location":"users/datum-filters/join/","title":"Join Datum Filter","text":"The Join Datum Filter provides a way to merge the properties of multiple datum streams into a new derived datum stream.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/join/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Output Source ID The source ID of the merged datum stream. Placeholders are allowed. Coalesce Threshold When 2
or more then wait until datum from this many different source IDs have been encountered before generating an output datum. Once a coalesced datum has been generated the tracking of input sources resets and another datum will only be generated after the threshold is met again. If 1
or less, then generate output datum for all input datum. Swallow Input If enabled, then filter out input datum after merging. Otherwise leave the input datum as-is. Source Property Mappings A list of source IDs with associated property name templates to rename the properties with. Each template must contain a {p}
parameter which will be replaced by the property names merged from datum encountered with the associated source ID. For example {p}_s1
would map an input property watts
to watts_s1
. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/join/#source-property-mappings-settings","title":"Source Property Mappings settings","text":"Each source property mapping configuration contains the following settings:
Setting Description Source ID A source ID pattern to apply the associated Mapping to. Any capture groups (parts of the pattern between()
groups) are provided to the Mapping template. Mapping A property name template with a {p}
parameter for an input property name to be mapped to a merged (output) property name. Pattern capture groups from Source ID are available starting with {1}
. For example {p}_s1
would map an input property watts
to watts_s1
. Unmapped properties are copied
If a matching source property mapping does not exist for an input datum source ID then the property names of that datum are used as-is.
"},{"location":"users/datum-filters/join/#source-mapping-examples","title":"Source mapping examples","text":"The Source ID pattern can define capture groups that will be provided to the Mapping template as numbered parameters, starting with {1}
. For example, assuming an input datum property watts
, then:
/power/main
/power/
{p}_main
watts_main
/power/1
/power/(\\d+)$
{p}_s{1}
watts_s1
/power/2
/power/(\\d+)$
{p}_s{1}
watts_s2
/solar/1
/(\\w+)/(\\d+)$
{p}_{1}{2}
watts_solar1
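The capture-group substitution shown in the table above can be sketched in Python. This is illustrative only — the function name and rule format are hypothetical, not part of the plugin's API:

```python
import re

def map_property(source_id, prop, rules):
    """Apply the first matching (source ID pattern, template) rule;
    {p} is the input property name, {1}, {2}… are capture groups."""
    for pattern, template in rules:
        m = re.search(pattern, source_id)
        if m:
            name = template.replace("{p}", prop)
            for i, group in enumerate(m.groups(), start=1):
                name = name.replace("{%d}" % i, group)
            return name
    return prop  # no mapping matched: property name is used as-is

rules = [(r"/(\w+)/(\d+)$", "{p}_{1}{2}")]
print(map_property("/solar/1", "watts", rules))  # watts_solar1
```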
To help visualize property mapping with a more complete example, let's imagine we have some datum streams being collected and the most recent datum from each look like this:
/meter/1 /meter/2 /solar/1{\n "watts": 3213\n}
{\n \"watts\": -842,\n}
{\n \"watts\" : 4055,\n \"current\": 16.89583\n}
Here are some examples of how some source mapping expressions could be defined, including how multiple mappings can be used at once:
Source ID Patterns Mappings Result /(\w+)/(\d+)
{1}_{p}{2}
{\n \"power_watts1\" : 3213,\n \"power_watts2\" : -842,\n \"solar_watts1\" : 4055,\n \"solar_current\" : 16.89583\n}
/power/(\\d+)
/solar/1
{p}_{1}
{p}
{\n \"watts_1\" : 3213,\n \"watts_2\" : -842,\n \"watts\" : 4055,\n \"current\" : 16.89583\n}"},{"location":"users/datum-filters/op-mode/","title":"Operational Mode Datum Filter","text":"
The Operational Mode Datum Filter provides a way to evaluate expressions to toggle operational modes. When an expression evaluates to true
the associated operational mode is activated. When an expression evaluates to false
the associated operational mode is deactivated.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/op-mode/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to toggle operational modes. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/op-mode/#expression-settings","title":"Expression settings","text":"Each expression configuration contains the following settings:
Setting Description Mode The operational mode to toggle. Expire Seconds If configured and greater than 0
, the number of seconds after activating the operational mode to automatically deactivate it. If not configured or 0
then the operational mode will be deactivated when the expression evaluates to false
. See below for more information. Property If configured, the datum property to store the expression result in. See below for more information. Property Type The datum property type to use if Property is configured. See below for more information. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/op-mode/#expire-setting","title":"Expire setting","text":"When configured the expression will never deactivate the operational mode directly. When evaluating the given expression, if it evaluates to true
the mode will be activated and configured to deactivate after this many seconds. If the operational mode was already active, the expiration will be extended by this many seconds.
This configuration can be thought of like a time out as used on motion-detecting lights: each time motion is detected the light is turned on (if not already on) and a timer set to turn the light off after so many seconds of no motion being detected.
Note that the operational modes service might actually deactivate the given mode a short time after the configured expiration.
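The expire behaviour can be sketched in Python. This is a simplified illustration with hypothetical names, not the actual operational modes service:

```python
class ExpiringMode:
    """Sketch of the expire setting: a true expression result activates
    the mode and pushes the deadline out; a false result never
    deactivates it directly — only expiration does."""
    def __init__(self, expire_seconds):
        self.expire_seconds = expire_seconds
        self.deadline = None  # None means the mode was never activated

    def evaluate(self, result, now):
        if result:  # activate, or extend the expiration, like a motion light
            self.deadline = now + self.expire_seconds
        return self.active(now)

    def active(self, now):
        return self.deadline is not None and now < self.deadline

mode = ExpiringMode(60)
mode.evaluate(True, now=0)    # activated, expires at t=60
mode.evaluate(False, now=30)  # a false result leaves the mode active
print(mode.active(now=59), mode.active(now=61))  # True False
```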
"},{"location":"users/datum-filters/op-mode/#property-setting","title":"Property setting","text":"A property does not have to be populated. If you provide a Property name to populate, the value of the datum property depends on property type configured:
Type Description Instantaneous The property value will be 1
or 0
based on true
and false
expression results. Status The property will be the expression result, so true
or false
. Tag A tag named as the configured property will be added if the expression is true
, or removed if false
."},{"location":"users/datum-filters/op-mode/#expressions","title":"Expressions","text":"See the Expressions section for general expressions reference. The expression must evaluate to a boolean (true
or false
) result. When it evaluates to true
the configured operational mode will be activated. When it evaluates to false
the operational mode will be deactivated (unless an expire setting has been configured).
The root object is a datum samples expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
Property Type Description datum
GeneralNodeDatum
A GeneralNodeDatum
object, populated with data from all property and virtual meter configurations. props
Map<String,Object>
Simple Map based access to the properties in datum
, and transform parameters, to simplify expressions. The following methods are available:
Function Arguments Result Description has(name)
String
boolean
Returns true
if a property named name
is defined."},{"location":"users/datum-filters/op-mode/#expression-examples","title":"Expression examples","text":"Assuming a datum sample with properties like the following:
Property Value current
7.6
voltage
240.1
status
Error
Then here are some example expressions and the results they would produce:
Expression Result Comment voltage * current > 1800
true
Since voltage * current
is 1824.76, the expression is true
. status != 'Error'
false
Since status
is Error
the expression is false
."},{"location":"users/datum-filters/parameter-expression/","title":"Parameter Expression Datum Filter","text":"The Parameter Expression Datum Filter provides a way to generate filter parameters by evaluating expressions against existing properties. The generated parameters will be available to any further datum filters in the same filter chain.
Tip
Parameters are useful as temporary variables that you want to use during datum processing but do not want to include as datum properties that get posted to SolarNet.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/parameter-expression/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to derive parameter values from other property values. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/parameter-expression/#expression-settings","title":"Expression settings","text":"Each expression configuration contains the following settings:
Setting Description Parameter The filter parameter name to store the expression result in. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/parameter-expression/#expressions","title":"Expressions","text":"See the Expressions section for general expressions reference. This filter supports Datum Expressions that let you treat all datum properties, and filter parameters, as expression variables directly.
"},{"location":"users/datum-filters/property/","title":"Property Datum Filter","text":"The Property Datum Filter provides a way to remove properties of datum. This can help if some component generates properties that you don't actually need to use.
For example, you might have a plugin that collects data from an AC power meter that captures power, energy, quality, and other properties each time a sample is taken. If you are only interested in capturing the power and energy properties you could use this component to remove all the others.
This component can also throttle individual properties over time, so that individual properties are posted less frequently than the rate at which the datum they belong to is sampled. For example a plugin for an AC power meter might collect datum once per minute, and you want to collect the energy properties of the datum every minute but the quality properties only once every 10 minutes.
The general idea for filtering properties is to configure rules that define which datum sources you want to filter, along with a list of properties to include and/or a list to exclude. All matching is done using regular expressions, which can help make your rules concise.
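The per-property throttling described above can be sketched in Python. This is an illustration with hypothetical names, not the plugin's actual Java implementation:

```python
class PropertyThrottle:
    """Sketch of Limit Seconds throttling: a property matching the
    inclusion rule passes through at most once per limit_seconds;
    samples produced faster than that are filtered out."""
    def __init__(self, limit_seconds):
        self.limit_seconds = limit_seconds
        self.last_pass = {}  # property name -> time it last passed through

    def allow(self, name, now):
        last = self.last_pass.get(name)
        if last is not None and now - last < self.limit_seconds:
            return False  # produced faster than the limit: drop it
        self.last_pass[name] = now
        return True

throttle = PropertyThrottle(600)  # quality properties at most every 10 minutes
print([throttle.allow("thd", t) for t in (0, 60, 300, 600)])  # [True, False, False, True]
```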
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/property/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Property Includes A list of property names to include, removing all others. This is a list of case-insensitive patterns to match against datum property names. If any inclusion patterns are configured then only properties matching one of these patterns will be included in datum. Any property name that does not match one of these patterns will be removed. Property Excludes A list of property names to exclude. This is a list of case-insensitive patterns to match against datum property names. If any exclusion expressions are configured then any property that matches one of these expressions will be removed. Exclusion expressions are processed after inclusion expressions when both are configured. Use the + and - buttons to add/remove property include/exclude patterns.
Each property inclusion setting contains the following settings:
Setting Description Name The property name pattern to include. Limit Seconds A throttle limit, in seconds, to apply to included properties. The minimum number of seconds to limit properties that match the configured property inclusion pattern. If properties are produced faster than this rate, they will be filtered out. Leave empty (or 0
) for no throttling."},{"location":"users/datum-filters/split/","title":"Split Datum Filter","text":"The Split Datum Filter provides a way to split the properties of a datum stream into multiple new derived datum streams.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/split/#settings","title":"Settings","text":"In the example screen shot shown above, the /power/meter/1
datum stream is split into two datum streams: /meter/1/power
and /meter/1/energy
. Properties with names containing current
, voltage
, or power
(case-insensitive) will be copied to /meter/1/power
. Properties with names containing hour
(case-insensitive) will be copied to /meter/1/energy
.
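The mapping behaviour of this example can be sketched in Python. This is an illustrative approximation only, not the plugin's implementation; the `split_datum` function and mapping dictionary are hypothetical names:

```python
import re

def split_datum(properties, source_mappings):
    """Copy each property to every derived stream whose pattern matches its name."""
    streams = {}
    for pattern, source_id in source_mappings.items():
        matched = {name: value for name, value in properties.items()
                   if re.search(pattern, name)}
        if matched:
            streams[source_id] = matched
    return streams

# Mirrors the example above; the (?i) prefix enables case-insensitive matching
mappings = {
    r"(?i)current|voltage|power": "/meter/1/power",
    r"(?i)hour": "/meter/1/energy",
}
datum = {"current": 6.2, "voltage": 240.1, "wattHours": 12345}
print(split_datum(datum, mappings))
```

A property whose name matches more than one pattern would be copied to every associated stream, per the tip below.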
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Swallow Input If enabled, then discard input datum after splitting. Otherwise leave the input datum as is. Property Source Mappings A list of property name regular expression with associated source IDs to copy matching properties to."},{"location":"users/datum-filters/split/#property-source-mappings-settings","title":"Property Source Mappings settings","text":"Use the + and - buttons to add/remove Property Source Mapping configurations.
Each property source mapping configuration contains the following settings:
Setting Description Property A property name case-sensitive regular expression to match on the input datum stream. You can enable case-insensitive matching by including a (?i)
prefix. Source ID The destination source ID to copy the matching properties to. Supports placeholders. Tip
If multiple property name expressions match the same property name, that property will be copied to all the datum streams of the associated source IDs.
"},{"location":"users/datum-filters/tariff/","title":"Time-based Tariff Datum Filter","text":"The Tariff Datum Filter provides a way to inject time-based tariff rates based on a flexible tariff schedule defined with various time constraints.
This filter is provided by the Tariff Filter plugin.
"},{"location":"users/datum-filters/tariff/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Metadata Service The Service Name of the Metadata Service to obtain the tariff schedule from. See below for more information. Metadata Path The metadata path that will resolve the tariff schedule from the configured Metadata Service. Language An IETF BCP 47 language tag to parse the tariff data with. If not configured then the default system language will be assumed. First Match If enabled, then apply only the first tariff that matches a given datum date. If disabled, then apply all tariffs that match. Schedule Cache The amount of seconds to cache the tariff schedule obtained from the configured Metadata Service. Tariff Evaluator The Service Name of a Time-based Tariff Evaluator service to evaluate each tariff to determine if it should apply to a given datum. If not configured a default algorithm is used that matches all non-empty constraints in an inclusive manner, except for the time-of-day constraint which uses an exclusive upper bound."},{"location":"users/datum-filters/tariff/#metadata-service","title":"Metadata Service","text":"SolarNode provides a User Metadata Service component that this filter can use for the Metadata Service setting. This allows you to configure the tariff schedule as user metadata in SolarNetwork and then SolarNode will download the schedule and use it as needed.
You must configure a SolarNetwork security token to use the User Metadata Service. We recommend that you create a Data security token in SolarNetwork with a limited security policy that includes an API Path of just /users/meta
and a User Metadata Path of something granular like /pm/tariffs/**
. This will give SolarNode access to just the tariff metadata under the /pm/tariffs
metadata path.
The SolarNetwork API Explorer can be used to add the necessary tariff schedule metadata to your account. For example:
"},{"location":"users/datum-filters/tariff/#tariff-schedule-format","title":"Tariff schedule format","text":"The tariff schedule obtained from the configured Metadata Service uses a simple CSV-based format that can be easily exported from a spreadsheet. Each row represents a rule that includes:
Include a header row
A header row is required because the tariff rate names are defined there. The first 4 column names are ignored.
The schedule consists of 4 time constraint columns followed by one or more tariff rate columns. Each constraint is represented as a range, in the form start - end
. Whitespace is allowed around the -
character. If the start
and end
are the same, the range may be shortened to just start
. A range can be left empty to represent all values. The time constraint columns are:
3 Weekday range An inclusive day-of-week range, with Monday being 1 and Sunday being 7, or abbreviations (Mon-Sun) or full names (Monday - Sunday). When using text names case does not matter and they will be parsed using the Language setting. 4 Time range An inclusive - exclusive time-of-day range. The time can be specified as whole hour numbers (0-24) or HH:MM
style (00:00
- 24:00
). Starting on column 5 of the tariff schedule are arbitrary rate values to add to datum when the corresponding constraints are satisfied. The name of the datum property is derived from the header row of the column, adapted according to the following rules:
Here are some examples of the header name to the equivalent property name:
Rate Header Name Datum Property Name TOUtou
Foo Bar foo_bar
This Isn't A Great Name! this_isn_t_a_great_name
"},{"location":"users/datum-filters/tariff/#example-schedule","title":"Example schedule","text":"Here's an example schedule with 4 rules and a single TOU rate (the *
stands for all values):
In CSV format the schedule would look like this:
Month,Day,Weekday,Time,TOU\nJan-Dec,,Mon-Fri,0-8,10.48\nJan-Dec,,Mon-Fri,8-24,11.00\nJan-Dec,,Sat-Sun,0-8,9.19\nJan-Dec,,Sat-Sun,8-24,11.21\n
When encoding into SolarNetwork metadata JSON, that same schedule would look like this when saved at the /pm/tariffs/schedule
path:
{\n\"pm\": {\n\"tariffs\": {\n\"schedule\": \"Month,Day,Weekday,Time,TOU\\nJan-Dec,,Mon-Fri,0-8,10.48\\nJan-Dec,,Mon-Fri,8-24,11.00\\nJan-Dec,,Sat-Sun,0-8,9.19\\nJan-Dec,,Sat-Sun,8-24,11.21\"\n}\n}\n}\n
"},{"location":"users/datum-filters/throttle/","title":"Throttle Datum Filter","text":"The Throttle Datum Filter provides a way to throttle entire datum over time, so that they are posted to SolarNetwork less frequently than a plugin that collects the data produces them. This can be useful if you need a plugin to collect data at a high frequency for use internally by SolarNode but don't need to save such high resolution of data in SolarNetwork. For example, a plugin that monitors a device and responds quickly to changes in the data might be configured to sample data every second, but you only want to capture that data once per minute in SolarNetwork.
The general idea for filtering datum is to configure rules that define which datum sources you want to filter, along with a time limit to throttle matching datum by. Any datum matching the sources that are captured faster than the time limit will be filtered and not uploaded to SolarNetwork.
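That rule amounts to a simple elapsed-time check per source ID. The following Python sketch illustrates the idea (it is not the plugin's code; the `Throttle` class is a hypothetical name):

```python
import time

class Throttle:
    """Discard datum for a source ID that arrive sooner than limit_seconds
    after the previously accepted datum (a simplified sketch)."""

    def __init__(self, limit_seconds):
        self.limit = limit_seconds
        self.last_seen = {}  # source ID -> timestamp of last accepted datum

    def accept(self, source_id, now=None):
        now = time.time() if now is None else now
        last = self.last_seen.get(source_id)
        if last is not None and (now - last) < self.limit:
            return False  # filtered out: too soon since the last upload
        self.last_seen[source_id] = now
        return True

# sample data every second or so, but only keep about one datum per minute
t = Throttle(limit_seconds=60)
print([t.accept("/power/1", now) for now in (0, 1, 30, 60, 61, 125)])
# [True, False, False, True, False, True]
```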
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/throttle/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Limit Seconds A throttle limit, in seconds, to apply to matching datum. The throttle limit is applied to datum by source ID. Before each datum is uploaded to SolarNetwork, the filter will check how long has elapsed since a datum with the same source ID was uploaded. If the elapsed time is less than the configured limit, the datum will not be uploaded."},{"location":"users/datum-filters/unchanged-property/","title":"Unchanged Property Filter","text":"The Unchanged Property Filter provides a way to discard individual datum properties that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Datum Filter for a filter that can discard entire unchanging datum (at the source ID level).
"},{"location":"users/datum-filters/unchanged-property/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Default Unchanged Max Seconds When greater than 0
then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). Use this setting to ensure a property is included occasionally, even if the property value has not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. Property Configurations A list of property settings."},{"location":"users/datum-filters/unchanged-property/#property-settings","title":"Property Settings","text":"Use the + and - buttons to add/remove Property configurations.
Each property source mapping configuration contains the following settings:
Setting Description Property A regular expression pattern to match against datum property names. All matching properties will be filtered. Unchanged Max Seconds When greater than 0
then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). This can be used to override the filter-wide Default Unchanged Max Seconds setting, or left blank to use the default value."},{"location":"users/datum-filters/unchanged/","title":"Unchanged Datum Filter","text":"The Unchanged Datum Filter provides a way to discard entire datum that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Property Filter for a filter that can discard individual unchanging properties within a datum stream.
"},{"location":"users/datum-filters/unchanged/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Unchanged Max Seconds When greater than 0
then the maximum number of seconds to refrain from publishing an unchanged datum within a single datum stream. Use this setting to ensure a datum is included occasionally, even if the datum properties have not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. Property Pattern A property name pattern that limits the properties monitored for changes. Only property names that match this expression will be considered when determining if a datum differs from the previous datum within the datum stream."},{"location":"users/datum-filters/virtual-meter/","title":"Virtual Meter Datum Filter","text":"The Virtual Meter Datum Filter provides a way to derive an accumulating \"meter reading\" value out of an instantaneous property value over time. For example, if you have an irradiance sensor that allows you to capture instantaneous W/m2 power values, you could configure a virtual meter to generate Wh/m2 energy values.
Each virtual meter works with a single input datum property, typically an instantaneous property. The derived accumulating datum property will be named after that property with the time unit suffix appended. For example, an instantaneous irradiance
property using the Hours
time unit would result in an accumulating irradianceHours
property. The value is calculated as an average between the current and the previous instantaneous property values, multiplied by the amount of time that has elapsed between the two samples.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/virtual-meter/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Virtual Meters Configure as many virtual meters as you like, using the + and - buttons to add/remove meter configurations."},{"location":"users/datum-filters/virtual-meter/#virtual-meter-settings","title":"Virtual Meter Settings","text":"The Virtual Meter settings define a single virtual meter.
Setting Description Property The name of the input datum property to derive the virtual meter values from. Property Type The type of the input datum property. Typically this will be Instantaneous
but when combined with an expression an Accumulating
property can be used. Reading Property The name of the output meter accumulating datum property to generate. Leave empty for a default name derived from Property and Time Unit. For example, an instantaneous irradiance
property using the Hours
time unit would result in an accumulating irradianceHours
property. Time Unit The time unit to record meter readings as. This value affects the name of the virtual meter reading property if Reading Property is left blank: it will be appended to the end of Property Name. It also affects the virtual meter output reading values, as they will be calculated in this time unit. Max Age The maximum time allowed between samples where the meter reading can advance. In case the node is not collecting samples for a period of time, this setting prevents the plugin from calculating an unexpectedly large reading value jump. For example if a node was turned off for a day, the first sample it captures when turned back on would otherwise advance the reading as if the associated instantaneous property had been active over that entire time. With this restriction, the node will record the new sample date and value, but not advance the meter reading until another sample is captured within this time period. Decimal Scale A maximum number of digits after the decimal point to round to. Set to 0
to round to whole numbers. Track Only On Change When enabled, then only update the previous reading date if the new reading value differs from the previous one. Rolling Average Count A count of samples to average the property value from. When set to something greater than 1
, then apply a rolling average of this many property samples and output that value as the instantaneous source property value. This has the effect of smoothing the instantaneous values to an average over the time period leading into each output sample. Defaults to 0
so no rolling average is applied. Add Instantaneous Difference When enabled, then include an output instantaneous property of the difference between the current and previous reading values. Instantaneous Difference Property The derived output instantaneous datum property name to use when Add Instantaneous Difference is enabled. By default this property will be derived from the Reading Property value with Diff
appended. Reading Value You can reset the virtual meter reading value with this setting. Note this is an advanced operation. If you submit a value for this setting, the virtual meter reading will be reset to this value such that the next datum the reading is calculated for will use this as the current meter reading. This will impact the datum stream's reported aggregate values, so you should be very sure this is something you want to do. For example if the virtual meter was at 1000
and you reset it 0
then that will appear as a -1000
drop in whatever the reading is measuring. If this occurs you can create a Reset
Datum auxiliary record to accommodate the reset value. Expressions Configure as many expressions as you like, using the + and - buttons to add/remove expression configurations."},{"location":"users/datum-filters/virtual-meter/#virtual-meter-expression-settings","title":"Virtual Meter Expression Settings","text":"A virtual meter can use expressions to customise how the output meter reading value is calculated. See the Expressions section for more information.
Setting Description Property The datum property to store the expression result in. This must match the Reading Property of a meter configuration. Keep in mind that if Reading Property is blank, the implied value is derived from Property and Time Unit. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/virtual-meter/#filter-parameters","title":"Filter parameters","text":"When the virtual meter filter is applied to a given datum, it will generate the following filter parameters, which will be available to other filters that are applied to the same datum after this filter.
Parameter Description{inputPropertyName}_diff
The difference between the current input property value and the previous input property value. The {inputPropertyName}
part of the parameter name will be replaced by the actual input property name. For example irradiance_diff
. {meterPropertyName}_diff
The difference between the current output meter property value and the previous output meter property value. The {meterPropertyName}
part of the parameter name will be replaced by the actual output meter property name. For example irradianceHours_diff
."},{"location":"users/datum-filters/virtual-meter/#expressions","title":"Expressions","text":"Expressions can be configured to calculate the output meter datum property, instead of using the default averaging algorithm. If an expression configuration exists with a Property that matches a configured (or implied) meter configuration Reading Property, then the expression will be invoked to generate the new meter reading value. See the Expressions guide for general expression language reference.
Warning
It is important to remember that the expression must calculate the next meter reading value. Typically this means it will calculate some differential value based on the amount of time that has elapsed and add that to the previous meter reading value.
"},{"location":"users/datum-filters/virtual-meter/#expression-root-object","title":"Expression root object","text":"The root object is a virtual meter expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
Property Type Descriptionconfig
VirtualMeterConfig
A VirtualMeterConfig
object for the virtual meter configuration the expression is evaluating for. datum
GeneralNodeDatum
A Datum
object, populated with data from all property and virtual meter configurations. props
Map<String,Object>
Simple Map based access to the properties in datum
, and transform parameters, to simplify expressions. currDate
long
The current datum timestamp, as a millisecond epoch number. prevDate
long
The previous datum timestamp, as a millisecond epoch number. timeUnits
decimal
A decimal number of the difference between currDate
and prevDate
in the virtual meter configuration's Time Unit, rounded to at most 12 decimal digits. currInput
decimal
The current input property value. prevInput
decimal
The previous input property value. inputDiff
decimal
The difference between the currInput
and prevInput
values. prevReading
decimal
The previous output meter property value. The following methods are available:
Function Arguments Result Descriptionhas(name)
String
boolean
Returns true
if a property named name
is defined. timeUnits(scale)
int
decimal
Like the timeUnits
property but rounded to a specific number of decimal digits."},{"location":"users/datum-filters/virtual-meter/#expression-example-time-of-use-tariff-reading","title":"Expression example: time of use tariff reading","text":"Iagine you'd like to track a time-of-use cost associated with the energy readings captured by an energy meter. The Time-based Tariff Datum Filter filter could be used to add a tou
property to each datum, and then a virtual meter expression can be used to calculate a cost
reading property. The cost
property will be an accumulating property like any meter reading, so when SolarNetwork aggregates its value over time you will see the effective cost over each aggregate time period.
Here is a screen shot of the settings used for this scenario (note how the Reading Property value matches the Expression Property value):
The important settings to note are:
Setting Notes Virtual Meter - Property The input datum property is set towattHours
because we want to track changes in this property over time. Virtual Meter - Property Type We use Accumulating
here because that is the type of property wattHours
is. Virtual Meter - Reading Property The output reading property name. This must match the Expression - Property setting. Expression - Property This must match the Virtual Meter - Reading Property we want to evaluate the expression for. Expression - Property Type Typically this should be Accumulating
since we are generating a meter reading style property. Expression - Expression The expression to evaluate. This expression looks for the tou
property and when found the meter reading is incremented by the difference between the current and previous input wattHours
property values multiplied by tou
. If tou
is not available, then the previous meter reading value is returned (leaving the reading unchanged). Assuming a datum sample with properties like the following:
Property Valuetou
11.00
currDate
1621380669005
prevDate
1621380609005
timeUnits
0.016666666667
currInput
6095574
prevInput
6095462
inputDiff
112
prevReading
1022.782
Then here are some example expressions and the results they would produce:
Expression Result CommentinputDiff / 1000
0.112
Convert the input Wh property difference to kWh. inputDiff / 1000 * tou
1.232
Multiply the input kWh by the the $/kWh tariff value to calculate the cost for the elapsed time period. prevReading + (inputDiff / 1000 * tou)
1,024.014
Add the additional cost to the previous meter reading value to reach the new meter value."},{"location":"users/setup-app/","title":"Setup App","text":"The SolarNode Setup App allows you to manage SolarNode through a web browser.
To access the Setup App, you need to know the network address of your SolarNode. In many cases you can try accessing http://solarnode/. If that does not work, you need to find the network address SolarNode is using.
Here is an example screen shot of the SolarNode Setup App:
"},{"location":"users/setup-app/certificates/","title":"Certificates","text":"TODO
"},{"location":"users/setup-app/home/","title":"Home","text":"The Home page provides you with some links to resources and shows live datum-collecting activity. As datum are collected on the node, they will appear in the Datum Properties section.
"},{"location":"users/setup-app/login/","title":"Login","text":"You must log in to SolarNode to access its functions. The login credentials will have been created when you first set up SolarNode and associated it with your SolarNetwork account. The default Username will be your SolarNetwork account email address, and the password will have been randomly generated and shown to you.
Tip
You can change your SolarNode username and password after logging in. Note these credentials are not related, or tied to, your SolarNetwork login credentials.
"},{"location":"users/setup-app/plugins/","title":"Plugins","text":"TODO
"},{"location":"users/setup-app/profile/","title":"Profile","text":"The profile menu in the top-right of the Setup App menu give you access to change you password, change you username, logout, restart, and reset SolarNode.
Tip
Your SolarNode credentials are not related, or tied to, your SolarNetwork login credentials. Changing your SolarNode username or password does not change your SolarNetwork credentials.
The profile menu in SolarNode
"},{"location":"users/setup-app/profile/#change-password","title":"Change Password","text":"Choosing the Change Password menu item will take you to a form for changing your password. Fill in your current password and then your new password, then click the Submit Password button.
The Change Password form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
"},{"location":"users/setup-app/profile/#change-username","title":"Change Username","text":"Choosing the Change Username menu item will take you to a form for changing your SolarNode username. Fill in your current password and then your new password, then click the Change Username button.
The Change Username form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
"},{"location":"users/setup-app/profile/#logout","title":"Logout","text":"Choosing the Logout menu item will immediately end your SolarNode session and log you out. You will ned to log in again to use the Setup App further.
"},{"location":"users/setup-app/profile/#restart","title":"Restart","text":"You can either restart or reboot SolarNode from the Restart SolarNode menu. A restart means the SolarNode app will restart, while a reboot means the entire SolarNodeOS device will shut down and boot up again (restarting SolarNode along the way).
You might need to restart SolarNode to pick up new plugins you've installed, and you might need to reboot SolarNode if you've attached new sensors or other devices that require operating system support.
The Restart SolarNode menu brings up this dialog.
"},{"location":"users/setup-app/profile/#reset","title":"Reset","text":"You can perform a \"factory reset\" of SolarNode to remove all your custom settings, certificate, login credentials, and so on. You also have the option to preserve some SolarNodeOS settings like WiFi credentials if you like.
The Reset SolarNode menu brings up this dialog.
"},{"location":"users/setup-app/settings/","title":"Settings","text":"The Settings section in SolarNode Setup is where you can configure all available SolarNode settings.
The section is divided into the following pages:
This page allows you to backup and restore the configuration of your SolarNode.
"},{"location":"users/setup-app/settings/backups/#settings-backup-restore","title":"Settings Backup & Restore","text":"The Settings Backup & Restore section provides a way to manage Settings Files and Settings Resources, both of which are backups for the configured settings in SolarNode.
Warning
Settings Files and Settings Resources do not include the node's certificate, login credentials, or custom plugins. See the Full Backup & Restore section for managing \"full\" backups that do include those items.
The Export button allows you to download a Settings File with the currently active configuration.
The Import button allows you to upload a previously-downloaded Settings File.
The Settings Resource menu allows you to download specialized settings files, offered by some components in SolarNode. For example the Modbus Device Datum Source plugin offers a specialized CSV file format to make configuring those components easier.
The Auto backups area will have a list of links, each of which will let you download a Settings File that SolarNode automatically created. Each link shows you the date the settings backup was created.
"},{"location":"users/setup-app/settings/backups/#full-backup-restore","title":"Full Backup & Restore","text":"The Full Backup & Restore section lets you manage SolarNode \"full\" backups. Each full backup contains a snapshot of the settings you have configured, the node's certificate, login credentials, custom plugins, and more.
The Backup Service shows a list of the available Backup Services. Each service has its own settings that must be configured for the service to operate. After changing any of the selected service's settings, click the Save Settings button to save those changes.
The Backup button allows you to create a new backup.
The Backups menu allows you to download or restore any available backup.
The Import button allows you to upload a previously downloaded backup file.
"},{"location":"users/setup-app/settings/backups/#backup-services","title":"Backup Services","text":"SolarNode supports configurable Backup Service plugins to manage the storage of backup resources.
"},{"location":"users/setup-app/settings/backups/#file-system-backup-service","title":"File System Backup Service","text":"The File System Backup Service is the default Backup Service provided by SolarNode. It saves the backup onto the node itself. In order to be able to restore your settings if the node is damaged or lost, you must download a copy of a backup using the Download button, and save the file to a safe place.
Warning
If you do not download a copy of a backup, you run the risk of losing your settings and node certificate, making it impossible to restore the node in the event of a catastrophic hardware failure.
The configurable settings of the File System Backup Service are:
Setting Description Backup Directory The folder (on the node) where the backups will be saved. Copies The number of backup copies to keep, before deleting the oldest backup."},{"location":"users/setup-app/settings/backups/#s3-backup-service","title":"S3 Backup Service","text":"The S3 Backup Service creates cloud-based backups in AWS S3 (or any compatible provider). You must configure the credentials and S3 location details to use before any backups can be created.
Note
The S3 Backup Service requires the S3 Backup Service Plugin.
The configurable settings of the S3 Backup Service are:
Setting Description AWS Token The AWS access token to authenticate with. AWS Secret The AWS access token secret to authenticate with. AWS Region The name of the Amazon region to use, for example us-west-2
. S3 Bucket The name of the S3 bucket to use. S3 Path An optional root path to use for all backup data (typically a folder location). Storage Class A supported storage class, such as STANDARD (the default), STANDARD_IA
, INTELLIGENT_TIERING
, REDUCED_REDUNDANCY
, and so on. Copies The number of backup copies to keep, before deleting the oldest backup. Cache Seconds The amount of time to cache backup metadata such as the list of available backups, in seconds."},{"location":"users/setup-app/settings/components/","title":"Components","text":"The Components page lists all the configurable multi-instance components available on your SolarNode. Multi-instance means you can configure any number of a given component, each with their own settings.
For example imagine you want to collect data from a power meter, solar inverter, and weather station, all of which use the Modbus protocol. To do that you would configure three instances of the Modbus Device component, one for each device.
Use the Manage button for any listed component to add or remove instances of that component.
An instance count badge appears next to any component with at least one instance configured.
"},{"location":"users/setup-app/settings/components/#manage-component","title":"Manage Component","text":"The component management page is shown when you click the Manage button for a multi-instance component. Each component instance's settings are independent, allowing you to integrate with multiple copies of a device or service.
For example if you connected a Modbus power meter and a Modbus solar inverter to a node, you would create two Modbus Device component instances, and configure them with settings appropriate for each device.
The component management screen allows you to add, update, and remove component instances.
"},{"location":"users/setup-app/settings/components/#add-new-instance","title":"Add new instance","text":"Add new component instances by clicking the Add new X button in the top-right, where X is the name of the component you are managing. You will be given the opportunity to assign a unique identifier to the new component instance:
When creating a new component instance you can provide a short name to identify it with.
When you add more than one component instance, the identifiers appear as clickable buttons that allow you to switch between the setting forms for each component.
Component instance buttons let you switch between each component instance.
"},{"location":"users/setup-app/settings/components/#saving-changes","title":"Saving changes","text":"Each setting will include a button that will show you a brief description of that setting.
Click for brief setting information.
After making any change, an Active value label will appear, showing the currently active value for that setting.
After making changes to any component instance's settings, click the Save All Changes button in the top-left to commit those changes.
Save All Changes works across all component instances
You can safely switch between and make changes on multiple component instance settings before clicking the Save All Changes button: your changes across all instances will be saved.
"},{"location":"users/setup-app/settings/components/#remove-or-reset-instances","title":"Remove or reset instances","text":"At the bottom of each component instance are buttons that let you delete or reset that component instance.
Buttons to delete or reset component instance.
The Delete button will remove that component instance from view; however, the settings associated with that instance are preserved. If you re-add an instance with the same identifier then the previous settings will be restored. You can think of the Delete button as disabling the component, giving you the option to \"undo\" the deletion if you like.
The Restore button will reset the component to its factory defaults, removing any settings you have customized on that instance. The instance remains visible and you can re-configure the settings as needed.
"},{"location":"users/setup-app/settings/components/#remove-all-instances","title":"Remove all instances","text":"The Remove all button in the top-right of the page allows you to remove all component instances, including any customized settings on those instances.
Warning
The Remove all action will delete all your customized settings for all the component instances you are managing. When finished it will be as if you never configured this component before.
Remove all instances with the \"Remove all\" button.
You will be asked to confirm removing all instances:
Confirming the \"Remove all\" action.
"},{"location":"users/setup-app/settings/datum-filters/","title":"Datum Filters","text":"Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. See the general Datum Filters section for more information about how datum filters work and what they are used for.
"},{"location":"users/setup-app/settings/datum-filters/#global-datum-filters","title":"Global Datum Filters","text":"Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Click the Manage button next to any Global Datum Filter component to create, update, and remove instances of that filter.
"},{"location":"users/setup-app/settings/datum-filters/#datum-queue","title":"Datum Queue","text":"All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, with each filter's result passed to the next available filter until all filters have been applied.
The Datum Queue section of the Datum Filters page shows you some processing statistics and has a couple of settings you can change:
Setting Description Delay The minimum amount of time to delay processing datum after they have been added to the queue, in milliseconds. A small amount of delay allows parallel datum collection to get processed more reliably in time-based order. The default is 200 ms and usually does not need to be changed. Datum Filter The Service Name of a Datum Filter component to process datum with. See below for more information. The Datum Filter setting allows you to configure a single Datum Filter to apply to every datum captured in SolarNode. Since you can only configure one filter, it is very common to configure a Datum Filter Chain, where you can then configure any number of other filters to apply.
"},{"location":"users/setup-app/settings/datum-filters/#global-datum-filter-chain","title":"Global Datum Filter Chain","text":"The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
Setting Description Active Global Filters A read-only list of any created Global Datum Filter component Service Name values. These filters are automatically applied, without needing to explicitly reference them in the Datum Filters list. Available User Filters A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to activate that filter. Datum Filters The list of Service Name values of User Datum Filter components to apply to datum."},{"location":"users/setup-app/settings/datum-filters/#user-datum-filters","title":"User Datum Filters","text":"User Datum Filters are not applied automatically: they must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain. This differs from Global Datum Filters, which are automatically applied to datum just before being uploaded to SolarNet.
Click the Manage button next to any User Datum Filter component to create, update, and remove instances of that filter.
"},{"location":"users/setup-app/settings/logging/","title":"Logging","text":"The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file.
Warning
When SolarNode restarts, all changes made in the Logging UI will be lost and the logger configuration will revert to whatever is configured in the logging configuration file.
The Logging page lists all the configured logger levels and lets you add new loggers and edit the existing ones using a simple form.
"},{"location":"users/setup-app/settings/op-modes/","title":"Operational Modes","text":"The SolarNode UI will show the list of active Operational Modes on the Settings > Operational Modes page. Click the + button to activate modes, and the corresponding remove button to deactivate an active mode.
The main Settings page also shows a read-only view of the active modes:
"},{"location":"users/setup-app/settings/services/","title":"Services","text":"Configurable services that are not Components appear on the Services page.
Each setting will include a button that will show you a brief description of that setting.
Click for brief setting information.
After making any change, an Active value label will appear, showing the currently active value for that setting.
In order to save your changes, you must click the Save All Changes button at the top of the page.
"},{"location":"users/setup-app/tools/","title":"Tools","text":"TODO
"},{"location":"users/setup-app/tools/command-console/","title":"Command Console","text":"SolarNode includes a Command Console page where troubleshooting commands from supporting plugins are displayed. The page shows a list of available command topics and lets you toggle the inclusion of each topic's commands at the bottom of the page.
"},{"location":"users/setup-app/tools/command-console/#modbus-commands","title":"Modbus Commands","text":"The Modbus TCP Connection and Modbus Serial Connection components support publishing mbpoll commands under a modbus command topic. The mbpoll
utility is available in SolarNodeOS; if it is not already installed, you can install it by logging in to the SolarNodeOS shell and running the following command:
sudo apt install mbpoll\n
Modbus command logging must be enabled on each Modbus Connection component by toggling the CLI Publishing setting on.
Once CLI Publishing has been enabled, every Modbus request made on that connection will generate an equivalent mbpoll
command, and those commands will be shown on the Command Console.
You can copy any logged command and paste that into a SolarNodeOS shell to execute the Modbus request and see the results.
mbpoll -g -0 -1 -m rtu -b 4800 -s 1 -P none -a 1 -o 5 -t 4:hex -r 0 -c 2 /dev/tty.usbserial-FTYS9FWO\n-- Polling slave 1...\n[0000]: 0x00FC\n[0001]: 0x0A1F\n
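The two 16-bit register values in the example output will often represent a single 32-bit quantity. As a sketch, assuming the common high-word-first register ordering (the actual ordering is device-specific), the values can be combined with shell arithmetic:

```shell
# Combine the two 16-bit registers from the example mbpoll output above
# into one 32-bit value. The high-word-first ordering used here is an
# assumption; many, but not all, Modbus devices use it.
hi=0x00FC
lo=0x0A1F
printf '%d\n' $(( (hi << 16) | lo ))   # prints 16517663
```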
"},{"location":"users/setup-app/tools/controls/","title":"Controls","text":"TODO
"},{"location":"users/sysadmin/","title":"System Administration","text":"SolarNode runs on SolarNodeOS, a Debian Linux-based operating system. If you are already familiar with Debian Linux, or one of the other Linux distributions built from Debian like Ubuntu Linux, you will find it pretty easy to get around in SolarNodeOS.
"},{"location":"users/sysadmin/#system-user-account","title":"System User Account","text":"SolarNodeOS ships with a solar
user account that you can use to log into the operating system. The default password is solar
but may have been changed by a system administrator.
Warning
The solar
user account is not related to the account you log into the SolarNode Setup App with.
To change the system user account's password, use the passwd
command.
$ passwd\nChanging password for solar.\nCurrent password:\nNew password:\nRetype new password:\npasswd: password updated successfully\n
Tip
Changing the solar
user's password is highly recommended when you first deploy a node.
Some commands require administrative permission. The solar
user can execute arbitrary commands with administrative permission by prefixing the command with sudo
. For example the reboot
command will reboot SolarNodeOS, but requires administrative permission.
$ sudo reboot\n
The sudo
command will prompt you for the solar
user's password and then execute the given command as the administrator user root
.
The solar
user can also become the root
administrator user by way of the su
command:
$ sudo su -\n
Once you have become the root
user you no longer need to use the sudo
command, as you already have administrative permissions.
SolarNodeOS comes with an SSH service active, which allows you to remotely connect and access the command line, using any SSH client.
"},{"location":"users/sysadmin/date-time/","title":"Date and Time","text":"SolarNodeOS includes date and time management functions through the timedatectl command. Run timedatectl status
to view information about the current date and time settings.
$ timedatectl status\n Local time: Fri 2023-05-26 03:41:42 BST\n Universal time: Fri 2023-05-26 02:41:42 UTC\n RTC time: n/a\n Time zone: Europe/London (BST, +0100)\nSystem clock synchronized: yes\n NTP service: active\n RTC in local TZ: no\n
"},{"location":"users/sysadmin/date-time/#changing-the-local-time-zone","title":"Changing the local time zone","text":"SolarNodeOS uses the UTC
time zone by default. If you would like to change this, use the timedatectl set-timezone command:
$ sudo timedatectl set-timezone Pacific/Auckland\n
You can list the available time zone names by running timedatectl list-timezones
.
SolarNodeOS uses the systemd-timesyncd service to synchronize the node's clock with internet time servers. Normally no configuration is necessary. You can check the status of the network time synchronization with timedatectl like:
$ timedatectl status\n Local time: Fri 2023-05-26 03:41:42 BST\n Universal time: Fri 2023-05-26 02:41:42 UTC\n RTC time: n/a\n Time zone: Europe/London (BST, +0100)\nSystem clock synchronized: yes\n NTP service: active\n RTC in local TZ: no\n
Warning
For internet time synchronization to work, SolarNode needs to access Network Time Protocol (NTP) servers, using UDP over port 123.
"},{"location":"users/sysadmin/date-time/#network-time-server-configuration","title":"Network time server configuration","text":"The NTP servers that SolarNodeOS uses are configured in the /etc/systemd/timesyncd.conf file. The default configuration uses a pool of Debian servers, which should be suitable for most nodes. If you would like to change the configuration, edit the timesyncd.conf
file and change the NTP=
line, for example
[Time]\nNTP=my.ntp.example.com\n
"},{"location":"users/sysadmin/date-time/#setting-the-date-and-time","title":"Setting the date and time","text":"In order to manually set the date and time, NTP time synchronization must be disabled with timedatectl set-ntp false
. Then you can run timedatectl set-time
to set the date:
$ sudo timedatectl set-ntp false\n$ sudo timedatectl set-time \"2023-05-26 17:30:00\"\n
If you then look at the timedatectl status
you will see that NTP has been disabled:
$ timedatectl\n Local time: Fri 2023-05-26 17:30:30 NZST\n Universal time: Fri 2023-05-26 05:30:30 UTC\n RTC time: n/a\n Time zone: Pacific/Auckland (NZST, +1200)\nSystem clock synchronized: no # (1)!\nNTP service: inactive # (2)!\nRTC in local TZ: no\n
You can re-enable NTP time synchronization like this:
Enabling NTP time synchronization$ sudo timedatectl set-ntp true\n
"},{"location":"users/sysadmin/networking/","title":"Networking","text":"SolarNodeOS uses the systemd-networkd service to manage network devices and their settings. A network device relates to a physical network hardware device or a software networking component, as recognized and named by the operating system. For example, the first available ethernet device is typically named eth0
and the first available WiFi device wlan0
.
Network configuration is stored in .network
files in the /etc/systemd/network
directory. SolarNodeOS comes with default support for ethernet and WiFi network devices.
The default 10-eth.network
file configures the default ethernet network eth0
to use DHCP to automatically obtain a network address, routing information, and DNS servers to use.
SolarNodeOS networks are configured to use DHCP by default. If you need to re-configure a network to use DHCP, change the configuration to look like this:
Ethernet network with DHCP configuration[Match]\nName=eth0\n\n[Network]\nDHCP=yes\n
Use a Name value specific to your network.
"},{"location":"users/sysadmin/networking/#static-network-configuration","title":"Static network configuration","text":"If you need to use a static network address, instead of DHCP, edit the network configuration file (for example, the 10-eth.network
file for the ethernet network), and change it to look like this:
[Match]\nName=eth0\n\n[Network]\nDNS=1.1.1.1\n\n[Address]\nAddress=192.168.3.10/24\n\n[Route]\nGateway=192.168.3.1\n
Use Name, DNS, Address, and Gateway values specific to your network. The same static configuration for a single address can also be specified in a slightly more condensed form, moving everything into the [Network]
section:
[Match]\nName=eth0\n\n[Network]\nAddress=192.168.3.10/24\nGateway=192.168.3.1\nDNS=1.1.1.1\n
"},{"location":"users/sysadmin/networking/#wifi-network-configuration","title":"WiFi network configuration","text":"The default 20-wlan.network
file configures the default WiFi network wlan0
to use DHCP to automatically obtain a network address, routing information, and DNS servers to use. To configure the WiFi network SolarNode should connect to, run this command:
sudo dpkg-reconfigure sn-wifi\n
You will then be prompted to supply the following WiFi settings:
NZ
Note about WiFi support
WiFi support is provided by the sn-wifi
package, which may not be installed. See the Package Maintenance section for information about installing packages.
For the initial setup of the WiFi settings on a SolarNode, it can be helpful for SolarNode to create its own WiFi network, as an access point. The sn-wifi-autoap@wlan0
service can be used for this. When enabled, it will monitor the WiFi network status, and when the WiFi connection fails for any reason it will enable a SolarNode
WiFi network using a gateway IP address of 192.168.16.1
. Thus when the SolarNode access point is enabled, you can connect to that network from your own device and reach the Setup App at http://192.168.16.1/
or the command line via ssh solar@192.168.16.1
.
The default 21-wlan-ap.network
file configures the default WiFi network wlan0
to act as an Access Point.
This service is not enabled by default. To enable it, run the following:
sudo systemctl enable --now sn-wifi-autoap@wlan0\n
Once enabled, if SolarNode cannot connect to the configured WiFi network, it will create its own SolarNode
network. By default the password for this network is solarnode
. The Access Point network configuration is defined in the /etc/network/wpa_supplicant-wlan0.conf
file, in a section like this:
### access-point mode\nnetwork={\n ssid=\"SolarNode\"\n mode=2\n key_mgmt=WPA-PSK\n psk=\"solarnode\"\n frequency=2462\n}\n
"},{"location":"users/sysadmin/networking/#firewall","title":"Firewall","text":"SolarNodeOS uses the nftables system to provide an IP firewall to SolarNode. By default only the following incoming TCP ports are allowed:
Port Description 22 SSH access 80 HTTP SolarNode UI 8080 HTTP SolarNode UI alternate port"},{"location":"users/sysadmin/networking/#open-additional-ip-ports","title":"Open additional IP ports","text":"You can edit the /etc/nftables.conf
file to add additional open IP ports as needed. A good place to insert new rules is after the lines that open ports 80 and 8080:
# Allows HTTP\nadd rule ip filter INPUT tcp dport 80 counter accept\nadd rule ip filter INPUT tcp dport 8080 counter accept\n
For example, if you would like to open UDP port 50222 to support the Weatherflow Tempest weather station, add the following after the above lines:
# Allow WeatherFlow Tempest messages\nadd rule ip filter INPUT udp dport 50222 counter accept\n
"},{"location":"users/sysadmin/networking/#reload-firewall-rules","title":"Reload firewall rules","text":"If you make changes to the firewall rules in /etc/nftables.conf
, run the following command to reload the firewall configuration:
sudo systemctl reload nftables\n
"},{"location":"users/sysadmin/packages/","title":"Package Maintenance","text":"SolarNodeOS supports a wide variety of software packages. You can install new packages as well as apply package updates as they become available. The apt command performs these tasks.
"},{"location":"users/sysadmin/packages/#update-package-metadata","title":"Update package metadata","text":"For SolarNodeOS to know what packages, or package updates, are available, you need to periodically update the available package information. This is done with the apt update
command:
sudo apt update # (1)!\n
sudo
command runs other commands with administrative privledges. It will prompt you for your user account password (typically the solar
user).Use the apt list
command to list the installed packages:
apt list --installed\n
"},{"location":"users/sysadmin/packages/#update-packages","title":"Update packages","text":"To see if there are any package updates available, run apt list
like this:
apt list --upgradable\n
If there are updates available, that will show them. You can apply all package updates with the apt upgrade
command, like this:
sudo apt upgrade\n
If you want to install an update for a specific package, use the apt install
command instead.
Tip
The apt upgrade
command will update existing packages and install packages that are required by those packages, but it will never remove an existing package. Sometimes you will want to allow packages to be removed during the upgrade process; to do that use the apt full-upgrade
command.
Use the apt search
command to search for packages. By default this will match package names and their descriptions. You can search just for package names by including a --names-only
argument.
# search for \"name\" across package names and descriptions\napt search name\n\n# search for \"name\" across package names only\napt search --names-only name\n\n# multiple search terms are logically \"and\"-ed together\napt search name1 name2\n
"},{"location":"users/sysadmin/packages/#install-packages","title":"Install packages","text":"The apt install
command will install an available package, or an individual package update.
sudo apt install mypackage\n
"},{"location":"users/sysadmin/packages/#remove-packages","title":"Remove packages","text":"You can remove packages with the apt remove
command. That command will preserve any system configuration associated with the package(s); if you would like to also remove that you can use the apt purge
command.
sudo apt remove mypackage\n\n# use `purge` to also remove configuration\nsudo apt purge mypackage\n
"},{"location":"users/sysadmin/solarnode-service/","title":"SolarNode Service","text":"SolarNode is managed as a systemd service. There are some shortcut commands to more easily manage the service.
Command Descriptionsn-start
Start the SolarNode service. sn-restart
Restart the SolarNode service. sn-status
View status information about the SolarNode service (see if it is running or not). sn-stop
Stop the SolarNode service. The sn-stop
command requires administrative permissions, so you may be prompted for your system account password (usually the solar
user's password).
You can modify the environment variables passed to the SolarNode service, as well as modify the Java runtime options used. You may want to do this, for example, to turn on Java remote debugging support or to give the SolarNode process more memory.
The systemd solarnode.service
unit will load the /etc/solarnode/env.conf
environment configuration file if it is present. You can define arbitrary environment variables using a simple key=value
syntax.
SolarNodeOS ships with a /etc/solarnode/env.conf.example
file you can use for reference.
The sn-solarssh
package in SolarNodeOS provides a solarssh
command-line tool for managing SolarSSH connections.
To view the node's public SSH key, you can execute solarssh showkey
.
$ solarssh showkey\nssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG7DWIuC2MVHy/gfD32sCayoVFpGVbZ8VXuQubmKjwyx SolarNode\n
"},{"location":"users/sysadmin/solarssh/#list-solarssh-sessions","title":"List SolarSSH sessions","text":"Run solarssh list
to view all available SolarSSH sessions.
$ solarssh list\nb0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340\n
"},{"location":"users/sysadmin/solarssh/#view-solarssh-session-status","title":"View SolarSSH session status","text":"Using the output of solarssh list
you can view the SSH connection status of a specific SSH session with solarssh status
, like this:
$ solarssh -c b0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340 status\nactive\n
"},{"location":"users/sysadmin/solarssh/#stop-solarssh-session","title":"Stop SolarSSH session","text":"You can force a SolarSSH session to end using solarssh stop
, like this:
$ solarssh -c b0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340 stop\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"SolarNode Handbook","text":"This handbook provides guides and reference documentation about SolarNode, the distributed computing part of SolarNetwork.
SolarNode is the swiss army knife for IoT monitoring and control. It is deployed on inexpensive computers in homes, buildings, vehicles, and even EV chargers, connected to any number of sensors, meters, building automation systems, and more. There are several SolarNode icons in the image below. Can you spot them all?
"},{"location":"#user-guide","title":"User Guide","text":"For information on getting and using SolarNode, see the User Guide.
"},{"location":"#developer-guide","title":"Developer Guide","text":"For information on working on the SolarNode codebase, such as writing a plugin, see the Developer Guide.
"},{"location":"developers/","title":"Developer Guide","text":"This section of the handbook is geared towards developers working with the SolarNode codebase to develop a plugin.
"},{"location":"developers/#solarnode-source","title":"SolarNode source","text":"The core SolarNode platform code is available on GitHub.
"},{"location":"developers/#getting-started","title":"Getting started","text":"See the SolarNode Development Guide to set up your own development environment for writing SolarNode plugins.
"},{"location":"developers/#solarnode-debugging","title":"SolarNode debugging","text":"You can enable Java remote debugging for SolarNode on a node device for SolarNode plugin development or troubleshooting by modifying the SolarNode service environment. Once enabled, you can use SSH port forwarding to enable Java remote debugging in your Java IDE of choice.
To enable Java remote debugging, copy the /etc/solarnode/env.conf.example
file to /etc/solarnode/env.conf
. The example already includes this support, using port 9142
for the debugging port. Then restart the solarnode
service:
$ cp /etc/solarnode/env.conf.example /etc/solarnode/env.conf\n$ sn-restart\n
Then you can use ssh
from your development machine to forward a local port to the node's 9142
port, and then have your favorite IDE establish a remote debugging connection on your local port.
For example, on a Linux or macOS machine you could forward port 8000
to a node's port 9142
like this:
$ ssh -L8000:localhost:9142 solar@solarnode\n
Once that ssh
connection is established, your IDE can be used to connect to localhost:8000
for a remote Java debugging session.
The SolarNode platform has been designed to be highly modular and dynamic, by using a plugin-based architecture. The plugin system SolarNode uses is based on the OSGi specification, where plugins are implemented as OSGi bundles. SolarNode can be thought of as a collection of OSGi bundles that, when combined and deployed together in an OSGi framework like Eclipse Equinox, form the complete SolarNode platform.
To summarize: everything in SolarNode is a plugin!
OSGi bundles and Eclipse plug-ins
Each OSGi bundle in SolarNode comes configured as an Eclipse IDE (or simply Eclipse) plug-in project. Eclipse refers to OSGi bundles as \"plug-ins\" and its OSGi development tools are collectively known as the Plug-in Development Environment, or PDE for short. We use the terms bundle, plug-in, and plugin somewhat interchangeably in the SolarNode project. Although Eclipse is not actually required for SolarNode development, it is very convenient.
Practically speaking a plugin, which is an OSGi bundle, is simply a Java JAR file that includes the Java code implementing your plugin and some OSGi metadata in its Manifest. For example, here is the contents of the net.solarnetwork.common.jdt
plugin JAR:
META-INF/MANIFEST.MF\nnet/solarnetwork/common/jdt/Activator.class\nnet/solarnetwork/common/jdt/ClassLoaderNameEnvironment.class\nnet/solarnetwork/common/jdt/CollectingCompilerRequestor.class\nnet/solarnetwork/common/jdt/CompilerUtils.class\nnet/solarnetwork/common/jdt/JdtJavaCompiler.class\nnet/solarnetwork/common/jdt/MapClassLoader.class\nnet/solarnetwork/common/jdt/ResourceCompilationUnit.class\n
"},{"location":"developers/osgi/#services","title":"Services","text":"Central to the plugin architecture SolarNode uses is the concept of a service. In SolarNode a service is defined by a Java interface. A plugin can advertise a service to the SolarNode runtime. Plugins can lookup a service in the SolarNode runtime and then invoke the methods defined on it.
The advertising/lookup framework SolarNode uses is provided by OSGi. OSGi provides several ways to manage services. In SolarNode the most common is to use Blueprint XML documents to both publish services (advertise) and acquire references to services (lookup).
"},{"location":"developers/osgi/blueprint-compendium/","title":"Gemini Blueprint Compendium","text":"The Gemini Blueprint implementation provides some useful extensions that SolarNode makes frequent use of. To use the extensions you need to declare the Gemini Blueprint Compendium namespace in your Blueprint XML file, like this:
Gemini Blueprint Compendium XML declaration<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxmlns:osgix=\"http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\"\nxmlns:beans=\"http://www.springframework.org/schema/beans\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd\n http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans.xsd\">\n
This example declares the Gemini Blueprint Compendium XML namespace prefix osgix
and a related Spring Beans namespace prefix beans
. You will see those used throughout SolarNode.
Managed Properties provide a way to use the Configuration Admin service to manage user-configurable service properties. Conceptually it is like linking a class to a set of dynamic runtime Settings: Configuration Admin provides change event and persistence APIs for the settings, and the Managed Properties applies those settings to the linked service.
Imagine you have a service class MyService
with a configurable property level
. We can make that property a managed, persistable setting by adding a <osgix:managed-properties>
element to our Blueprint XML, like this:
package com.example;\n\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\nimport net.solarnetwork.node.service.support.BaseIdentifiable;\nimport net.solarnetwork.settings.SettingSpecifier;\nimport net.solarnetwork.settings.SettingSpecifierProvider;\nimport net.solarnetwork.settings.SettingsChangeObserver;\nimport net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;\n\n/**\n * My super-duper service.\n *\n * @author matt\n * @version 1.0\n */\npublic class MyService extends BaseIdentifiable\nimplements SettingsChangeObserver, SettingSpecifierProvider {\n\nprivate int level;\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.MyService\"; // (1)!\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Collections.singletonList(\nnew BasicTextFieldSettingSpecifier(\"level\", String.valueOf(0)));\n}\n\n@Override\npublic void configurationChanged(Map<String, Object> properties) {\n// the settings have changed; do something\n}\n\npublic int getLevel() {\nreturn level;\n}\n\npublic void setLevel(int level) {\nthis.level = level;\n}\n\n}\n
title = Super-duper Service\ndesc = This service does it all.\n\nlevel.key = Level\nlevel.desc = This one goes to 11.\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxmlns:osgix=\"http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\"\nxmlns:beans=\"http://www.springframework.org/schema/beans\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd\n http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans.xsd\">\n\n<service>\n<interfaces>\n<value>net.solarnetwork.settings.SettingSpecifierProvider</value>\n</interfaces>\n<bean class=\"com.example.MyService\"><!-- (1)! -->\n<osgix:managed-properties\npersistent-id=\"com.example.MyService\"\nautowire-on-update=\"true\"\nupdate-method=\"configurationChanged\"/>\n<property name=\"messageSource\">\n<bean class=\"org.springframework.context.support.ResourceBundleMessageSource\">\n<property name=\"basenames\" value=\"com.example.MyService\"/>\n</bean>\n</property>\n</bean>\n</service>\n\n</blueprint>\n
<osgix:managed-properties>
element within the actual service <bean>
element you want to apply the managed settings on.persistent-id
attribute value matches the getSettingUid()
value in MyService.java
autowire-on-update
method toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set to false
and provide an update-method
if you want to handle changes yourselfupdate-method
attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.When this plugin is deployed in SolarNode, the component will appear on the main Settings page and offer a configurable Level setting, like this:
"},{"location":"developers/osgi/blueprint-compendium/#managed-service-factory","title":"Managed Service Factory","text":"The Managed Service Factory service provide a way to use the Configuration Admin service to manage multiple copies of a user-configurable service's properties. Conceptually it is like linking a class to a set of dynamic runtime Settings, but you can create as many independent copies as you like. Configuration Admin provides change event and persistence APIs for the settings, and the Managed Service Factory applies those settings to each linked service instance.
Imagine you have a service class ManagedService
with a configurable property level
. We can make that property a factory of managed, persistable settings by adding a <osgix:managed-service-factory>
element to our Blueprint XML, like this:
package com.example;\n\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\nimport net.solarnetwork.node.service.support.BaseIdentifiable;\nimport net.solarnetwork.settings.SettingSpecifier;\nimport net.solarnetwork.settings.SettingSpecifierProvider;\nimport net.solarnetwork.settings.SettingsChangeObserver;\nimport net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;\n\n/**\n * My super-duper managed service.\n *\n * @author matt\n * @version 1.0\n */\npublic class ManagedService extends BaseIdentifiable\nimplements SettingsChangeObserver, SettingSpecifierProvider {\n\nprivate int level;\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.ManagedService\"; // (1)!\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Collections.singletonList(\nnew BasicTextFieldSettingSpecifier(\"level\", String.valueOf(0)));\n}\n\n@Override\npublic void configurationChanged(Map<String, Object> properties) {\n// the settings have changed; do something\n}\n\npublic int getLevel() {\nreturn level;\n}\n\npublic void setLevel(int level) {\nthis.level = level;\n}\n\n}\n
title = Super-duper Managed Service\ndesc = This managed service does it all.\n\nlevel.key = Level\nlevel.desc = This one goes to 11.\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxmlns:osgix=\"http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\"\nxmlns:beans=\"http://www.springframework.org/schema/beans\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd\n http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans.xsd\">\n\n<bean id=\"messageSource\" class=\"org.springframework.context.support.ResourceBundleMessageSource\">\n<property name=\"basenames\" value=\"com.example.ManagedService\"/>\n</bean>\n\n<service interface=\"net.solarnetwork.settings.SettingSpecifierProviderFactory\"><!-- (1)! -->\n<bean class=\"net.solarnetwork.settings.support.BasicSettingSpecifierProviderFactory\">\n<property name=\"displayName\" value=\"Super-duper Managed Service\"/>\n<property name=\"factoryUid\" value=\"com.example.ManagedService\"/><!-- (2)! -->\n<property name=\"messageSource\" ref=\"messageSource\"/>\n</bean>\n</service>\n\n<osgix:managed-service-factory\nfactory-pid=\"com.example.ManagedService\"\nautowire-on-update=\"true\"\nupdate-method=\"configurationChanged\"><!-- (3)! -->\n<osgix:interfaces>\n<beans:value>net.solarnetwork.settings.SettingSpecifierProvider</beans:value>\n</osgix:interfaces>\n<osgix:service-properties>\n<beans:entry key=\"settingPid\" value=\"com.example.ManagedService\"/>\n</osgix:service-properties>\n<bean class=\"com.example.ManagedService\">\n<property name=\"messageSource\" ref=\"messageSource\"/>\n</bean>\n</osgix:managed-service-factory>\n\n</blueprint>\n
SettingSpecifierProviderFactory
service is what makes the managed service factory appear as a component in the SolarNode Settings UI.factoryUid
defines the Configuration Admin factory PID and the Settings UID.<osgix:managed-service-factory>
element in your Blueprint XML, with a nested <bean>
\"template\" within it. The template bean will be instantiated for each service instance instantiated by the Managed Service Factory.factory-pid
attribute value matches the getSettingUid()
value in ManagedService.java
and the factoryUid
declared in #2.autowire-on-update
method toggles having the Managed Properties automatically applied by Gemini Blueprint; you can set to false
and provide an update-method
if you want to handle changes yourselfupdate-method
attribute is optional; it provides a way for the service to be notified after the Configuration Admin settings have been applied.When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page like this:
After clicking on the Manage button next to this component, the Settings UI allows you to create any number of instances of the component, each with their own setting values. Here is a screen shot showing two instances having been created:
"},{"location":"developers/osgi/blueprint/","title":"Blueprint","text":"SolarNode supports the OSGi Blueprint Container Specification so plugins can declare their service dependencies and register their services by way of an XML file deployed with the plugin. If you are familiar with the Spring Framework's XML configuration, you will find Blueprint very similar. SolarNode uses the Eclipse Gemini implementation of the Blueprint specification, which is directly derived from Spring Framework.
Note
This guide will not document the full Blueprint XML syntax. Rather, it will attempt to showcase the most common parts used in SolarNode. Refer to the Blueprint Container Specification for full details of the specification.
"},{"location":"developers/osgi/blueprint/#example","title":"Example","text":"Imagine you are working on a plugin and have a com.example.Greeter
interface you would like to register as a service for other plugins to use, and an implementation of that service in com.example.HelloGreeter
that relies on the Placeholder Service provided by SolarNode:
package com.example;\npublic interface Greeter {\n\n/**\n * Greet something with a given name.\n * @param name the name to greet\n * @return the greeting\n */\nString greet(String name);\n\n}\n
package com.example;\nimport net.solarnetwork.node.service.PlaceholderService;\npublic class HelloGreeter implements Greeter {\n\nprivate final PlaceholderService placeholderService;\n\npublic HelloGreeter(PlaceholderService placeholderService) {\nsuper();\nthis.placeholderService = placeholderService;\n}\n\n@Override\npublic String greet(String name) {\nreturn placeholderService.resolvePlaceholders(\nString.format(\"Hello %s, from {myName}.\", name),\nnull);\n}\n}\n
Assuming the PlaceholderService
will resolve {myName}
to Office Node
, we would expect the greet()
method to run like this:
Greeter greeter = resolveGreeterService();\nString result = greeter.greet(\"Joe\");\n// result is \"Hello Joe, from Office Node.\"\n
In the plugin we then need to:
net.solarnetwork.node.service.PlaceholderService
to pass to the HelloGreeter(PlaceholderService)
constructorHelloGreeter
component as a com.example.Greeter
service in the SolarNode platformHere is an example Blueprint XML document that does both:
Blueprint XML example<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n\n<!-- Declare a reference (lookup) to the PlaceholderService -->\n<reference id=\"placeholderService\"\ninterface=\"net.solarnetwork.node.service.PlaceholderService\"/>\n\n<service interface=\"com.example.Greeter\">\n<bean class=\"com.example.HelloGreeter\">\n<argument ref=\"placeholderService\"/>\n</bean>\n</service>\n\n</blueprint>\n
"},{"location":"developers/osgi/blueprint/#blueprint-xml-resources","title":"Blueprint XML Resources","text":"Blueprint XML documents are added to a plugin's OSGI-INF/blueprint
classpath location. A plugin can provide any number of Blueprint XML documents there, but often a single file is sufficient and a common convention in SolarNode is to name it module.xml
.
A minimal Blueprint XML file is structured like this:
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n\n<!-- Plugin components configured here -->\n\n</blueprint>\n
"},{"location":"developers/osgi/blueprint/#service-references","title":"Service References","text":"To make use of services registered by SolarNode plugins, you declare a reference to that service so you may refer to it elsewhere within the Blueprint XML. For example, imagine you wanted to use the Placeholder Service in your component. You would obtain a reference to that like this:
<reference id=\"placeholderService\"\ninterface=\"net.solarnetwork.node.service.PlaceholderService\"/>\n
The id
attribute allows you to refer to this service elsewhere in your Blueprint XML, while interface
declares the fully-qualified Java interface of the service you want to use.
Components in Blueprint are Java classes you would like instantiated when your plugin starts. They are declared using a <bean>
element in Blueprint XML. You can assign each component a unique identifier using an id
attribute, and then you can refer to that component in other components.
Imagine an example component class com.example.MyComponent
:
package com.example;\n\nimport net.solarnetwork.node.service.PlaceholderService;\n\npublic class MyComponent {\n\nprivate final PlaceholderService placeholderService;\nprivate int minimum;\n\npublic MyComponent(PlaceholderService placeholderService) {\nsuper();\nthis.placeholderService = placeholderService;\n}\n\npublic String go() {\nreturn PlaceholderService.resolvePlaceholders(placeholderService,\n\"{building}/temp\", null);\n}\n\npublic int getMinimum() {\nreturn minimum;\n}\n\npublic void setMinimum(int minimum) {\nthis.minimum = minimum;\n}\n}\n
Here is how that component could be declared in Blueprint:
<reference id=\"placeholderService\"\ninterface=\"net.solarnetwork.node.service.PlaceholderService\"/>\n\n<bean id=\"myComponent\" class=\"com.example.MyComponent\">\n<argument ref=\"placeholderService\">\n<property name=\"minimum\" value=\"10\"/>\n</bean>\n
"},{"location":"developers/osgi/blueprint/#constructor-arguments","title":"Constructor Arguments","text":"If your component requires any constructor arguments, they can be specified with nested <argument>
elements in Blueprint. The <argument>
value can be specified as a reference to another component using a ref
attribute whose value is the id
of that component, or as a literal value using a value
attribute.
For example:
<bean id=\"myComponent\" class=\"com.example.MyComponent\">\n<argument ref=\"placeholderService\">\n<argument value=\"10\">\n
"},{"location":"developers/osgi/blueprint/#property-accessors","title":"Property Accessors","text":"You can configure mutable class properties on a component with nested <property name=\"\">
elements in Blueprint. A mutable property is a Java setter method. For example an int
property minimum
would be associated with a Java setter method public void setMinimum(int value)
.
The <property>
value can be specified as a reference to another component using a ref
attribute whose value is the id
of that component, or as a literal value using a value
attribute.
For example:
<bean id=\"myComponent\" class=\"com.example.MyComponent\">\n<property name=\"placeholderService\" ref=\"placeholderService\">\n<argument name=\"minimum\" value=\"10\">\n
"},{"location":"developers/osgi/blueprint/#startstop-hooks","title":"Start/Stop Hooks","text":"Blueprint can invoke a method on your component when it has finished instantiating and configuring the object (when the plugin starts), and another when it destroys the instance (when the plugin is stopped). You simply provide the name of the method you would like Blueprint to call in the init-method
and destroy-method
attributes of the <bean>
element. For example:
<bean id=\"myComponent\" class=\"com.example.MyComponent\"\ninit-method=\"startup\"\ndestroy-method=\"shutdown\">\n
"},{"location":"developers/osgi/blueprint/#service-registration","title":"Service Registration","text":"You can make any component available to other plugins by registering the component with a <service>
element that declares what interface(s) your component provides. Once registered, other plugins can make use of your component, for example by declaring a <reference>
to your component's service interface in their Blueprint XML.
Note
You can only register Java interfaces as services, not classes.
For example, imagine a com.example.Startable
interface like this:
package com.example;\npublic interface Startable {\n/**\n * Start!\n * @return the result\n */\nString go();\n}\n
We could implement that interface in the MyComponent
class, like this:
package com.example;\n\npublic class MyComponent implements Startable {\n\n@Override\npublic String go() {\nreturn \"Gone!\";\n}\n}\n
We can register MyComponent
as a Startable
service using a <service>
element like this in Blueprint:
<service interface=\"com.example.Startable\">\n<!-- The service implementation is nested directly within -->\n<bean class=\"com.example.MyComponent\"/>\n</service>\n
<!-- The service implementation is referenced indirectly... -->\n<service ref=\"myComponent\" interface=\"com.example.Startable\"/>\n\n<!-- ... to a bean with a matching id attribute -->\n<bean id=\"myComponent\" class=\"com.example.MyComponent\"/>\n
"},{"location":"developers/osgi/blueprint/#multiple-service-interfaces","title":"Multiple Service Interfaces","text":"You can advertise any number of service interfaces that your component supports, by nesting an <interfaces>
element within the <service>
element, in place of the interface
attribute. For example:
<service ref=\"myComponent\">\n<interfaces>\n<value>com.example.Startable</value>\n<value>com.example.Stopable</value>\n</interfaces>\n</service>\n
"},{"location":"developers/osgi/blueprint/#export-service-packages","title":"Export service packages","text":"For a registered service to be of any use to another plugin, the package the service is defined in must be exported by the plugin hosting that package. That is because the plugin wishing to add a reference to the service will need to import the package in order to use it.
For example, the plugin that hosts the com.example.service.MyService
service would need a manifest file that includes an Export-Package
attribute similar to:
Export-Package: com.example.service;version=\"1.0.0\"\n
"},{"location":"developers/osgi/configuration-admin/","title":"Configuration Admin","text":"TODO
"},{"location":"developers/osgi/life-cycle/","title":"Life cycle","text":"Plugins in SolarNode can be added to and removed from the platform at any time without restarting the SolarNode process, because of the Life Cycle process OSGi manages. The life cycle of a plugin consists of a set of states and OSGi will transition a plugin's state over the course of the plugin's life.
The available plugin states are:
State DescriptionINSTALLED
The plugin has been successfully added to the OSGi framework. RESOLVED
All package dependencies that the bundle needs are available. This state indicates that the plugin is either ready to be started or has stopped. STARTING
The plugin is being started by the OSGi framework, but it has not finished starting yet. ACTIVE
The plugin has been successfully started and is running. STOPPING
The plugin is being stopped by the OSGi framework, but it has not finished stopping yet. UNINSTALLED
The plugin has been removed by the OSGi framework. It cannot change to another state. The possible changes in state can be visualized in the following state-change diagram:
Faisal.akeel, Public domain, via Wikimedia Commons
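The transitions in the state-change diagram can also be expressed as a small table of allowed moves. The following is an illustrative sketch only, not part of the SolarNode or OSGi API; it models the commonly documented OSGi transitions (install → resolve → start → stop, with uninstall possible from the INSTALLED or RESOLVED states):

```java
import java.util.EnumSet;
import java.util.Map;

/** Illustrative model of the OSGi bundle life-cycle transitions (not a framework API). */
public class BundleLifeCycle {

    public enum State { INSTALLED, RESOLVED, STARTING, ACTIVE, STOPPING, UNINSTALLED }

    // each state mapped to the states it may transition to
    private static final Map<State, EnumSet<State>> ALLOWED = Map.of(
            State.INSTALLED, EnumSet.of(State.RESOLVED, State.UNINSTALLED),
            State.RESOLVED, EnumSet.of(State.STARTING, State.INSTALLED, State.UNINSTALLED),
            State.STARTING, EnumSet.of(State.ACTIVE),
            State.ACTIVE, EnumSet.of(State.STOPPING),
            State.STOPPING, EnumSet.of(State.RESOLVED),
            State.UNINSTALLED, EnumSet.noneOf(State.class)); // terminal state

    /** Return true if a bundle may move directly from one state to another. */
    public static boolean canTransition(State from, State to) {
        return ALLOWED.get(from).contains(to);
    }
}
```

Note how UNINSTALLED maps to an empty set, matching the description above that an uninstalled plugin cannot change to another state.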
"},{"location":"developers/osgi/life-cycle/#activator","title":"Activator","text":"A plugin can opt in to receiving callbacks for the start/stop state transitions by providing an org.osgi.framework.BundleActivator
implementation and declaring that class in the Bundle-Activator
manifest attribute. This can be useful when a plugin needs to initialize some resources when the plugin is started, and then release those resources when the plugin is stopped.
public interface BundleActivator {\n/**\n * Called when this bundle is started so the Framework can perform the\n * bundle-specific activities necessary to start this bundle.\n *\n * @param context The execution context of the bundle being started.\n */\npublic void start(BundleContext context) throws Exception;\n\n/**\n * Called when this bundle is stopped so the Framework can perform the\n * bundle-specific activities necessary to stop the bundle.\n *\n * @param context The execution context of the bundle being stopped.\n */\npublic void stop(BundleContext context) throws Exception;\n}\n
package com.example.activator;\nimport org.osgi.framework.BundleActivator;\nimport org.osgi.framework.BundleContext;\npublic class Activator implements BundleActivator {\n\n@Override\npublic void start(BundleContext bundleContext) throws Exception {\n// initialize resources here\n}\n\n@Override\npublic void stop(BundleContext bundleContext) throws Exception {\n// clean up resources here\n}\n}\n
Manifest-Version: 1.0\nBundle-ManifestVersion: 2\nBundle-Name: Example Activator\nBundle-SymbolicName: com.example.activator\nBundle-Version: 1.0.0\nBundle-Activator: com.example.activator.Activator\nImport-Package: org.osgi.framework;version=\"[1.3,2.0)\"\n
Tip
Often making use of the component life cycle hooks available in Blueprint is sufficient and no BundleActivator
is necessary.
As SolarNode plugins are OSGi bundles, which are Java JAR files, every plugin automatically includes a META-INF/MANIFEST.MF
file as defined in the Java JAR File Specification. The MANIFEST.MF
file is where OSGi metadata is included, turning the JAR into an OSGi bundle (plugin).
Here is an example snippet from the SolarNode net.solarnetwork.common.jdt plugin:
Example plugin MANIFEST.MFManifest-Version: 1.0\nBundle-ManifestVersion: 2\nBundle-Name: Java Compiler Service (JDT)\nBundle-SymbolicName: net.solarnetwork.common.jdt\nBundle-Description: Java compiler using Eclipse JDT.\nBundle-Version: 3.0.0\nBundle-Vendor: SolarNetwork\nBundle-RequiredExecutionEnvironment: JavaSE-1.8\nBundle-Activator: net.solarnetwork.common.jdt.Activator\nExport-Package:\nnet.solarnetwork.common.jdt;version=\"2.0.0\"\nImport-Package:\nnet.solarnetwork.service;version=\"[1.0,2.0)\",\norg.eclipse.jdt.core.compiler,\norg.eclipse.jdt.internal.compiler,\norg.osgi.framework;version=\"[1.5,2.0)\",\norg.slf4j;version=\"[1.7,2.0)\",\norg.springframework.context;version=\"[5.3,6.0)\",\norg.springframework.core.io;version=\"[5.3,6.0)\",\norg.springframework.util;version=\"[5.3,6.0)\"\n
The rest of this document will describe this structure in more detail.
"},{"location":"developers/osgi/manifest/#versioning","title":"Versioning","text":"In OSGi plugins are always versioned and and Java packages may be versioned. Versions follow Semantic Versioning rules, generally using this syntax:
major.minor.patch\n
In the manifest example you can see the plugin version 3.0.0
declared in the Bundle-Version
attribute:
Bundle-Version: 3.0.0\n
The example also declares (exports) a net.solarnetwork.common.jdt
package for other plugins to import (use) as version 2.0.0
, in the Export-Package
attribute:
Export-Package:\nnet.solarnetwork.common.jdt;version=\"2.0.0\"\n
The example also uses (imports) a versioned package net.solarnetwork.service
using a version range greater than or equal to 1.0
and less than 2.0
and an unversioned package org.eclipse.jdt.core.compiler
, in the Import-Package
attribute:
Import-Package:\nnet.solarnetwork.service;version=\"[1.0,2.0)\",\norg.eclipse.jdt.core.compiler,\n
Tip
Some plugins, and core Java system packages, do not declare package versions. You should declare package versions in your own plugins.
"},{"location":"developers/osgi/manifest/#version-ranges","title":"Version ranges","text":"Some OSGi version attributes allow version ranges to be declared, such as the Import-Package
attribute. A version range is a comma-delimited lower,upper
specifier. Square brackets are used to represent inclusive values and round brackets represent exclusive values. A value can be omitted to represent an unbounded value. Here are some examples:
[1.0,2.0)
1.0.0 \u2264 x < 2.0.0 Greater than or equal to 1.0.0
and less than 2.0.0
(1,3)
1.0.0 < x < 3.0.0 Greater than 1.0.0
and less than 3.0.0
[1.3.2,)
1.3.2 \u2264 x Greater than or equal to 1.3.2
1.3.2
1.3.2 \u2264 x Greater than or equal to 1.3.2
(shorthand notation) Implied unbounded range
An inclusive lower, unbounded upper range can be specified using a shorthand notation of just the lower bound, like 1.3.2
.
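The range rules in the table above can be captured in a few lines of code. This is an illustrative sketch, not the OSGi framework's own org.osgi.framework.VersionRange class, that checks whether a major.minor.patch version falls within a range like [1.0,2.0), an unbounded range like [1.3.2,), or the shorthand 1.3.2:

```java
/** Illustrative OSGi-style version range check (a sketch, not the framework API). */
public class VersionRanges {

    /** Parse "1.2.3" into segments; missing minor/patch segments default to 0. */
    static int[] parse(String v) {
        String[] p = v.split("\\.");
        return new int[] {
                Integer.parseInt(p[0]),
                p.length > 1 ? Integer.parseInt(p[1]) : 0,
                p.length > 2 ? Integer.parseInt(p[2]) : 0 };
    }

    /** Compare two versions segment by segment. */
    static int compare(String a, String b) {
        int[] x = parse(a), y = parse(b);
        for ( int i = 0; i < 3; i++ ) {
            if ( x[i] != y[i] ) {
                return Integer.compare(x[i], y[i]);
            }
        }
        return 0;
    }

    /** Test a version against a range like "[1.0,2.0)" or the shorthand "1.3.2". */
    public static boolean inRange(String version, String range) {
        if ( !range.startsWith("[") && !range.startsWith("(") ) {
            // shorthand notation: inclusive lower bound, unbounded upper
            return compare(version, range) >= 0;
        }
        boolean lowerInclusive = range.startsWith("[");
        boolean upperInclusive = range.endsWith("]");
        // keep a trailing empty bound, e.g. "[1.3.2,)" -> ["1.3.2", ""]
        String[] bounds = range.substring(1, range.length() - 1).split(",", -1);
        int lo = compare(version, bounds[0]);
        if ( lowerInclusive ? lo < 0 : lo <= 0 ) {
            return false;
        }
        if ( bounds[1].isEmpty() ) {
            return true; // unbounded upper
        }
        int hi = compare(version, bounds[1]);
        return upperInclusive ? hi <= 0 : hi < 0;
    }
}
```

For example, `inRange("1.5.0", "[1.0,2.0)")` is true while `inRange("2.0.0", "[1.0,2.0)")` is false, because the round bracket excludes the upper bound.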
Each plugin must provide the following attributes:
Attribute Example DescriptionBundle-ManifestVersion
2 declares the OSGi bundle manifest version; always 2
Bundle-Name
Awesome Data Source a concise human-readable name for the plugin Bundle-SymbolicName
com.example.awesome a machine-readable, universally unique identifier for the plugin Bundle-Version
1.0.0 the plugin version Bundle-RequiredExecutionEnvironment
JavaSE-1.8 a required OSGi execution environment"},{"location":"developers/osgi/manifest/#recommended-attributes","title":"Recommended attributes","text":"Each plugin is recommended to provide the following attributes:
Attribute Example DescriptionBundle-Description
An awesome data source that collects awesome data. a longer human-readable description of the plugin Bundle-Vendor
ACME Corp the name of the entity or organisation that authored the plugin"},{"location":"developers/osgi/manifest/#common-attributes","title":"Common attributes","text":"Other common manifest attributes are:
Attribute Example DescriptionBundle-Activator
com.example.awesome.Activator a fully-qualified Java class name that implements the org.osgi.framework.BundleActivator
interface, to handle plugin lifecycle events; see Activator for more information Export-Package
net.solarnetwork.common.jdt;version=\"2.0.0\" a package export list Import-Package
net.solarnetwork.service;version=\"[1.0,2.0)\" a package dependency list"},{"location":"developers/osgi/manifest/#package-dependencies","title":"Package dependencies","text":"A plugin must declare the Java packages it directly uses in a Import-Package
attribute. This attribute accepts a comma-delimited list of package specifications that take the basic form of:
PACKAGE;version=\"VERSION\"\n
For example here is how the net.solarnetwork.service
package, versioned between 1.0
and 2.0
, would be declared:
Import-Package: net.solarnetwork.service;version=\"[1.0,2.0)\"\n
Direct package use means your plugin has code that imports a class from a given package. Classes in an imported package may import other packages indirectly; you do not need to import those packages as well. For example if you have code like this:
import net.solarnetwork.service.OptionalService;\n
Then you will need to import the net.solarnetwork.service
package.
Note
The SolarNode platform automatically imports core Java packages like java.*
so you do not need to declare those.
Also note that in some scenarios a package used by a class in an imported package becomes a direct dependency. For example when you extend a class from an imported package and that class imports other packages. Those other packages may become direct dependencies that you also need to import.
"},{"location":"developers/osgi/manifest/#child-package-dependencies","title":"Child package dependencies","text":"If you import a package in your plugin, any child packages that may exist are not imported as well. You must import every individual package you need to use in your plugin.
For example to use both net.solarnetwork.service
and net.solarnetwork.service.support
you would have an Import-Package
attribute like this:
Import-Package:\nnet.solarnetwork.service;version=\"[1.0,2.0)\",\nnet.solarnetwork.service.support;version=\"[1.1,2.0)\"\n
"},{"location":"developers/osgi/manifest/#package-exports","title":"Package exports","text":"A plugin can export any package it provides, making the resources within that package available to other plugins to import and use. Declare exoprted packages with a Export-Package
attribute. This attribute takes a comma-delimited list of versioned package specifications. Note that version ranges are not supported: you must declare the exact version of the package you are exporting. For example:
Export-Package: com.example.service;version=\"1.0.0\"\n
Note
Exported packages should not be confused with services. Exported packages give other plugins access to the classes and any other resources within those packages, but do not provide services to the platform. You can use Blueprint to register services. Keep in mind that any service a plugin registers must exist within an exported package to be of any use.
"},{"location":"developers/services/backup-manager/","title":"Backup Manager","text":"The net.solarnetwork.node.backup.BackupManager
API provides SolarNode with a modular backup system composed of Backup Services that provide storage for backup data and Backup Resource Providers that contribute data to be backed up and support restoring backed up data.
The Backup Manager coordinates the creation and restoration of backups, delegating most of its functionality to the active Backup Service. The active Backup Service can be controlled through configuration.
The Backup Manager also supports exporting and importing Backup Archives, which are just .zip
archives using a defined folder structure to preserve all backup resources within a single backup.
This design of the SolarNode backup system makes it easy for SolarNode plugins to contribute resources to backups, without needing to know where or how the backup data is ultimately stored.
What goes in a Backup?
In SolarNode a Backup will contain all the critical settings that are unique to that node, such as:
The Backup Manager can be configured under the net.solarnetwork.node.backup.DefaultBackupManager
configuration namespace:
backupRestoreDelaySeconds
15 A number of seconds to delay the attempt of restoring a backup, when a backup has been previously marked for restoration. This delay gives the platform time to boot up and register the backup resource providers and other services required to perform the restore. preferredBackupServiceKey
net.solarnetwork.node.backup.FileSystemBackupService The key of the preferred (active) Backup Service to use."},{"location":"developers/services/backup-manager/#backup","title":"Backup","text":"The net.solarnetwork.node.backup.Backup
API defines a unique backup, created by a Backup Service. Backups are uniquely identified with a unique key assigned by the Backup Service that creates them.
A Backup
does not itself provide access to any of the resources associated with the backup. Instead, the getBackupResources()
method of BackupService
returns them.
The Backup Manager supports exporting and importing specially formed .zip
archives that contain a complete Backup. These archives are a convenient way to transfer settings from one node to another, and can be used to restore SolarNode on a new device.
The net.solarnetwork.node.backup.BackupResource
API defines a unique item within a Backup. A Backup Resource could be a file, a database table, or anything that can be serialized to a stream of bytes. Backup Resources are both provided by, and restored with, Backup Resource Providers so it is up to the Provider implementation to know how to generate and then restore the Resources it manages.
The net.solarnetwork.node.backup.BackupResourceProvider
API defines a service that can both generate and restore Backup Resources. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
When a Backup is created, all Backup Resource Provider services registered in SolarNode will be asked to contribute their Backup Resources, using the getBackupResources()
method.
When a Backup is restored, Backup Resources will be passed to their associated Provider with the restoreBackupResource(BackupResource)
method.
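To make the provider contract concrete, here is a self-contained sketch of the generate/restore round trip. The `Sketch*` types are stand-ins invented for this example; the real API types are the `net.solarnetwork.node.backup.BackupResource` and `BackupResourceProvider` interfaces described above.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Stand-in for a BackupResource: just a name plus a stream of bytes.
class SketchResource {
	final String name;
	final byte[] data;

	SketchResource(String name, byte[] data) {
		this.name = name;
		this.data = data;
	}
}

// Stand-in for a BackupResourceProvider that manages one settings file.
class SketchSettingsProvider {
	private String settings = "key=value";

	// called when a Backup is created: contribute resources
	public List<SketchResource> getBackupResources() {
		return List.of(new SketchResource("settings.txt",
				settings.getBytes(StandardCharsets.UTF_8)));
	}

	// called when a Backup is restored: consume the resource again
	public boolean restoreBackupResource(SketchResource r) {
		if ( "settings.txt".equals(r.name) ) {
			settings = new String(r.data, StandardCharsets.UTF_8);
			return true;
		}
		return false;
	}

	public String settings() {
		return settings;
	}
}

public class BackupProviderDemo {
	public static void main(String[] args) {
		SketchSettingsProvider provider = new SketchSettingsProvider();
		// backup: the manager collects the provider's resources
		List<SketchResource> backup = provider.getBackupResources();
		// restore: each resource is handed back to its provider
		for ( SketchResource r : backup ) {
			provider.restoreBackupResource(r);
		}
		System.out.println(provider.settings()); // key=value
	}
}
```

The key point is the symmetry: whatever a provider contributes from `getBackupResources()` it must be able to consume again in `restoreBackupResource()`.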
The net.solarnetwork.node.backup.BackupService
API defines the bulk of the SolarNode backup system. Each implementation is identified by a unique key, typically the fully-qualified Java class name of the implementation.
To create a Backup, use the performBackup(Iterable<BackupResource>)
method, passing in the collection of Backup Resources to include.
To list the available Backups, use the getAvailableBackups(Backup)
method.
To view a single Backup, use the backupForKey(String)
method.
To list the resources in a Backup, use the getBackupResources(Backup)
method.
SolarNode provides the net.solarnetwork.node.backup.FileSystemBackupService
default Backup Service implementation that saves Backup Archives to the node's own file system.
The net.solarnetwork.node.backup.s3
plugin provides the net.solarnetwork.node.backup.s3.S3BackupService
Backup Service implementation that saves all Backup data to AWS S3.
A plugin can publish a net.solarnetwork.service.CloseableService
and SolarNode will invoke the closeService()
method on it when that service is destroyed. This can be useful in some situations, to make sure resources are freed when a service is no longer needed.
Blueprint does provide the destroy-method
stop hook that can be used in many situations; however, Blueprint does not allow this in all cases. For example a <bean>
nested within a <service>
element does not allow a destroy-method
:
<service interface=\"com.example.MyService\">\n<!-- destroy-method not allowed here: -->\n<bean class=\"com.example.MyComponent\"/>\n</service>\n
If MyComponent
also implemented CloseableService
then we can achieve the desired stop hook like this:
<service>\n<interfaces>\n<value>com.example.MyService</value>\n<value>net.solarnetwork.service.CloseableService</value>\n</interfaces>\n<bean class=\"com.example.MyComponent\"/>\n</service>\n
Note
Note that the above example CloseableService
is not strictly needed, as the same effect could be achieved by un-nesting the <bean>
from the <service>
element, like this:
<bean id=\"myComponent\" class=\"com.example.MyComponent\" destroy-method=\"close\"/>\n<service ref=\"myComponent\" interface=\"com.example.MyService\"/>\n
There are situations where un-nesting is not possible, which is where CloseableService
can be helpful.
The DatumDataSourcePollManagedJob
class is a Job Service implementation that can be used to let users schedule the generation of datum from a Datum Data Source. Typically this is configured as a Managed Service Factory so users can configure any number of job instances, each with their own settings.
Here is a typical example of a DatumDataSourcePollManagedJob
, in a fictional MyDatumDataSource
:
package com.example;\n\nimport java.time.Instant;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.Map;\nimport net.solarnetwork.domain.datum.DatumSamples;\nimport net.solarnetwork.node.domain.datum.EnergyDatum;\nimport net.solarnetwork.node.domain.datum.NodeDatum;\nimport net.solarnetwork.node.domain.datum.SimpleEnergyDatum;\nimport net.solarnetwork.node.service.DatumDataSource;\nimport net.solarnetwork.node.service.support.DatumDataSourceSupport;\nimport net.solarnetwork.settings.SettingSpecifier;\nimport net.solarnetwork.settings.SettingSpecifierProvider;\nimport net.solarnetwork.settings.SettingsChangeObserver;\nimport net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier;\n\n/**\n * Super-duper datum data source.\n *\n * @author matt\n * @version 1.0\n */\npublic class MyDatumDataSource extends DatumDataSourceSupport\nimplements DatumDataSource, SettingSpecifierProvider, SettingsChangeObserver {\n\nprivate String sourceId;\nprivate int level;\n\n@Override\npublic Class<? 
extends NodeDatum> getDatumType() {\nreturn EnergyDatum.class;\n}\n\n@Override\npublic EnergyDatum readCurrentDatum() {\nfinal String sourceId = resolvePlaceholders(this.sourceId);\nif ( sourceId == null || sourceId.isEmpty() ) {\nreturn null;\n}\nSimpleEnergyDatum d = new SimpleEnergyDatum(sourceId, Instant.now(), new DatumSamples());\nd.setWatts(level);\nreturn d;\n}\n\n@Override\npublic void configurationChanged(Map<String, Object> properties) {\n// the settings have changed; do something\n}\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.MyDatumDataSource\";\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Arrays.asList(new BasicTextFieldSettingSpecifier(\"sourceId\", null),\nnew BasicTextFieldSettingSpecifier(\"level\", String.valueOf(0)));\n}\n\npublic String getSourceId() {\nreturn sourceId;\n}\n\npublic void setSourceId(String sourceId) {\nthis.sourceId = sourceId;\n}\n\npublic int getLevel() {\nreturn level;\n}\n\npublic void setLevel(int level) {\nthis.level = level;\n}\n\n}\n
title = Super-duper Datum Data Source\ndesc = This managed datum data source does it all.\n\nschedule.key = Schedule\nschedule.desc = The schedule to execute the job at. \\\nCan be either a number representing a frequency in <b>milliseconds</b> \\\nor a <a href=\"{0}\">cron expression</a>, for example <code>0 * * * * *</code>.\n\nsourceId.key = Source ID\nsourceId.desc = The source ID to use.\n\nlevel.key = Level\nlevel.desc = This one goes to 11.\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\nxmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\nxmlns:osgix=\"http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\"\nxmlns:beans=\"http://www.springframework.org/schema/beans\"\nxsi:schemaLocation=\"\n http://www.osgi.org/xmlns/blueprint/v1.0.0\n http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium\n http://www.eclipse.org/gemini/blueprint/schema/blueprint-compendium/gemini-blueprint-compendium.xsd\n http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans.xsd\">\n\n<!-- Service References -->\n\n<bean id=\"datumMetadataService\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.DatumMetadataService\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n\n<bean id=\"datumQueue\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.DatumQueue\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n\n<bean id=\"placeholderService\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.PlaceholderService\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n\n<bean id=\"messageSource\" class=\"org.springframework.context.support.ResourceBundleMessageSource\">\n<property name=\"basenames\" value=\"com.example.MyDatumDataSource\"/>\n</bean>\n\n<bean id=\"jobMessageSource\" class=\"net.solarnetwork.support.PrefixedMessageSource\">\n<property name=\"prefix\" value=\"datumDataSource.\"/>\n<property name=\"delegate\" 
ref=\"messageSource\"/>\n</bean>\n\n<!-- Managed Service Factory for Datum Data Source -->\n\n<service interface=\"net.solarnetwork.settings.SettingSpecifierProviderFactory\">\n<bean class=\"net.solarnetwork.settings.support.BasicSettingSpecifierProviderFactory\">\n<property name=\"displayName\" value=\"Super-duper Datum Data Source\"/>\n<property name=\"factoryUid\" value=\"com.example.MyDatumDataSource\"/><!-- (1)! -->\n<property name=\"messageSource\" ref=\"messageSource\"/>\n</bean>\n</service>\n\n<osgix:managed-service-factory factory-pid=\"com.example.MyDatumDataSource\"\nautowire-on-update=\"true\" update-method=\"configurationChanged\">\n<osgix:interfaces>\n<beans:value>net.solarnetwork.node.job.ManagedJob</beans:value>\n</osgix:interfaces>\n<bean class=\"net.solarnetwork.node.job.SimpleManagedJob\"\ninit-method=\"serviceDidStartup\" destroy-method=\"serviceDidShutdown\">\n<argument>\n<bean class=\"net.solarnetwork.node.job.DatumDataSourcePollManagedJob\">\n<property name=\"datumMetadataService\" ref=\"datumMetadataService\"/>\n<property name=\"datumQueue\" ref=\"datumQueue\"/>\n<property name=\"datumDataSource\">\n<bean class=\"com.example.MyDatumDataSource\"><!-- (2)! -->\n<property name=\"datumMetadataService\" ref=\"datumMetadataService\"/>\n<property name=\"messageSource\" ref=\"jobMessageSource\"/>\n<property name=\"placeholderService\" ref=\"placeholderService\"/>\n</bean>\n</property>\n</bean>\n</argument>\n<argument value=\"0 * * * * ?\"/>\n<property name=\"serviceProviderConfigurations\"><!-- (3)! 
-->\n<map>\n<entry key=\"datumDataSource\">\n<bean class=\"net.solarnetwork.node.job.SimpleServiceProviderConfiguration\">\n<property name=\"interfaces\">\n<list>\n<value>net.solarnetwork.node.service.DatumDataSource</value>\n</list>\n</property>\n<property name=\"properties\">\n<map>\n<entry key=\"datumClassName\" value=\"net.solarnetwork.node.domain.datum.EnergyDatum\"/>\n</map>\n</property>\n</bean>\n</entry>\n</map>\n</property>\n</bean>\n</osgix:managed-service-factory>\n\n</blueprint>\n
factoryUid
is the same value as the getSettingUid()
value in MyDatumDataSource.java
ManagedJob
that the Managed Service Factory registers. When this plugin is deployed in SolarNode, the managed component will appear on the main Settings page and the component settings UI will look like this:
"},{"location":"developers/services/datum-data-source/","title":"Datum Data Source","text":"The DatumDataSource
API defines the primary way for plugins to generate datum instances from devices or services integrated with SolarNode, through a request-based API. The MultiDatumDataSource
API is closely related, and allows a plugin to generate multiple datum when requested.
package net.solarnetwork.node.service;\n\nimport net.solarnetwork.node.domain.datum.NodeDatum;\nimport net.solarnetwork.service.Identifiable;\n\n/**\n * API for collecting {@link NodeDatum} objects from some device.\n */\npublic interface DatumDataSource extends Identifiable, DeviceInfoProvider {\n\n/**\n * Get the class supported by this DataSource.\n *\n * @return class\n */\nClass<? extends NodeDatum> getDatumType();\n\n/**\n * Read the current value from the data source, returning as an unpersisted\n * {@link NodeDatum} object.\n *\n * @return Datum\n */\nNodeDatum readCurrentDatum();\n\n}\n
package net.solarnetwork.node.service;\n\nimport java.util.Collection;\nimport net.solarnetwork.node.domain.datum.NodeDatum;\nimport net.solarnetwork.service.Identifiable;\n\n/**\n * API for collecting multiple {@link NodeDatum} objects from some device.\n */\npublic interface MultiDatumDataSource extends Identifiable, DeviceInfoProvider {\n\n/**\n * Get the class supported by this DataSource.\n *\n * @return class\n */\nClass<? extends NodeDatum> getMultiDatumType();\n\n/**\n * Read multiple values from the data source, returning as a collection of\n * unpersisted {@link NodeDatum} objects.\n *\n * @return Datum\n */\nCollection<NodeDatum> readMultipleDatum();\n\n}\n
The Datum Data Source Poll Job provides a way to let users schedule the polling for datum from a data source.
"},{"location":"developers/services/datum-db/","title":"Datum Database","text":"TODO
"},{"location":"developers/services/datum-queue/","title":"Datum Queue","text":"SolarNode has a DatumQueue
service that acts as a central facility for processing all NodeDatum
captured by all data source plugins deployed in the SolarNode runtime. The queue can be configured with various filters that can augment, modify, or discard the datum. The queue buffers the datum for a short amount of time and then processes them sequentially in order of time, oldest to newest.
Datum data sources that use the Datum Data Source Poll Job are polled for datum on a recurring schedule and those datum are then posted to and stored in SolarNetwork. Data sources can also offer datum directly to the DatumQueue
if they emit datum based on external events. When offering datum directly, the datum can be tagged as transient and they will then still be processed by the queue but will not be posted/stored in SolarNetwork.
/**\n * Offer a new datum to the queue, optionally persisting.\n *\n * @param datum\n * the datum to offer\n * @param persist\n * {@literal true} to persist, or {@literal false} to only pass to\n * consumers\n * @return {@literal true} if the datum was accepted\n */\nboolean offer(NodeDatum datum, boolean persist);\n
"},{"location":"developers/services/datum-queue/#queue-observer","title":"Queue observer","text":"Plugins can also register observers on the DatumQueue
that are notified of each datum that gets processed. The addConsumer()
and removeConsumer()
methods allow you to register/deregister observers:
/**\n * Register a consumer to receive processed datum.\n *\n * @param consumer\n * the consumer to register\n */\nvoid addConsumer(Consumer<NodeDatum> consumer);\n\n/**\n * De-register a previously registered consumer.\n *\n * @param consumer\n * the consumer to remove\n */\nvoid removeConsumer(Consumer<NodeDatum> consumer);\n
Each observer will receive all datum, including transient datum. An example plugin that makes use of this feature is the SolarFlux Upload Service, which posts a copy of each datum to a MQTT server.
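The observer pattern can be sketched with a stand-in queue (a plain String stands in for NodeDatum, and the class below is invented for this example; the real service is net.solarnetwork.node.service.DatumQueue):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Stand-in sketch of the DatumQueue consumer registration pattern.
class SketchDatumQueue {
	private final List<Consumer<String>> consumers = new CopyOnWriteArrayList<>();

	public void addConsumer(Consumer<String> consumer) {
		consumers.add(consumer);
	}

	public void removeConsumer(Consumer<String> consumer) {
		consumers.remove(consumer);
	}

	public boolean offer(String datum, boolean persist) {
		// every registered observer sees every datum; the persist flag is
		// ignored in this sketch (the real queue would post/store when true)
		for ( Consumer<String> c : consumers ) {
			c.accept(datum);
		}
		return true;
	}
}

public class QueueObserverDemo {
	public static void main(String[] args) {
		SketchDatumQueue queue = new SketchDatumQueue();
		List<String> seen = new ArrayList<>();
		Consumer<String> observer = seen::add;

		queue.addConsumer(observer);
		queue.offer("meter/1", true);   // persisted datum
		queue.offer("meter/2", false);  // transient datum: still observed
		queue.removeConsumer(observer);
		queue.offer("meter/3", true);   // no longer observed

		System.out.println(seen); // [meter/1, meter/2]
	}
}
```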
Here is a screen shot of the datum queue settings available in the SolarNode UI:
"},{"location":"developers/services/job-scheduler/","title":"Job Scheduler","text":"SolarNode provides a ManagedJobScheduler service that can automatically execute jobs exported by plugins that have user-defined schedules.
The Job Scheduler uses the Task Scheduler
The Job Scheduler service uses the Task Scheduler internally, which means the number of jobs that can execute simultaneously will be limited by its thread pool configuration.
"},{"location":"developers/services/job-scheduler/#managed-jobs","title":"Managed Jobs","text":"Any plugin simply needs to register a ManagedJob service for the Job Scheduler to automatically schedule and execute the job. The schedule is provided by the getSchedule()
method, which can return a cron expression or a plain number representing a millisecond period.
The net.solarnetwork.node.job.SimpleManagedJob
class implements ManagedJob
and can be used in most situations. It delegates the actual work to a net.solarnetwork.node.job.JobService
API, discussed in the next section.
The ManagedJob
API delegates the actual task work to a JobService
API. The executeJobService()
method will be invoked when the job executes.
Let's imagine you have a com.example.Job
class that you would like to allow users to schedule. Your class would implement the JobService
interface, and then you would provide a localized messages properties file and configure the service using OSGi Blueprint.
package com.example;\n\nimport java.util.Collections;\nimport java.util.List;\nimport net.solarnetwork.node.job.JobService;\nimport net.solarnetwork.node.service.support.BaseIdentifiable;\nimport net.solarnetwork.settings.SettingSpecifier;\n\n/**\n * My super-duper job.\n */\npublic class Job extends BaseIdentifiable implements JobService {\n@Override\npublic String getSettingUid() {\nreturn \"com.example.job\"; // (1)!\n}\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nreturn Collections.emptyList(); // (2)!\n}\n\n@Override\npublic void executeJobService() throws Exception {\n// do great stuff here!\n}\n}\n
SimpleManagedJob
class we'll configure in Blueprint XML will automatically add a schedule
setting to configure the job schedule.
title = Super-duper Job\ndesc = This job does it all.\n\nschedule.key = Schedule\nschedule.desc = The schedule to execute the job at. \\\nCan be either a number representing a frequency in <b>milliseconds</b> \\\nor a <a href=\"{0}\">cron expression</a>, for example <code>0 * * * * *</code>.\n
<service interface=\"net.solarnetwork.node.job.ManagedJob\"><!-- (1)! -->\n<service-properties>\n<entry key=\"service.pid\" value=\"com.example.job\"/>\n</service-properties>\n<bean class=\"net.solarnetwork.node.job.SimpleManagedJob\"><!-- (2)! -->\n<argument>\n<bean class=\"com.example.Job\">\n<property name=\"uid\" value=\"com.example.job\"/><!-- (3)! -->\n<property name=\"messageSource\">\n<bean class=\"org.springframework.context.support.ResourceBundleMessageSource\">\n<property name=\"basenames\" value=\"com.example.Job\"/>\n</bean>\n</property>\n</bean>\n</argument>\n<property name=\"schedule\" value=\"0 * * * * *\"/>\n</bean>\n</service>\n
ManagedJob
service with the SolarNode runtime. SimpleManagedJob
class is a handy ManagedJob
implementation. It adds a schedule
setting to any settings returned by the JobService. uid
value should match the service.pid
used earlier, which matches the value returned by the getSettingUid()
method in the Job
class. When this plugin is deployed in SolarNode, the component will appear on the main Settings page and offer a configurable Schedule setting, like this:
"},{"location":"developers/services/placeholder-service/","title":"Placeholder Service","text":"The Placeholder Service API provides components a way to resolve variables in strings, known as placeholders, whose values are managed outside the component itself. For example a datum data source plugin could use the Placeholder Service to support resolving placeholders in a configurable Source ID property.
SolarNode provides a Placeholder Service implementation that resolves both dynamic placeholders from the Settings Database (using the setting namespace placeholder
), and static placeholders from a configurable file or directory location.
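For example, a static placeholders properties file (the file name and values here are invented for illustration) dropped into the configured directory could look like:

```properties
# static placeholder values, loaded when SolarNode starts
building = warehouse-1
floor = 2
```

With this in place, resolving the string {building}/temp would produce warehouse-1/temp.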
Call the resolvePlaceholders(s, parameters)
method to resolve all placeholders on the String s
. The parameters
argument can be used to provide additional placeholder values, or you can pass null
to rely solely on the placeholders available in the service already.
Here is an imaginary class that is constructed with an optional PlaceholderService
, and then when the go()
method is called uses that to resolve placeholders in the string {building}/temp
and return the result:
package com.example;\n\nimport net.solarnetwork.node.service.PlaceholderService;\nimport net.solarnetwork.service.OptionalService;\n\npublic class MyComponent {\n\nprivate final OptionalService<PlaceholderService> placeholderService;\n\npublic MyComponent(OptionalService<PlaceholderService> placeholderService) {\nsuper();\nthis.placeholderService = placeholderService;\n}\n\npublic String go() {\nreturn PlaceholderService.resolvePlaceholders(placeholderService,\n\"{building}/temp\", null);\n}\n}\n
"},{"location":"developers/services/placeholder-service/#blueprint","title":"Blueprint","text":"To use the Placeholder Service in your component, add either an Optional Service or explicit reference to your plugin's Blueprint XML file like this (depending on what your plugin requires):
Optional ServiceExplicit Reference<bean id=\"placeholderService\" class=\"net.solarnetwork.common.osgi.service.DynamicServiceTracker\">\n<argument ref=\"bundleContext\"/>\n<property name=\"serviceClassName\" value=\"net.solarnetwork.node.service.PlaceholderService\"/>\n<property name=\"sticky\" value=\"true\"/>\n</bean>\n
<reference id=\"placeholderService\" interface=\"net.solarnetwork.node.service.PlaceholderService\"/>\n
Then inject that service into your component's <bean>
, for example:
<bean id=\"myComponent\" class=\"com.example.MyComponent\">\n<argument ref=\"placeholderService\"/>\n</bean>\n
"},{"location":"developers/services/placeholder-service/#configuration","title":"Configuration","text":"The Placeholder Service supports the following configuration properties in the net.solarnetwork.node.core
namespace:
placeholders.dir
Default ${CONF_DIR}/placeholders.d. Path to a single properties file or to a directory of properties files to load as static placeholder parameter values when SolarNode starts up."},{"location":"developers/services/settings-db/","title":"Settings Database","text":""},{"location":"developers/services/settings-service/","title":"Settings Service","text":"TODO
"},{"location":"developers/services/sql-database/","title":"SQL Database","text":"The SolarNode runtime provides a local SQL database that is used to hold application settings, data sampled from devices, or anything really. Some data is designed to live only in this local store (such as settings) while other data eventually gets pushed up into the SolarNet cloud. This document describes the most common aspects of the local database.
"},{"location":"developers/services/sql-database/#database-implementation","title":"Database implementation","text":"The database is provided by either the H2 or Apache Derby embedded SQL database engine.
Note
In SolarNodeOS the solarnode-app-db-h2 and solarnode-app-db-derby packages provide the H2 and Derby database implementations. Most modern SolarNode deployments use H2.
Typically the database is configured to run entirely within RAM on devices that support it, and the RAM copy is periodically synced to non-volatile media so if the device restarts the persisted copy of the database can be loaded back into RAM. This pattern works well because:
A standard JDBC stack is available and normal SQL queries are used to access the database. The Hikari JDBC connection pool provides a javax.sql.DataSource
for direct JDBC access. The pool is configured by factory configuration files in the net.solarnetwork.jdbc.pool.hikari
namespace. See the net.solarnetwork.jdbc.pool.hikari-solarnode.cfg as an example.
To make use of the DataSource
from a plugin using OSGi Blueprint you can declare a reference like this:
<reference id=\"dataSource\" interface=\"javax.sql.DataSource\" filter=\"(db=node)\" />\n
The net.solarnetwork.node.dao.jdbc bundle publishes some other JDBC services for plugins to use, such as:
org.springframework.jdbc.core.JdbcOperations
for slightly higher-level JDBC accessorg.springframework.transaction.PlatformTransactionManager
for JDBC transaction supportTo make use of these from a plugin using OSGi Blueprint you can declare references to these APIs like this:
<reference id=\"jdbcOps\" interface=\"org.springframework.jdbc.core.JdbcOperations\"\nfilter=\"(db=node)\" />\n\n<reference id=\"txManager\" interface=\"org.springframework.transaction.PlatformTransactionManager\"\nfilter=\"(db=node)\" />\n
"},{"location":"developers/services/sql-database/#high-level-access-data-access-object-dao","title":"High level access: Data Access Object (DAO)","text":"The SolarNode runtime also provides some Data Access Object (DAO) services that make storing some typical data easier:
net.solarnetwork.node.dao.SettingDao
for access to the Settings Databasenet.solarnetwork.node.dao.DatumDao
for access to the Datum DatabaseTo make use of these from a plugin using OSGi Blueprint you can declare references to these APIs like this:
<reference id=\"settingDao\" interface=\"net.solarnetwork.node.dao.SettingDao\"/>\n\n<reference id=\"datumDao\" interface=\"net.solarnetwork.node.dao.DatumDao\"/>\n
"},{"location":"developers/services/task-executor/","title":"Task Executor","text":"To support asynchronous task execution, SolarNode makes several thread-pool based services available to plugins:
java.util.concurrent.Executor
service for standard Runnable
task executionTaskExecutor
service for Runnable
task executionAsyncTaskExecutor
service for both Runnable
and Callable
task executionAsyncListenableTaskExecutor
service for both Runnable
and Callable
task execution that supports the org.springframework.util.concurrent.ListenableFuture
APINeed to schedule tasks?
See the Task Scheduler page for information on scheduling simple tasks, or the Job Scheduler page for information on scheduling managed jobs.
To make use of any of these services from a plugin using OSGi Blueprint you can declare a reference to them like this:
<reference id=\"executor\" interface=\"java.util.concurrent.Executor\"\nfilter=\"(function=node)\"/>\n\n<reference id=\"taskExecutor\" interface=\"org.springframework.core.task.TaskExecutor\"\nfilter=\"(function=node)\"/>\n\n<reference id=\"asyncTaskExecutor\" interface=\"org.springframework.core.task.AsyncTaskExecutor\"\nfilter=\"(function=node)\"/>\n\n<reference id=\"asyncListenableTaskExecutor\" interface=\"org.springframework.core.task.AsyncListenableTaskExecutor\"\nfilter=\"(function=node)\"/>\n
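Using the injected service is then plain java.util.concurrent code. The sketch below substitutes a local fixed-size pool for the SolarNode-provided Executor so it can run on its own:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TaskExecutorDemo {

	// submit `count` tiny tasks to the executor and return how many completed
	static int runTasks(Executor executor, int count) {
		AtomicInteger completed = new AtomicInteger();
		CountDownLatch latch = new CountDownLatch(count);
		for ( int i = 0; i < count; i++ ) {
			executor.execute(() -> {
				completed.incrementAndGet();
				latch.countDown();
			});
		}
		try {
			latch.await(5, TimeUnit.SECONDS); // wait for the tasks to finish
		} catch ( InterruptedException e ) {
			Thread.currentThread().interrupt();
		}
		return completed.get();
	}

	public static void main(String[] args) {
		// stand-in for the Executor a plugin would receive via Blueprint
		ExecutorService pool = Executors.newFixedThreadPool(2);
		System.out.println(runTasks(pool, 4)); // 4
		pool.shutdown();
	}
}
```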
"},{"location":"developers/services/task-executor/#thread-pool-configuration","title":"Thread pool configuration","text":"This thread pool is configured as a fixed-size pool with the number of threads set to the number of CPU cores detected at runtime, plus one. For example on a Raspberry Pi 4 there are 4 CPU cores so the thread pool would be configured with 5 threads.
"},{"location":"developers/services/task-scheduler/","title":"Task Scheduler","text":"To support asynchronous task scheduling, SolarNode provides a Spring TaskScheduler service to plugins.
The Job Scheduler
For user-configurable scheduled tasks, check out the Job Scheduler service.
To make use of this service from a plugin using OSGi Blueprint you can declare a reference like this:
<reference id=\"taskScheduler\" interface=\"org.springframework.scheduling.TaskScheduler\"\nfilter=\"(function=node)\"/>\n
"},{"location":"developers/services/task-scheduler/#configuration","title":"Configuration","text":"The Task Scheduler supports the following configuration properties in the net.solarnetwork.node.core
namespace:
jobScheduler.poolSize
Default 10. The number of threads to maintain in the job scheduler, and thus the maximum number of jobs that can run simultaneously. Must be set to 1 or higher. scheduler.startupDelay
Default 180. A delay in seconds after creating the job scheduler to start triggering jobs. This can be useful to give the application time to completely initialize before starting to run jobs. For example, to change the thread pool size to 20 and shorten the startup delay to 30 seconds, create an /etc/solarnode/services/net.solarnetwork.node.core.cfg
file with the following content:
jobScheduler.poolSize = 20\nscheduler.startupDelay = 30\n
"},{"location":"developers/settings/","title":"Settings","text":"SolarNode provides a way for plugin components to describe their user-configurable properties, called settings, to the platform. SolarNode provides a web-based UI that makes it easy for users to configure those components using a web browser. For example, here is a screen shot of the SolarNode UI showing a form for the settings of a Database Backup component:
The mechanism for components to describe themselves in this way is called the Settings API. Classes that wish to participate in this system publish metadata about their configurable properties through the Settings Provider API, and then SolarNode generates a UI form based on that metadata. Each form field in the previous example image is a Setting Specifier.
The process is similar to the built-in Settings app on iOS: iOS applications can publish configurable property definitions and the Settings app displays a UI that allows users to modify those properties.
"},{"location":"developers/settings/factory/","title":"Factory Service","text":""},{"location":"developers/settings/provider/","title":"Settings Provider","text":"The net.solarnetwork.settings.SettingSpecifierProvider
interface defines the way a class can declare itself as a user-configurable component. The main elements of this API are:
public interface SettingSpecifierProvider {\n\n/**\n * Get a unique, application-wide setting ID.\n *\n * @return unique ID\n */\nString getSettingUid();\n\n/**\n * Get a non-localized display name.\n *\n * @return non-localized display name\n */\nString getDisplayName();\n\n/**\n * Get a list of {@link SettingSpecifier} instances.\n *\n * @return list of {@link SettingSpecifier}\n */\nList<SettingSpecifier> getSettingSpecifiers();\n\n}\n
The getSettingUid()
method defines a unique ID for the configurable component. By convention the class or package name of the component (or a derivative of it) is used as the ID.
The getSettingSpecifiers()
method returns a list of all the configurable properties of the component, as a list of Setting Specifier instances.
private String username;\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>(1);\n\n// expose a \"username\" setting with a default value of \"admin\"\nresults.add(new BasicTextFieldSettingSpecifier(\"username\", \"admin\"));\n\nreturn results;\n}\n\n// settings are updated at runtime via standard setter methods\npublic void setUsername(String username) {\nthis.username = username;\n}\n
Setting values are treated as strings within the Settings API, but the methods associated with settings can accept any primitive or standard number type like int
or Integer
as well.
private BigDecimal num;\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>(1);\n\nresults.add(new BasicTextFieldSettingSpecifier(\"num\", null));\n\nreturn results;\n}\n\n// settings will be coerced from strings into basic types automatically\npublic void setNum(BigDecimal num) {\nthis.num = num;\n}\n
"},{"location":"developers/settings/provider/#proxy-setting-accessors","title":"Proxy setting accessors","text":"Sometimes you might like to expose a simple string setting but internally treat the string as a more complex type. For example a Map
could be configured using a simple delimited string like key1 = val1, key2 = val2
. For situations like this you can publish a proxy setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
private Map<String, String> map;\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>(1);\n\n// expose a \"mapping\" proxy setting for the map field\nresults.add(new BasicTextFieldSettingSpecifier(\"mapping\", null));\n\nreturn results;\n}\n\npublic void setMapping(String mapping) {\nthis.map = StringUtils.commaDelimitedStringToMap(mapping);\n}\n
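The matching getter would encode the Map back into the delimited string so the current value can be shown in the settings form. This sketch hand-rolls the encoding to stay self-contained (the real component would pair it with the StringUtils.commaDelimitedStringToMap() decoding shown above):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class ProxySettingDemo {

	// encode a Map back into the "key1 = val1, key2 = val2" setting form
	static String mappingValue(Map<String, String> map) {
		if ( map == null || map.isEmpty() ) {
			return null;
		}
		return map.entrySet().stream()
				.map(e -> e.getKey() + " = " + e.getValue())
				.collect(Collectors.joining(", "));
	}

	public static void main(String[] args) {
		Map<String, String> map = new LinkedHashMap<>();
		map.put("key1", "val1");
		map.put("key2", "val2");
		System.out.println(mappingValue(map)); // key1 = val1, key2 = val2
	}
}
```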
"},{"location":"developers/settings/resource-handler/","title":"Setting Resource Handler","text":"The net.solarnetwork.node.settings.SettingResourceHandler
API defines a way for a component to import and export files uploaded to SolarNode from external sources.
A component could support importing a file using the File setting. This could be used to configure the component from a configuration file in a format like CSV, JSON, or XML. Similarly, a component could support exporting a file, generating a configuration file in one of those formats from its current settings. For example, the Modbus Device Datum Source does exactly these things: it imports and exports a custom CSV file to make configuring the component easier.
"},{"location":"developers/settings/resource-handler/#importing","title":"Importing","text":"The main part of the SettingResourceHandler
API for importing files looks like this:
public interface SettingResourceHandler {\n\n/**\n * Get a unique, application-wide setting ID.\n *\n * <p>\n * This ID must be unique across all setting resource handlers registered\n * within the system. Generally the implementation will also be a\n * {@link net.solarnetwork.settings.SettingSpecifierProvider} for the same\n * ID.\n * </p>\n *\n * @return unique ID\n */\nString getSettingUid();\n\n/**\n * Apply settings for a specific key from a resource.\n *\n * @param settingKey\n * the setting key, generally a\n * {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}\n * value\n * @param resources\n * the resources with the settings to apply\n * @return any setting values that should be persisted as a result of\n * applying the given resources (never {@literal null})\n * @throws IOException\n * if any IO error occurs\n */\nSettingsUpdates applySettingResources(String settingKey, Iterable<Resource> resources)\nthrows IOException;\n
The getSettingUid()
method overlaps with the Settings Provider API, and as the comments note it is typical for a Settings Provider that publishes settings like File or Text Area to also implement SettingResourceHandler
.
The settingKey
passed to the applySettingResources()
method identifies the resource(s) being uploaded, as a single Setting Resource Handler might support multiple resources. For example, a Settings Provider might publish multiple File settings, or File and Text Area settings. The settingKey
is used to differentiate between each one.
Imagine a component that publishes a File setting. A typical implementation of that component would look like this (this example omits some methods for brevity):
public class MyComponent implements SettingSpecifierProvider,\nSettingResourceHandler {\n\nprivate static final Logger log\n= LoggerFactory.getLogger(MyComponent.class);\n\n/** The resource key to identify the File setting resource. */\npublic static final String RESOURCE_KEY_DOCUMENT = \"document\";\n\n@Override\npublic String getSettingUid() {\nreturn \"com.example.mycomponent\";\n}\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>();\n\n// publish a File setting tied to the RESOURCE_KEY_DOCUMENT key,\n// allowing only text files to be accepted\nresults.add(new BasicFileSettingSpecifier(RESOURCE_KEY_DOCUMENT, null,\nnew LinkedHashSet<>(asList(\".txt\", \"text/*\")), false));\n\nreturn results;\n}\n\n@Override\npublic SettingsUpdates applySettingResources(String settingKey,\nIterable<Resource> resources) throws IOException {\nif ( resources == null ) {\nreturn null;\n}\nif ( RESOURCE_KEY_DOCUMENT.equals(settingKey) ) {\nfor ( Resource r : resources ) {\n// here we would do something useful with the resource... like\n// read into a string and log it\nString s = FileCopyUtils.copyToString(new InputStreamReader(\nr.getInputStream(), StandardCharsets.UTF_8));\n\nlog.info(\"Got {} resource content: {}\", settingKey, s);\n\nbreak; // only accept one file\n}\n}\nreturn null;\n}\n\n}\n
"},{"location":"developers/settings/resource-handler/#exporting","title":"Exporting","text":"The part of the Setting Resource Handler API that supports exporting setting resources looks like this:
/**\n * Get a list of supported setting keys for the\n * {@link #currentSettingResources(String)} method.\n *\n * @return the set of supported keys\n */\ndefault Collection<String> supportedCurrentResourceSettingKeys() {\nreturn Collections.emptyList();\n}\n\n/**\n * Get the current setting resources for a specific key.\n *\n * @param settingKey\n * the setting key, generally a\n * {@link net.solarnetwork.settings.KeyedSettingSpecifier#getKey()}\n * value\n * @return the resources, never {@literal null}\n */\nIterable<Resource> currentSettingResources(String settingKey);\n
The supportedCurrentResourceSettingKeys()
method returns a set of resource keys the component supports for exporting. The currentSettingResources()
method returns the resources to export for a given key.
The SolarNode UI shows a form menu with all the available resources for all components that support the SettingResourceHandler
API, and lets the user download them:
Here is an example of a component that supports exporting a CSV file resource based on the component's current configuration:
public class MyComponent implements SettingSpecifierProvider,\nSettingResourceHandler {\n\n/** The setting resource key for a CSV configuration file. */\npublic static final String RESOURCE_KEY_CSV_CONFIG = \"csvConfig\";\n\nprivate int max = 1;\nprivate boolean enabled = true;\n\n@Override\npublic Collection<String> supportedCurrentResourceSettingKeys() {\nreturn Collections.singletonList(RESOURCE_KEY_CSV_CONFIG);\n}\n\n@Override\npublic Iterable<Resource> currentSettingResources(String settingKey) {\nif ( !RESOURCE_KEY_CSV_CONFIG.equals(settingKey) ) {\nreturn null;\n}\n\nStringBuilder buf = new StringBuilder();\nbuf.append(\"max,enabled\\r\\n\");\nbuf.append(max).append(',').append(enabled).append(\"\\r\\n\");\n\nreturn Collections.singleton(new ByteArrayResource(\nbuf.toString().getBytes(UTF_8), \"My Component CSV Config\") {\n\n@Override\npublic String getFilename() {\nreturn \"my-component-config.csv\";\n}\n\n});\n}\n}\n
"},{"location":"developers/settings/singleton/","title":"Singleton Service","text":""},{"location":"developers/settings/specifier/","title":"Setting Specifier","text":"The net.solarnetwork.settings.SettingSpecifier
API defines metadata for a single configurable property in the Settings API. The API looks like this:
public interface SettingSpecifier {\n\n/**\n * A unique identifier for the type of setting specifier this represents.\n *\n * <p>\n * Generally this will be a fully-qualified interface name.\n * </p>\n *\n * @return the type\n */\nString getType();\n\n/**\n * Localizable text to display with the setting's content.\n *\n * @return the title\n */\nString getTitle();\n\n}\n
This interface is very simple, and is extended by more specialized interfaces that form more useful setting types.
Note
A SettingSpecifier
instance is often referred to simply as a setting.
Here is a view of the class hierarchy that builds off of this interface:
Note
The SettingSpecifier
API defines metadata about a configurable property, but not methods to view or change that property's value. The Settings Service provides methods for managing setting values.
The Settings Playpen plugin demonstrates most of the available setting types, and is a great way to see how the settings can be used.
"},{"location":"developers/settings/specifier/#text-field","title":"Text Field","text":"The TextFieldSettingSpecifier
defines a simple string-based configurable property and is the most common setting type. The setting defines a key
that maps to a setter method on its associated component class. In the SolarNode UI a text field is rendered as an HTML form text input, like this:
The net.solarnetwork.settings.support.BasicTextFieldSettingSpecifier
class provides the standard implementation of this API. A standard text field setting is created like this:
new BasicTextFieldSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\");\n\n// or without any default value\nnew BasicTextFieldSettingSpecifier(\"myProperty\", null);\n
Tip
Setting values are generally treated as strings within the Settings API; however, other basic data types such as integers and numbers can be used as well. You can also publish a \"proxy\" setting that manages a complex data type as a string, and en/decode the complex type in your component accessor methods.
For example a Map<String, String>
setting could be published as a text field setting that en/decodes the Map
into a delimited string value, for example name=Test, color=red
.
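Such a proxy accessor pair can be sketched in a stand-alone form. The `encode`/`decode` logic below is an illustrative stand-in written for this example, assuming the `name=Test, color=red` delimited format described above; it is not the actual SolarNetwork `StringUtils` implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapProxySetting {

	private Map<String, String> map = new LinkedHashMap<>();

	// decode a "name=Test, color=red" style string into the map field
	public void setMapping(String mapping) {
		Map<String, String> result = new LinkedHashMap<>();
		if ( mapping != null ) {
			for ( String pair : mapping.split(",") ) {
				String[] kv = pair.split("=", 2);
				if ( kv.length == 2 ) {
					result.put(kv[0].trim(), kv[1].trim());
				}
			}
		}
		this.map = result;
	}

	// encode the map field back into the delimited string form
	public String getMapping() {
		StringBuilder buf = new StringBuilder();
		for ( Map.Entry<String, String> e : map.entrySet() ) {
			if ( buf.length() > 0 ) {
				buf.append(", ");
			}
			buf.append(e.getKey()).append('=').append(e.getValue());
		}
		return buf.toString();
	}

	public Map<String, String> getMap() {
		return map;
	}
}
```

The text field setting would then be published against the `mapping` key, while the rest of the component works with the decoded `Map` directly.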
The BasicTextFieldSettingSpecifier
can also be used for \"secure\" text fields where the field's content is obscured from view. In the SolarNode UI a secure text field is rendered as an HTML password form input like this:
A standard secure text field setting is created by passing a third true
argument, like this:
new BasicTextFieldSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\", true);\n\n// or without any default value\nnew BasicTextFieldSettingSpecifier(\"myProperty\", null, true);\n
"},{"location":"developers/settings/specifier/#title","title":"Title","text":"The TitleSettingSpecifier
defines a simple read-only string-based configurable property. The setting defines a key
that maps to a setter method on its associated component class. In the SolarNode UI the default value is rendered as plain text, like this:
The net.solarnetwork.settings.support.BasicTitleSettingSpecifier
class provides the standard implementation of this API. A standard title setting is created like this:
new BasicTitleSettingSpecifier(\"status\", \"Status is good.\", true);\n
"},{"location":"developers/settings/specifier/#html-title","title":"HTML Title","text":"The TitleSettingSpecifier
supports HTML markup. In the SolarNode UI the default value is rendered directly into HTML, like this:
// pass `true` as the 4th argument to enable HTML markup in the status value\nnew BasicTitleSettingSpecifier(\"status\", \"Status is <b>good</b>.\", true, true);\n
"},{"location":"developers/settings/specifier/#text-area","title":"Text Area","text":"The TextAreaSettingSpecifier
defines a simple string-based configurable property for a larger text value, loaded as an external file using the SettingResourceHandler API. In the SolarNode UI a text area is rendered as an HTML form text area with an associated button to upload the content, like this:
The net.solarnetwork.settings.support.BasicTextAreaSettingSpecifier
class provides the standard implementation of this API. A standard text area setting is created like this:
new BasicTextAreaSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\");\n\n// or without any default value\nnew BasicTextAreaSettingSpecifier(\"myProperty\", null);\n
"},{"location":"developers/settings/specifier/#direct-text-area","title":"Direct Text Area","text":"The BasicTextAreaSettingSpecifier
can also be used for \"direct\" text areas where the field's content is not uploaded as an external file. In the SolarNode UI a direct text area is rendered as an HTML form text area, like this:
A standard direct text area setting is created by passing a third true
argument, like this:
new BasicTextAreaSettingSpecifier(\"myProperty\", \"DEFAULT_VALUE\", true);\n\n// or without any default value\nnew BasicTextAreaSettingSpecifier(\"myProperty\", null, true);\n
"},{"location":"developers/settings/specifier/#toggle","title":"Toggle","text":"The ToggleSettingSpecifier
defines a boolean configurable property. In the SolarNode UI a toggle setting is rendered as an HTML form button, like this:
The net.solarnetwork.settings.support.BasicToggleSettingSpecifier
class provides the standard implementation of this API. A standard toggle setting is created like this:
new BasicToggleSettingSpecifier(\"enabled\", false); // default \"off\"\n\nnew BasicToggleSettingSpecifier(\"enabled\", true); // default \"on\"\n
"},{"location":"developers/settings/specifier/#slider","title":"Slider","text":"The SliderSettingSpecifier
defines a number-based configuration property with minimum and maximum values enforced, and a step limit. In the SolarNode UI a slider is rendered as an HTML widget, like this:
The net.solarnetwork.settings.support.BasicSliderSettingSpecifier
class provides the standard implementation of this API. A standard Slider setting is created like this:
// no default value, range between 0-11 in 0.5 increments\nnew BasicSliderSettingSpecifier(\"volume\", null, 0.0, 11.0, 0.5);\n\n// default value 5.0, range between 0-11 in 0.5 increments\nnew BasicSliderSettingSpecifier(\"volume\", 5.0, 0.0, 11.0, 0.5);\n
"},{"location":"developers/settings/specifier/#radio-group","title":"Radio Group","text":"The RadioGroupSettingSpecifier
defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode UI a radio group is rendered as a set of HTML radio input form fields, like this:
The net.solarnetwork.settings.support.BasicRadioGroupSettingSpecifier
class provides the standard implementation of this API. A standard RadioGroup setting is created like this:
String[] vals = new String[] {\"a\", \"b\", \"c\"};\nString[] labels = new String[] {\"One\", \"Two\", \"Three\"};\nMap<String, String> radioValues = new LinkedHashMap<>(3);\nfor ( int i = 0; i < vals.length; i++ ) {\nradioValues.put(vals[i], labels[i]);\n}\nBasicRadioGroupSettingSpecifier radio =\nnew BasicRadioGroupSettingSpecifier(\"option\", vals[0]);\nradio.setValueTitles(radioValues);\n
"},{"location":"developers/settings/specifier/#multi-value","title":"Multi-value","text":"The MultiValueSettingSpecifier
defines a configurable property that accepts a single value from a fixed set of possible values. In the SolarNode UI a multi-value setting is rendered as an HTML select form field, like this:
The net.solarnetwork.settings.support.BasicMultiValueSettingSpecifier
class provides the standard implementation of this API. A standard MultiValue setting is created like this:
String[] vals = new String[] {\"a\", \"b\", \"c\"};\nString[] labels = new String[] {\"Option 1\", \"Option 2\", \"Option 3\"};\nMap<String, String> menuValues = new LinkedHashMap<>(3);\nfor ( int i = 0; i < vals.length; i++ ) {\nmenuValues.put(vals[i], labels[i]);\n}\nBasicMultiValueSettingSpecifier menu = new BasicMultiValueSettingSpecifier(\"option\",\nvals[0]);\nmenu.setValueTitles(menuValues);\n
"},{"location":"developers/settings/specifier/#file","title":"File","text":"The FileSettingSpecifier
defines a file-based resource property, loaded as an external file using the SettingResourceHandler API. In the SolarNode UI a file setting is rendered as an HTML file input, like this:
The net.solarnetwork.node.settings.support.BasicFileSettingSpecifier
class provides the standard implementation of this API. A standard file setting is created like this:
// a single file only, no default content\nnew BasicFileSettingSpecifier(\"document\", null,\nnew LinkedHashSet<>(Arrays.asList(\".txt\", \"text/*\")), false);\n\n// multiple files allowed, no default content\nnew BasicFileSettingSpecifier(\"document-list\", null,\nnew LinkedHashSet<>(Arrays.asList(\".txt\", \"text/*\")), true);\n
"},{"location":"developers/settings/specifier/#dynamic-list","title":"Dynamic List","text":"A Dynamic List setting allows the user to manage a list of homogeneous items, adding or subtracting items as desired. The items can be literals like strings, or arbitrary objects that define their own settings. In the SolarNode UI a dynamic list setting is rendered as a pair of HTML buttons to remove and add items, like this:
A Dynamic List is often backed by a Java Collection
or array in the associated component. In addition a special size-adjusting accessor method is required, named after the setter method with Count
appended. SolarNode will use this accessor to request a specific size for the dynamic list.
private String[] names = new String[0];\n\npublic String[] getNames() {\nreturn names;\n}\n\npublic void setNames(String[] names) {\nthis.names = names;\n}\n\npublic int getNamesCount() {\nString[] l = getNames();\nreturn (l == null ? 0 : l.length);\n}\n\npublic void setNamesCount(int count) {\nsetNames(ArrayUtils.arrayOfLength(\ngetNames(), count, String.class, String::new));\n}\n
private List<String> names = new ArrayList<>();\n\npublic List<String> getNames() {\nreturn names;\n}\n\npublic void setNames(List<String> names) {\nthis.names = names;\n}\n\npublic int getNamesCount() {\nList<String> l = getNames();\nreturn (l == null ? 0 : l.size());\n}\n\npublic void setNamesCount(int count) {\nif ( count < 0 ) {\ncount = 0;\n}\nList<String> l = getNames();\nint lCount = (l == null ? 0 : l.size());\nwhile ( lCount > count ) {\nl.remove(l.size() - 1);\nlCount--;\n}\nif ( l == null && count > 0 ) {\nl = new ArrayList<>();\nsetNames(l);\n}\nwhile ( lCount < count ) {\nl.add(\"\");\nlCount++;\n}\n}\n
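The resizing behaviour of the `Count` accessor pair can be exercised in a self-contained sketch. The `Arrays.copyOf`-based resize below is a stand-in for SolarNode's `ArrayUtils.arrayOfLength` helper, used here only so the example runs on its own:

```java
import java.util.Arrays;

public class NamesComponent {

	private String[] names = new String[0];

	public String[] getNames() {
		return names;
	}

	public void setNames(String[] names) {
		this.names = names;
	}

	public int getNamesCount() {
		return (names == null ? 0 : names.length);
	}

	// size-adjusting accessor: grow or shrink the array, filling
	// any newly added slots with empty strings
	public void setNamesCount(int count) {
		if ( count < 0 ) {
			count = 0;
		}
		String[] resized = Arrays.copyOf(names == null ? new String[0] : names, count);
		for ( int i = 0; i < resized.length; i++ ) {
			if ( resized[i] == null ) {
				resized[i] = "";
			}
		}
		setNames(resized);
	}
}
```

When the user adds or removes an item in the SolarNode UI, the framework calls `setNamesCount()` with the requested size before applying the individual element settings.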
The SettingUtils.dynamicListSettingSpecifier()
method simplifies the creation of a GroupSettingSpecifier
that represents a dynamic list (the examples in the following sections demonstrate this).
A simple Dynamic List is a dynamic list of string or number values.
private String[] names = new String[0];\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>();\n\n// turn a list of strings into a Group of TextField settings\nGroupSettingSpecifier namesList = SettingUtils.dynamicListSettingSpecifier(\n\"names\", asList(names), (String value, int index, String key) ->\nsingletonList(new BasicTextFieldSettingSpecifier(key, null)));\nresults.add(namesList);\n\nreturn results;\n}\n
"},{"location":"developers/settings/specifier/#complex-dynamic-list","title":"Complex Dynamic List","text":"A complex Dynamic List is a dynamic list of arbitrary object values. The main difference in terms of the necessary settings structure required, compared to a Simple Dynamic List, is that a group-of-groups is used.
Complex data classDynamic List settingpublic class Person {\nprivate String firstName;\nprivate String lastName;\n\n// generate list of settings for a Person, nested under some prefix\npublic List<SettingSpecifier> settings(String prefix) {\nList<SettingSpecifier> results = new ArrayList<>(2);\nresults.add(new BasicTextFieldSettingSpecifier(prefix + \"firstName\", null));\nresults.add(new BasicTextFieldSettingSpecifier(prefix + \"lastName\", null));\nreturn results;\n}\n\npublic void setFirstName(String firstName) {\nthis.firstName = firstName;\n}\n\npublic void setLastName(String lastName) {\nthis.lastName = lastName;\n}\n}\n
private Person[] people = new Person[0];\n\n@Override\npublic List<SettingSpecifier> getSettingSpecifiers() {\nList<SettingSpecifier> results = new ArrayList<>();\n\n// turn a list of People into a Group of Group settings\nGroupSettingSpecifier peopleList = SettingUtils.dynamicListSettingSpecifier(\n\"people\", asList(people), (Person value, int index, String key) ->\nsingletonList(new BasicGroupSettingSpecifier(\nvalue.settings(key + \".\"))));\nresults.add(peopleList);\n\nreturn results;\n}\n
"},{"location":"users/","title":"User Guide","text":"This section of the handbook is geared towards users who will be deploying and managing one or more SolarNode devices.
See the Getting Started page to learn how to:
See the Setup App section to learn how to configure SolarNode.
"},{"location":"users/configuration/","title":"Configuration","text":"Some SolarNode components can be configured from properties files. This type of configuration is meant to be changed just once, when a SolarNode is first deployed, to alter some default configuration value.
Not to be confused with Settings
This type of configuration differs from the Settings that the Setup App provides a UI for managing on its Settings page. This configuration might be created by system administrators when creating a custom SolarNodeOS image for their needs, while Settings are meant to be managed by end users.
Configuration properties files are read from the /etc/solarnode/services
directory and named like NAMESPACE.cfg
, where NAMESPACE
represents a configuration namespace.
Configuration location
The /etc/solarnode/services
location is the default location in SolarNodeOS. It might be another location in other SolarNode deployments.
Imagine a component uses the configuration namespace com.example.service
and supports a configurable property named max-threads
that accepts an integer value you would like to configure as 4
. You would create a com.example.service.cfg
file like:
max-threads = 4\n
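These `.cfg` files use simple `key = value` lines. As a rough illustration of the format (this is not how SolarNode itself loads them), the standard Java `Properties` reader can parse such content:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class CfgExample {

	// parse "key = value" configuration content into Properties
	public static Properties parse(String content) {
		Properties props = new Properties();
		try {
			props.load(new StringReader(content));
		} catch ( IOException e ) {
			throw new UncheckedIOException(e);
		}
		return props;
	}

	public static void main(String[] args) {
		Properties props = parse("max-threads = 4\n");
		System.out.println(props.getProperty("max-threads")); // prints 4
	}
}
```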
"},{"location":"users/datum/","title":"Datum","text":"In SolarNetwork a datum is the fundamental time-stamped data structure collected by SolarNodes and stored in SolarNet. It is a collection of properties associated with a specific information source at a specific time.
Example plain language description of a datum
the temperature and humidity collected from my weather station at 1 Jan 2023 11:00 UTC
In this example datum description, we have all the comopnents of a datum:
Datum component Description node the (implied) node that collected the data properties temperature and humidity source my weather station time 1 Jan 2023 11:00 UTCA datum stream is the collection of datum from a single node for a single source over time.
A datum object is modeled as a flexible structure with the following core elements:
Element Type DescriptionnodeId
number A unique ID assigned to nodes by SolarNetwork. sourceId
string A node-unique identifier that defines a single stream of data from a specific source, up to 64 characters long. Certain characters are not allowed, see below. created
date A time stamp of when the datum was collected, or the date the datum is associated with. samples
datum samples The collected properties. A datum is uniquely identified by the three combined properties (nodeId
, sourceId
, created
).
Source IDs are user-defined strings used to distinguish between different information sources within a single node. For example, a node might collect data from an energy meter on source ID Meter
and a solar inverter on Solar
. SolarNetwork does not place any restrictions on source ID values, other than a 64-character limit. However, there is are some conventions used within SolarNetwork that are useful to follow, especially for larger deployment of nodes with many source IDs:
Meter1
is better than Schneider ION6200 Meter - Main Building
./S1/B1/M1
could imply the first meter in the first building on the first site.+
and #
characters should not be used. This is actually a constraint in the MQTT protocol used in parts of SolarNetwork, where the MQTT topic name includes the source ID. These characters are MQTT topic filter wildcards, and cannot be used in topic names.The path-like structure becomes useful in places where wildcard patterns are used, like security policies or datum queries. It is generally worthwhile spending some time planning on a source ID taxonomy to use when starting a new project with SolarNetwork.
"},{"location":"users/datum/#datum-samples","title":"Datum samples","text":"The properties included in a datum object are known as datum samples. The samples are modeled as a collection of named properties, for example the temperature and humidity properties in the earlier example datum could be represented like this:
Example representation of datum samples from a weather station source{\n\"temperature\" : 21.5,\n\"humidity\" : 68\n}\n
Another datum samples acquired from a power meter might look like this:
Example representation of datum samples from a power meter source{\n\"watts\" : 2150,\n\"wattHours\" : 6834834349,\n\"mode\" : \"auto\"\n}\n
"},{"location":"users/datum/#datum-property-classifications","title":"Datum property classifications","text":"The datum samples are actually further organized into three classifications:
Classification Key Description instantaneousi
a single reading, observation, or measurement that does not accumulate over time accumulating a
a reading that accumulates over time, like a meter or odometer status s
non-numeric data, like status codes or error messages These classifications help SolarNetwork understand how to aggregate the datum samples over time. When SolarNode uploads a datum to SolarNetwork, the sample will include the classification of each property. The previous example would thus more accurately be represented like this:
Example representation of datum samples with classifications{\n\"i\": {\n\"watts\" : 2150 // (1)!\n},\n\"a\": {\n\"wattHours\" : 6834834349 // (2)!\n},\n\"s\": {\n\"mode\" : \"auto\" // (3)!\n}\n}\n
watts
is an instantaneous measurement of power that does not accumulatewattHours
is an accumulating measurement of the accrual of energy over timemode
is a status message that is not a numberNote
Sometimes these classifications will be hidden from you. For example SolarNetwork hides them when returning datum data from some SolarNetwork API methods. You might come across them in some SolarNode plugins that allow configuring dynamic sample properties to collect, when SolarNode does not implicitly know which classification to use. Some SolarNetwork APIs do return or require fully classified sample objects; the documentation for those services will make that clear.
"},{"location":"users/expressions/","title":"Expressions","text":"Many SolarNode components support a general \"expressions\" framework that can be used to calculate values using a scripting language. SolarNode comes with the Spel scripting language by default, so this guide describes that language.
A common use case for expressions is to derive datum property values out of the raw property values captured from a device. In the SolarNode Setup App a typical datum data source component might present a configurable list of expression settings like this:
In this example, each time the data source captures a datum from the device it is communicating with it will add a new watts
property by multiplying the captured amps
and volts
property values. In essence the expression is like this code:
watts = amps \u00d7 volts\n
"},{"location":"users/expressions/#datum-expressions","title":"Datum Expressions","text":"Many SolarNode expressions are evaluated in the context of a datum, typically one captured from a device SolarNode is collecting data from. In this context, the expression supports accessing datum properties directly as expression variables, and some helpful functions are provided.
"},{"location":"users/expressions/#datum-property-variables","title":"Datum property variables","text":"All datum properties with simple names can be referred to directly as variables. Here simple just means a name that is also a legal variable name. The property classifications do not matter in this context: the expression will look for properties in all classifications.
For example, given a datum like this:
Example datum representation in JSON{\n\"i\": {\n\"watts\" : 123\n},\n\"a\": {\n\"wattHours\" : 987654321\n},\n\"s\": {\n\"mode\" : \"auto\"\n}\n}\n
The expression can use the variables watts
, wattHours
, and mode
.
A datum expression will also provide the following variables:
Property Type Descriptiondatum
Datum
A Datum
object, in case you need direct access to the functions provided there. meta
DatumMetadataOperations
Get datum metadata for the current source ID. parameters
Map<String,Object>
Simple map-based access to all parameters passed to the expression. The available parameters depend on the context of the expression evaluation, but often include things like placeholder values or parameters generated by previously evaluated expressions. These values are also available directly as variables, this is rarely needed but can be helpful for accessing dynamically-calculated property names or properties with names that are not legal variable names. props
Map<String,Object>
Simple map based access to all properties in datum
. As datum properties are also available directly as variables, this is rarely needed but can be helpful for accessing dynamically-calculated property names or properties with names that are not legal variable names. sourceId
String
The source ID of the current datum."},{"location":"users/expressions/#functions","title":"Functions","text":"Some functions are provided to help with datum-related expressions.
"},{"location":"users/expressions/#bit-functions","title":"Bit functions","text":"The following functions help with bitwise integer manipulation operations:
Function Arguments Result Descriptionand(n1,n2)
Number
, Number
Number
Bitwise and, i.e. (n1 & n2)
andNot(n1,n2)
Number
, Number
Number
Bitwise and-not, i.e. (n1 & ~n2)
narrow(n,s)
Number
, Number
Number
Return n
as a reduced-size but equivalent number of a minimum power-of-two byte size s
narrow8(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 8-bits narrow16(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 16-bits narrow32(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 32-bits narrow64(n)
Number
Number
Return n
as a reduced-size but equivalent number narrowed to a minimum of 64-bits not(n)
Number
Number
Bitwise not, i.e. (~n)
or(n1,n2)
Number
, Number
Number
Bitwise or, i.e. (n1 | n2)
shiftLeft(n,c)
Number
, Number
Number
Bitwise shift left, i.e. (n << c)
shiftRight(n,c)
Number
, Number
Number
Bitwise shift left, i.e. (n >> c)
testBit(n,i)
Number
, Number
boolean
Test if bit i
is set in integer n
, i.e. ((n & (1 << i)) != 0)
xor(n1,n2)
Number
, Number
Number
Bitwise xor, i.e. (n1 ^ n2)
Tip
All number arguments will be converted to BigInteger
values for the bitwise operations, and BigInteger
values are returned.
The following functions deal with datum streams. The latest()
and offset()
functions give you access to recently-captured datum from any SolarNode source, so you can refer to any datum stream being generated in SolarNode. They return another datum expression root object, which means you have access to all the variables and functions documented on this page with them as well.
hasLatest(source)
String
boolean
Returns true
if a datum with source ID source
is available via the latest(source)
function. hasLatestMatching(pattern)
String
Collection<DatumExpressionRoot>
Returns true
if latestMatching(pattern)
returns a non-empty collection. hasLatestOtherMatching(pattern)
String
Collection<DatumExpressionRoot>
Returns true
if latestOthersMatching(pattern)
returns a non-empty collection. hasMeta()
boolean
Returns true
if metadata for the current source ID is available. hasMeta(source)
String
boolean
Returns true
if datumMeta(source)
would return a non-null value. hasOffset(offset)
int
boolean
Returns true
if a datum is available via the offset(offset)
function. hasOffset(source,offset)
String
, int
boolean
Returns true
if a datum with source ID source
is available via the offset(source,int)
function. latest(source)
String
DatumExpressionRoot
Provides access to the latest available datum matching the given source ID, or null
if not available. This is a shortcut for calling offset(source,0)
. latestMatching(pattern)
String
Collection<DatumExpressionRoot>
Return a collection of the latest available datum matching a given source ID wildcard pattern. latestOthersMatching(pattern)
String
Collection<DatumExpressionRoot>
Return a collection of the latest available datum matching a given source ID wildcard pattern, excluding the current datum if its source ID happens to match the pattern. meta(source)
String
DatumMetadataOperations
Get datum metadata for a specific source ID. metaMatching(pattern)
String
Collection<DatumMetadataOperations>
Find datum metadata for sources matching a given source ID wildcard pattern. offset(offset)
int
DatumExpressionRoot
Provides access to a datum from the same stream as the current datum, offset by offset
in time, or null
if not available. Offset 1
means the datum just before this datum, and so on. offset(source,offset)
String
, int
DatumExpressionRoot
Provides access to an offset from the latest available datum matching the given source ID, or null
if not available. Offset 0
represents the \"latest\" datum, 1
the one before that, and so on. SolarNode only maintains a limited history for each source, so do not rely on more than a few datum to be available via this method. This history is also cleared when SolarNode restarts. selfAndLatestMatching(pattern)
String
Collection<DatumExpressionRoot>
Return a collection of the latest available datum matching a given source ID wildcard pattern, including the current datum.\u00a0The current datum will always be the first datum returned."},{"location":"users/expressions/#math-functions","title":"Math functions","text":"Expressions support basic math operators like +
for addition and *
for multiplication. The following functions help with other math operations:
avg(collection)
Collection<Number>
Number
Calculate the average (mean) of a collection of numbers. Useful when combined with the group(pattern)
function. ceil(n)
Number
Number
Round a number larger, to the nearest integer. ceil(n,significance)
Number
, Number
Number
Round a number larger, to the nearest integer multiple of significance
. down(n)
Number
Number
Round numbers towards zero, to the nearest integer. down(n,significance)
Number
, Number
Number
Round numbers towards zero, to the nearest integer multiple of significance
. floor(n)
Number
Number
Round a number smaller, to the nearest integer. floor(n,significance)
Number
, Number
Number
Round a number smaller, to the nearest integer multiple of significance
. max(collection)
Collection<Number>
Number
Return the largest value from a set of numbers. max(n1,n2)
Number
, Number
Number
Return the larger of two numbers. min(collection)
Collection<Number>
Number
Return the smallest value from a set of numbers. min(n1,n2)
Number
, Number
Number
Return the smaller of two numbers. mround(n,significance)
Number
, Number
Number
Round a number to the nearest integer multiple of significance
. round(n)
Number
Number
Round a number to the nearest integer. round(n,digits)
Number
, Number
Number
Round a number to the nearest number with digits
decimal digits. roundDown(n,digits)
Number
, Number
Number
Round a number towards zero to the nearest number with digits
decimal digits. roundUp(n,digits)
Number
, Number
Number
Round a number away from zero to the nearest number with digits
decimal digits. sum(collection)
Collection<Number>
Number
Calculate the sum of a collection of numbers. Useful when combined with the group(pattern)
function. up(n)
Number
Number
Round numbers away from zero, to the nearest integer. up(n,significance)
Number
, Number
Number
Round numbers away from zero, to the nearest integer multiple of significance
."},{"location":"users/expressions/#node-metadata-functions","title":"Node metadata functions","text":"All the Datum Metadata functions like metadataAtPath(path)
can be invoked directly, operating on the node's own metadata instead of a datum stream's metadata.
The following functions deal with general SolarNode operations:
Function Arguments Result DescriptionisOpMode(mode)
String
boolean
Returns true
if the mode
operational mode is active."},{"location":"users/expressions/#property-functions","title":"Property functions","text":"The following functions help with expression properties (variables):
Function Arguments Result Descriptionhas(name)
String
boolean
Returns true
if a property named name
is defined. Can be used to prevent expression errors on datum property variables that are missing. group(pattern)
String
Collection<Number>
Creates a collection out of numbered properties whose name
matches the given regular expression pattern
."},{"location":"users/expressions/#expression-examples","title":"Expression examples","text":"Let's assume a captured datum like this, expressed as JSON:
{\n\"i\" : {\n\"amps\" : 4.2,\n\"volts\" : 240.0\n},\n\"a\" : {\n\"reading\" : 38009138\n},\n\"s\" : {\n\"state\" : \"Ok\"\n}\n}\n
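This datum and the expression results it produces can be made concrete with a small Python sketch (a hypothetical model for illustration only; SolarNode evaluates SpEL expressions, not Python):

```python
# Hypothetical Python model of the example datum; the "i", "a", and "s"
# keys hold the instantaneous, accumulating, and status classifications.
datum = {
    "i": {"amps": 4.2, "volts": 240.0},
    "a": {"reading": 38009138},
    "s": {"state": "Ok"},
}

# Flatten all classifications into one map, mirroring the short-cut
# "props" accessor available in expressions.
props = {k: v for cls in datum.values() for k, v in cls.items()}

state = props["state"]                   # like the expression: state
power = props["amps"] * props["volts"]   # like the expression: amps * volts
```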
Then here are some example SpEL expressions and the results they would produce:
Expression Result Commentstate
Ok
Returns the state
status property directly, which is Ok
. datum.s['state']
Ok
Returns the state
status property explicitly. props['state']
Ok
Same result as datum.s['state']
but using the short-cut props
accessor. amps * volts
1008.0
Returns the result of multiplying the amps
and volts
properties together: 4.2 \u00d7 240.0 = 1008.0
."},{"location":"users/expressions/#datum-stream-history","title":"Datum stream history","text":"Building on the previous example datum, let's assume an earlier datum for the same source ID had been collected with these properties (the classifications have been omitted for brevity):
{\n\"amps\" : 3.1,\n\"volts\" : 241.0,\n\"reading\" : 38009130,\n\"state\" : \"Ok\"\n}\n
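The same-stream offset semantics can be sketched in Python (hypothetical helpers for illustration; SolarNode's real history is limited and held in memory):

```python
# Newest-to-oldest history for one datum stream: index 0 is the current
# datum, index 1 the datum just before it, and so on.
history = [
    {"amps": 4.2, "volts": 240.0, "reading": 38009138, "state": "Ok"},
    {"amps": 3.1, "volts": 241.0, "reading": 38009130, "state": "Ok"},
]

def has_offset(n):
    # Mirrors hasOffset(n): is a datum available n positions back?
    return 0 <= n < len(history)

def offset(n):
    # Mirrors offset(n): the datum n positions back, or None
    return history[n] if has_offset(n) else None

# Like the expression: amps - offset(1).amps
amps_delta = history[0]["amps"] - offset(1)["amps"]
```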
Then here are some example expressions and the results they would produce given the original datum example:
Expression Result CommenthasOffset(1)
true
Returns true
because of the earlier datum that is available. hasOffset(2)
false
Returns false
because only one earlier datum is available. amps - offset(1).amps
1.1
Computes the difference between the current and previous amps
properties, which is 4.2 - 3.1 = 1.1
."},{"location":"users/expressions/#other-datum-stream-history","title":"Other datum stream history","text":"Other datum stream histories collected by SolarNode can also be accessed via the offset(source,offset)
function. Let's assume SolarNode is collecting a datum stream for the source ID solar
, and had amassed the following history, in newest-to-oldest order:
[\n{\"amps\" : 6.0, \"volts\" : 240.0 },\n{\"amps\" : 5.9, \"volts\" : 239.9 }\n]\n
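These cross-stream lookups can be sketched in Python (hypothetical helpers for illustration; note that subtracting the current power from the latest solar power is what yields the positive 432.0):

```python
# Newest-to-oldest histories for other datum streams, keyed by source ID.
streams = {
    "solar": [
        {"amps": 6.0, "volts": 240.0},
        {"amps": 5.9, "volts": 239.9},
    ],
}
current = {"amps": 4.2, "volts": 240.0}  # the original example datum

def has_latest(source):
    # Mirrors hasLatest(source)
    return bool(streams.get(source))

def has_offset_src(source, n):
    # Mirrors hasOffset(source, n): offset 0 is the "latest" datum
    return source in streams and 0 <= n < len(streams[source])

def latest(source):
    return streams[source][0] if has_latest(source) else None

power_diff = (latest("solar")["amps"] * latest("solar")["volts"]
              - current["amps"] * current["volts"])
```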
Then here are some example expressions and the results they would produce given the original datum example:
Expression Result CommenthasLatest('solar')
true
Returns true
because a datum for source solar
is available. hasOffset('solar',2)
false
Returns false
because only one earlier datum from the latest with source solar
is available. (latest('solar').amps * latest('solar').volts) - (amps * volts)
432.0
Computes the difference in power between the latest solar
datum and the current datum, which is (6.0 \u00d7 240.0) - (4.2 \u00d7 240.0) = 432.0
. If we add another datum stream for the source ID solar1
like this:
[\n{\"amps\" : 1.0, \"volts\" : 240.0 }\n]\n
If we also add another datum stream for the source ID solar2
like this:
[\n{\"amps\" : 3.0, \"volts\" : 240.0 }\n]\n
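The SpEL selection (`.?[...]`) and projection (`.![...]`) operators used in the next example translate naturally into a Python comprehension (a hypothetical model of the three streams):

```python
# Latest datum from each source matching the solar* wildcard.
latest_matching = [
    {"source": "solar",  "amps": 6.0, "volts": 240.0},
    {"source": "solar1", "amps": 1.0, "volts": 240.0},
    {"source": "solar2", "amps": 3.0, "volts": 240.0},
]

# Like: sum(latestMatching('solar*').?[amps > 1].![amps * volts])
total_power = sum(d["amps"] * d["volts"]
                  for d in latest_matching if d["amps"] > 1)
```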
Then here are some example expressions and the results they would produce given the previous datum examples:
Expression Result Commentsum(latestMatching('solar*').?[amps>1].![amps * volts])
2160
Returns the sum power of the latest solar
and solar2
datum. The solar1
power is omitted because its amps
property is not greater than 1
, so we end up with (6 * 240) + (3 * 240) = 2160
."},{"location":"users/expressions/#datum-metadata","title":"Datum metadata","text":"Some functions return DatumMetadataOperations
objects. These objects provide metadata for things like a specific source ID on SolarNode.
The properties available on datum metadata objects are:
Property Type Descriptionempty
boolean
Is true
if the metadata does not contain any values. info
Map<String,Object>
Simple map based access to the general metadata (e.g. the keys of the m
metadata map). infoKeys
Set<String>
The set of general metadata keys available (e.g. the keys of the m
metadata map). propertyInfoKeys
Set<String>
The set of property metadata keys available (e.g. the keys of the pm
metadata map). tags
Set<String>
A set of tags associated with the metadata."},{"location":"users/expressions/#datum-metadata-general-info-functions","title":"Datum metadata general info functions","text":"The following functions available on datum metadata objects support access to the general metadata (e.g. the m
metadata map):
getInfo(key)
String
Object
Get the general metadata value for a specific key. getInfoNumber(key)
String
Number
Get a general metadata value for a specific key as a Number
. Other more specific number value functions are also available such as getInfoInteger(key)
or getInfoBigDecimal(key)
. getInfoString(key)
String
String
Get a general metadata value for a specific key as a String
. hasInfo(key)
String
boolean
Returns true
if a non-null general metadata value exists for the given key."},{"location":"users/expressions/#datum-metadata-property-info-functions","title":"Datum metadata property info functions","text":"The following functions available on datum metadata objects support access to the property metadata (e.g. the pm
metadata map):
getPropertyInfo(prop)
String
Map<String,Object>
Get the property metadata for a specific property. getInfoNumber(prop,key)
String
, String
Number
Get a property metadata value for a specific property and key as a Number
. Other more specific number value functions are also available such as getInfoInteger(prop,key)
or getInfoBigDecimal(prop,key)
. getInfoString(prop,key)
String
, String
String
Get a property metadata value for a specific property and key as a String
. hasInfo(prop,key)
String
, String
boolean
Returns true
if a non-null property metadata value exists for the given property and key."},{"location":"users/expressions/#datum-metadata-global-functions","title":"Datum metadata global functions","text":"The following functions available on datum metadata objects support access to both general and property metadata:
Function Arguments Result DescriptiondiffersFrom(metadata)
DatumMetadataOperations
boolean
Returns true
if the given metadata has any different values than the receiver. hasTag(tag)
String
boolean
Returns true
if the given tag is available. metadataAtPath(path)
String
Object
Get the metadata value at a metadata key path. hasMetadataAtPath(path)
String
boolean
Returns true
if metadataAtPath(path)
would return a non-null value."},{"location":"users/getting-started/","title":"Getting Started","text":"This section describes how to get SolarNode running on a device. You will need to configure your device as a SolarNode and associate your SolarNode with SolarNetwork.
Tip
You might find it helpful to read through this entire guide before jumping in. There are screen shots and tips provided to help you along the way.
"},{"location":"users/getting-started/#get-your-device-ready-to-use","title":"Get your device ready to use","text":"SolarNode can run on a variety of devices. To get started using SolarNode, you must download the appropriate SolarNodeOS image for your device. SolarNodeOS is a complete operating system tailor made for SolarNode. Choose the SolarNodeOS image for the device you want to run SolarNode on and then copy that image to your device media (typically an SD card).
"},{"location":"users/getting-started/#choose-your-device","title":"Choose your device","text":"Raspberry PiOrange PiSomething ElseThe Raspberry Pi is the best supported option for general SolarNode deployments. Models 3 or later, Compute Module 3 or later, and Zero 2 W or later are supported. Use a tool like Etcher or Raspberry Pi Imager to copy the image to an SD card (minimum size is 2 GB, 4 GB recommended).
Download SolarNodeOS for Raspberry Pi
The Orange Pi models Zero and Zero Plus are supported. Use a tool like Etcher to copy the image to an SD card (minimum size is 1 GB, 4 GB recommended).
Download SolarNodeOS for Orange Pi
Looking for SolarNodeOS for a device not listed here? Reach out to us through email or Slack to see if we can help!
"},{"location":"users/getting-started/#configure-your-network","title":"Configure your network","text":"SolarNode needs a network connection. If your device has an ethernet port, that is the most reliable way to get started: just plug in your ethernet cable and off you go!
If you want to use WiFi, or would like more detailed information about SolarNode's networking options, see the Networking sections.
"},{"location":"users/getting-started/#power-it-on","title":"Power it on","text":"Insert your SD card (or other device media) into your device, and power it on. While it starts up, proceed with the next steps.
"},{"location":"users/getting-started/#associate-your-solarnode-with-solarnetwork","title":"Associate your SolarNode with SolarNetwork","text":"Every SolarNode must be associated (registered) with a SolarNetwork account. To associate a SolarNode, you must:
If you do not already have a SolarNetwork account, register for one and then log in.
"},{"location":"users/getting-started/#generate-a-solarnode-invitation","title":"Generate a SolarNode invitation","text":"Click on the My Nodes link. You will see an Invite New SolarNode button, like this:
Click the Invite New SolarNode button, then fill in and submit the form that appears and select your time zone by clicking on the world map:
The generated SolarNode invitation will appear next.
Select and copy the entire invitation. You will need to paste that into the SolarNode setup screen in the next section.
"},{"location":"users/getting-started/#accept-the-invitation-on-solarnode","title":"Accept the invitation on SolarNode","text":"Open the SolarNode Setup app in your browser. The URL to use might be http://solarnode/ or it might be an IP address like http://192.168.1.123
. See the Networking section for more information. You will be greeted with an invitation acceptance form into which you can paste the invitation you generated in SolarNetwork. The acceptance process goes through the following steps:
First you submit the invitation in the acceptance form.
Next you preview the invitation details.
Note
The expected SolarNetwork Service value shown in this step will be in.solarnetwork.net
.
Finally, confirm the invitation. This step contacts SolarNetwork and completes the association process.
Warning
Ensure you provide a Certificate Password on this step, so SolarNetwork can generate a security certificate for your SolarNode.
When these steps are completed, SolarNetwork will have assigned your SolarNode a unique identifier known as your Node ID. A randomly generated SolarNode login password will have been generated; you are given the opportunity to easily change that if you prefer.
"},{"location":"users/getting-started/#next-steps","title":"Next steps","text":"Learn more about the SolarNode Setup app.
"},{"location":"users/logging/","title":"Logging","text":"Logging in SolarNode is configured in the /etc/solarnode/log4j2.xml
file, which is in the log4j configuration format. The default configuration in SolarNodeOS sets the overall verbosity to INFO
and logs to a temporary storage area /run/solarnode/log/solarnode.log
.
Log messages have the following general properties:
Component Example Description Timestamp2022-03-15 09:05:37,029
The date/time the message was generated. Note the format of the timestamp depends on the logging configuration; the SolarNode default is shown in this example. Level INFO
The severity/verbosity of the message (as determined by the developer). This is an enumeration, and from least-to-most severe: TRACE
, DEBUG
, INFO
, WARN
, ERROR
. The level of a given logger allows messages with that level or higher to be logged, while lower levels are skipped. The default SolarNode configuration sets the overal level to INFO
, so TRACE
and DEBUG
messages are not logged. Logger ModbusDatumDataSource
A category or namespace associated with the message. Most commonly these equate to Java class names, but can be any value and is determined by the developer. Periods in the logger name act as a delimiter, forming a hierarchy that can be tuned to log at different levels. For example, given the default INFO
level, configuring the net.solarnetwork.node.io.modbus
logger to DEBUG
would turn on debug-level logging for all loggers in the Modbus IO namespace. Note that the default SolarNode configuration logs just a fixed number of the last characters of the logger name. This can be changed in the configuration to log more (or all) of the name, as desired. Message Error reading from device.
The message itself, determined by the developer. Exception Some messages include an exception stack trace, which shows the runtime call tree where the exception occurred."},{"location":"users/logging/#logger-namespaces","title":"Logger namespaces","text":"The Logger component outlined in the previous section allows a lot of flexibility to configure what gets logged in SolarNode. Setting the level on a given namespace impacts that namespace as well as all namespaces beneath it, meaning all other loggers that share the same namespace prefix.
For example, imagine the following two loggers exist in SolarNode:
net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork
net.solarnetwork.node.io.modbus.util.ModbusUtils
Given the default configuration sets the default level to INFO
, we can turn in DEBUG
logging for both of these by adding a <Logger>
line like the following within the <Loggers>
element:
<Logger name=\"net.solarnetwork.node.io.modbus\" level=\"debug\"/>\n
That turns on DEBUG
for both loggers because they are both children of the net.solarnetwork.node.io.modbus
namespace. We could turn on TRACE
logging for one of them like this:
<Logger name=\"net.solarnetwork.node.io.modbus\" level=\"debug\"/>\n<Logger name=\"net.solarnetwork.node.io.modbus.serial\" level=\"trace\"/>\n
That would also turn on TRACE
for any other loggers in the net.solarnetwork.node.io.modbus.serial
namespace. You can limit the configuration all the way down to a full logger name if you like, for example:
<Logger name=\"net.solarnetwork.node.io.modbus\" level=\"debug\"/>\n<Logger name=\"net.solarnetwork.node.io.modbus.serial.SerialModbusNetwork\" level=\"trace\"/>\n
"},{"location":"users/logging/#logging-ui","title":"Logging UI","text":"The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file. See the Setup App / Settings / Logging page for more information.
"},{"location":"users/logging/#storage-constraints","title":"Storage constraints","text":"The default SolarNode configuration automatically rotates log files based on size, and limits the number of historic log files kept around, to that its associated storage space is not filled up. When a log file reaches the file limit, it is renamed to include a -i.log
suffix, where i
is an offset from the current log. The default configuration sets the maximum log size to 1 MB and limits the number of historic files to 3.
You can also adjust how much history is saved by tweaking the <SizeBasedTriggeringPolicy>
and <DefaultRolloverStrategy>
configuration. For example to change to a limit of 9 historic files of at most 5 MB each, the configuration would look like this:
<Policies>\n<SizeBasedTriggeringPolicy size=\"5 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n
"},{"location":"users/logging/#persistent-logging","title":"Persistent logging","text":"By default SolarNode logs to temporary (RAM) storage that is discarded when the node reboots. The configuration can be changed so that logs are written directly to persistent storage if you would like to have the logs persisted across reboots, or would like to preserve more log history than can be stored in the temporary storage area.
To make this change, update the <RollingFile>
element's fileName
and/or filePattern
attributes to point to a persistent filesystem. SolarNode already has write permission to the /var/lib/solarnode/var
directory, so an easy location to use is /var/lib/solarnode/var/log
, like this:
<RollingFile name=\"File\"\nimmediateFlush=\"false\"\nfileName=\"/var/lib/solarnode/var/log/solarnode.log\"\nfilePattern=\"/var/lib/solarnode/var/log/solarnode-%i.log\">\n
Warning
This configuration can add a lot of stress to the node's storage medium, and may shorten its useful life. Consumer-grade SD cards in particular can fail quickly if SolarNode is writting a lot of information, such as verbose logging. Use of this configuration should be used with caution.
"},{"location":"users/logging/#logging-example-split-across-multiple-files","title":"Logging example: split across multiple files","text":"Sometimes it can be useful to turn on verbose logging for some area of SolarNode, but have those messages go to a different file so they don't clog up the main solarnode.log
file. This can be done by configuring additional appender configurations.
The following example logging configuration creates the following log files:
/var/log/solarnode/solarnode.log
- the main log/var/log/solarnode/filter.log
- filter logging/var/log/solarnode/mqtt-solarin.log
- MQTT wire logging to SolarIn/var/log/solarnode/mqtt-solarflux.log
- MQTT wire logging to SolarFluxFirst you must create the /var/log/solarnode
directory and give SolarNode permission to write there:
sudo mkdir /var/log/solarnode\nsudo chgrp solar /var/log/solarnode\nsudo chmod g+w /var/log/solarnode\n
Then edit the /etc/solarnode/log4j2.xml
file to hold the following (adjust according to your needs):
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Configuration status=\"WARN\">\n<Appenders>\n<RollingFile name=\"File\"\nimmediateFlush=\"true\"\nfileName=\"/var/log/solarnode/solarnode.log\"\nfilePattern=\"/var/log/solarnode/solarnode-%i.log\"><!-- (1)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"5 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n<RollingFile name=\"Filter\"\nimmediateFlush=\"false\"\nfileName=\"/var/log/solarnode/filter.log\"\nfilePattern=\"/var/log/solarnode/filter-%i.log\"><!-- (2)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"10 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n<RollingFile name=\"MQTT\"\nimmediateFlush=\"false\"\nfileName=\"/var/log/solarnode/mqtt.log\"\nfilePattern=\"/var/log/solarnode/mqtt-%i.log\"><!-- (3)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"10 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n<RollingFile name=\"Flux\"\nimmediateFlush=\"false\"\nfileName=\"/var/log/solarnode/flux.log\"\nfilePattern=\"/var/log/solarnode/flux-%i.log\"><!-- (4)! -->\n<PatternLayout pattern=\"%d{DEFAULT} %-5p %40.40c; %msg%n\"/>\n<Policies>\n<SizeBasedTriggeringPolicy size=\"10 MB\"/>\n</Policies>\n<DefaultRolloverStrategy max=\"9\"/>\n</RollingFile>\n</Appenders>\n<Loggers>\n<Logger name=\"org.eclipse.gemini.blueprint.blueprint.container.support\" level=\"warn\"/>\n<Logger name=\"org.eclipse.gemini.blueprint.context.support\" level=\"warn\"/>\n<Logger name=\"org.eclipse.gemini.blueprint.service.importer.support\" level=\"warn\"/>\n<Logger name=\"org.springframework.beans.factory\" level=\"warn\"/>\n\n<Logger name=\"net.solarnetwork.node.datum.filter\" level=\"trace\" additivity=\"false\">\n<AppenderRef ref=\"Filter\"/><!-- (5)! 
-->\n</Logger>\n\n<Logger name=\"net.solarnetwork.mqtt.queue\" level=\"trace\" additivity=\"false\">\n<AppenderRef ref=\"MQTT\"/>\n</Logger>\n\n<Logger name=\"net.solarnetwork.mqtt.influx\" level=\"trace\" additivity=\"false\">\n<AppenderRef ref=\"Flux\"/>\n</Logger>\n\n<Root level=\"info\">\n<AppenderRef ref=\"File\"/><!-- (6)! -->\n</Root>\n</Loggers>\n</Configuration>\n
File
appender is the \"main\" application log where most logs should go.Filter
appender is where we want net.solarnetwork.node.datum.filter
messages to go.MQTT
appender is where we want net.solarnetwork.mqtt.queue
messages to go.Flux
appender is where we want net.solarnetwork.mqtt.influx
messages to go.additivity=\"false\"
and add the <AppenderRef>
element that refereneces the specific appender name we want the log messages to go to. The additivity=false
attribute means the log messages will only go to the Filter
appender, instead of also going to the root-level File
appender.Filter
, MQTT
, and Flux
appenders above.The various <AppenderRef>
elements configure the appender name to write the messages to.
The various additivity=\"false\"
attributes disable appender additivity which means the log message will only be written to one appender, instead of being written to all configured appenders in the hierarchy (for example the root-level appender).
The immediateFlush=\"false\"
turns on buffered logging, which means log messages are buffered in RAM before being flushed to disk. This is more forgiving to the disk, at the expense of a delay before the messages appear.
MQTT wire logging means the raw MQTT packets send and received over MQTT connections will be logged in an easy-to-read but very verbose format. For the MQTT wire logging to be enabled, it must be activated with a special configuration file. Create the /etc/solarnode/services/net.solarnetwork.common.mqtt.netty.cfg
file with this content:
wireLogging = true\n
"},{"location":"users/logging/#mqtt-wire-log-namespace","title":"MQTT wire log namespace","text":"MQTT wire logs use a namespace prefix net.solarnetwork.mqtt.
followed by the connection's host name or IP address and port. For example SolarIn messages would use net.solarnetwork.mqtt.queue.solarnetwork.net:8883
and SolarFlux messages would use net.solarnetwork.mqtt.influx.solarnetwork.net:8884
.
SolarNode will attempt to automatically configure networking access from a local DHCP server. For many deployments the local network router is the DHCP server. SolarNode will identify itself with the name solarnode
, so in many cases you can reach the SolarNode setup app at http://solarnode/.
To find what network address SolarNode is using, you have a few options:
"},{"location":"users/networking/#consult-your-network-router","title":"Consult your network router","text":"Your local network router is very likely to have a record of SolarNode's network connection. Log into the router's management UI and look for a device named solarnode
.
If your SolarNode supports connecting a keyboard and screen, you can log into the SolarNode command line console and run ip -br addr
to print out a brief summary of the current networking configuration:
$ ip -br addr\n\nlo UNKNOWN 127.0.0.1/8 ::1/128\neth0 UP 192.168.0.254/24 fe80::e65f:1ff:fed1:893c/64\nwlan0 DOWN\n
In the previous output, SolarNode has an ethernet device eth0
with a network address 192.168.0.254
and a WiFi device wlan0
that is not connected. You could reach that SolarNode at http://192.168.0.254/
.
Tip
You can get more details by running ip addr
(without the -br
argument).
If your device will use WiFi for network access, you will need to configure the network name and credentials to use. You can do that by creating a wpa_supplicant.conf
file on the SolarNodeOS media (typically an SD card). For Raspberry Pi media, you can mount the SD card on your computer and it will mount the appropriate drive for you.
Once mounted use your favorite text editor to create a wpa_supplicant.conf
file with content like this:
country=nz\nnetwork={\n ssid=\"wifi network name here\"\n psk=\"wifi password here\"\n}\n
Change the country=nz
to match your own country code.
SolarNode supports a concept called operational modes. Modes are simple names like quiet
and hyper
that can be either active or inactive. Any number of modes can be active at a given time. In theory both quiet
and hyper
could be active simultaneously. Modes can be named anything you like.
Modes can be used by SolarNode components to alter their behavior dynamically. For example a data source component might stop collecting data from a set of data sources if the quiet
mode is active, or start collecting data at an increased frequency if hyper
is active. Some components might require specific names, which are described in their documentation. Components that allow configuring a required operational mode setting can also invert the requirement by adding a !
prefix to the mode name, for example !hyper
can be thought of as \"when hyper
is not active\". You can also specify exactly !
to match only when no mode is active.
Datum Filters also make use of operational modes, to toggle filters on and off dynamically.
"},{"location":"users/op-modes/#automatic-expiration","title":"Automatic expiration","text":"Operational modes can be activated with an associated expiration date. The mode will remain active until the expiration date, at which time it will be automatically deactivated. A mode can always be manually deactivated before its associated expiration date.
"},{"location":"users/op-modes/#operational-modes-management-api","title":"Operational Modes management API","text":"The SolarUser Instruction API can be used to toggle operational modes on and off. The EnableOperationalModes
instruction activates modes and DisableOperationalModes
deactivates them.
SolarNode supports placeholders in some setting values, such as datum data source IDs. These allow you to define a set of parameters that can be consistently applied to many settings.
For example, imagine you manage many SolarNode devices across different buildings or sites. You'd like to follow a naming convention for your datum data source ID values that include a code for the building the node is deployed in, along the lines of /BUILDING/DEVICE
. You could define a placeholder building
and then configure the source IDs like /{building}/device
. On each node you'd define the building
placeholder with a building-specific value, so at runtime the nodes would resolve actual source ID values with those names replacing the {building}
placeholder, for example /OFFICE1/meter
.
Placeholders are written using the form {name:default}
where name
is the placeholder name and default
is an optional default value to apply if no placeholder value exists for the given name. If a default value is not needed, omit the colon
so the placeholder becomes just {name}
.
For example, imagine a set of placeholder values like
Name Value building OFFICE1 room BREAKHere are some example settings with placeholders with what they would resolve to:
Input Resolved value/{building}/meter
/OFFICE1/meter
/{building}/{room}/temp
/OFFICE1/BREAK/temp
/{building}/{floor:1}/{room}
/OFFICE1/1/BREAK
"},{"location":"users/placeholders/#static-placeholder-configuration","title":"Static placeholder configuration","text":"SolarNode will look for placeholder values defined in properties files stored in the conf/placeholders.d
directory by default. In SolarNodeOS this is the /etc/solarnode/placeholders.d
directory.
Warning
These files are only loaded once, when SolarNode starts up. If you make changes to any of them then SolarNode must be restarted.
The properties file names must have a .properties
extension and follow Java properties file syntax. Put simply, each file contains lines like
name = value\n
where name
is the placeholder name and value
is its associated value. The example set of placeholder values shown previously could be defined in a /etc/solarnode/placeholders.d/mynode.properties
file with this content:
building = OFFICE1\nroom = BREAK\n
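Resolution of the {name:default} syntax against these values can be sketched in Python (a hypothetical re-implementation for illustration; SolarNode's actual resolver may treat unmatched names differently):

```python
import re

PLACEHOLDER = re.compile(r"\{(\w+)(?::([^}]*))?\}")

def resolve(template, values):
    def repl(match):
        name, default = match.group(1), match.group(2)
        if name in values:
            return str(values[name])
        if default is not None:
            return default
        return match.group(0)  # leave unresolved placeholders as-is
    return PLACEHOLDER.sub(repl, template)

values = {"building": "OFFICE1", "room": "BREAK"}
```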
"},{"location":"users/placeholders/#dynamic-placeholder-configuration","title":"Dynamic placeholder configuration","text":"SolarNode also supports storing placeholder values as Settings using the key placeholder
. The SolarUser /instruction/add API can be used with the UpdateSetting topic to modify the placeholder values as needed. The type
value is the placeholder name and the value
the placeholder value. Placeholders defined this way have priority over any similarly-named placeholders defined statically. Changes take effect as soon as SolarNode receives and processes the instruction.
Warning
Once a placeholder value is set via the UpdateSetting
instruction, the same value defined as a static placeholder will be overridden and changes to the static value will be ignored.
For example, to set the floor
placeholder to 2
on node 123, you could make a POST
request to /solaruser/api/v1/sec/instr/add/UpdateSetting
with the following JSON body:
{\n\"nodeId\": 123,\n\"params\":{\n\"key\": \"placeholder\",\n\"type\": \"floor\",\n\"value\": \"2\"\n}\n}\n
Multiple settings can be updated as well, using a different syntax. Here's a request that sets both floor
to 2
and room
to MEET
:
{\"nodeId\":123,\"parameters\":[\n{\"name\":\"key\", \"value\":\"placeholder\"},\n{\"name\":\"type\", \"value\":\"floor\"},\n{\"name\":\"value\", \"value\":\"2\"},\n{\"name\":\"key\", \"value\":\"placeholder\"},\n{\"name\":\"type\", \"value\":\"room\"},\n{\"name\":\"value\", \"value\":\"MEET\"}\n]}\n
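The repeated key/type/value triples in the multi-setting form can be generated programmatically; here is a hypothetical Python helper (the parameter layout mirrors the example request above and is not an official client API):

```python
def placeholder_params(settings):
    # Emit one key/type/value triple per placeholder to update.
    params = []
    for name, value in settings.items():
        params.append({"name": "key", "value": "placeholder"})
        params.append({"name": "type", "value": str(name)})
        params.append({"name": "value", "value": str(value)})
    return params

body = {"nodeId": 123,
        "parameters": placeholder_params({"floor": 2, "room": "MEET"})}
```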
"},{"location":"users/remote-access/","title":"Remote Access","text":"SolarSSH is SolarNetwork's method of connecting to SolarNode devices over the internet even when those devices are not directly reachable due to network firewalls or routing rules. It uses the Secure Shell Protocol (SSH) to ensure your connection is private and secure.
SolarSSH does not maintain permanently open SSH connections to SolarNode devices. Instead, connections are established on demand, when you need them. This allows you to connect to a SolarNode when you need to perform maintenance, without requiring SolarNode to keep an open SSH connection to SolarSSH at all times.
In order to use SolarSSH, you will need a User Security Token to use for authentication.
"},{"location":"users/remote-access/#browser-connection","title":"Browser Connection","text":"You can use SolarSSH right in your browser to connect to any of your nodes.
The SolarSSH browser app
"},{"location":"users/remote-access/#choose-your-node-id","title":"Choose your node ID","text":"Click on the node ID in the page title to change what node you want to connect to.
Changing the SolarSSH node ID
Bookmark a SolarSSH page for your node ID
You can append a ?nodeId=X
to the SolarSSH browser URL https://go.solarnetwork.net/solarssh/, where X
is a node ID, to make the app start with that node ID directly. For example to start with node 123, you could bookmark the URL https://go.solarnetwork.net/solarssh/?nodeId=123.
Fill in User Security Token credentials for authentication. The node ID you are connecting to must be owned by the same account as the security token.
"},{"location":"users/remote-access/#connect","title":"Connect","text":"Click the Connect button to initiate the SolarSSH connection process. You will be presented with a dialog form to provide your SolarNodeOS system account credentials. This is only necessary if you want to connect to the SolarNodeOS system command line. If you only need to access the SolarNode Setup App, you can click the Skip button to skip this step. Otherwise, click the Login button to log into the system command line.
SolarNodeOS system account credentials form
SolarSSH will then establish the connection to your node. If you provided SolarNodeOS system account credentials previously and clicked the Login button, you will end up with a system command prompt, like this:
SolarSSH logged-in system command prompt
"},{"location":"users/remote-access/#remote-setup-app","title":"Remote Setup App","text":"Once connected, you can access the remote node's Setup App by clicking the Setup button in the top-right corner of the window. This will open a new browser tab for the Setup App.
Accessing the SolarNode Setup App through a SolarSSH web connection
"},{"location":"users/remote-access/#direct-connection","title":"Direct connection","text":"SolarSSH also supports a \"direct\" connection mode that allows you to connect using standard ssh client applications. This is a more advanced (and flexible) way of connecting to your nodes: it even allows you to access other network services on the same network as the node, and provides full SSH integration including port forwarding, scp
, and sftp
support.
Direct SolarSSH connections require an SSH client that supports the SSH \"jump\" host feature. The \"jump\" server hosted by SolarNetwork Foundation is available at ssh.solarnetwork.net:9022
.
The \"jump\" connection user is formed by combining a node ID with a user security token, separated by a :
character. The general form of a SolarSSH direct connection \"jump\" host thus looks like this:
NODE:TOKEN@ssh.solarnetwork.net:9022\n
where NODE
is a SolarNode ID and TOKEN
is a SolarNetwork user security token.
The actual SolarNode user can be any OS user (typically solar
) and the hostname can be anything. A good practice for the hostname is to use one derived from the SolarNode ID, e.g. solarnode-123
.
Using OpenSSH a complete connection command to log in as a solar
user looks like this, passing the \"jump\" host via a -J
argument:
ssh -J 'NODE:TOKEN@ssh.solarnetwork.net:9022' solar@solarnode-NODE\n
Warning
SolarNetwork security tokens often contain characters that must be escaped with a \\
character for your shell to interpret them correctly. For example, a token like 9gPa9S;Ux1X3kK)YN6&g
might need to have the ;)&
characters escaped like 9gPa9S\\;Ux1X3kK\\)YN6\\&g
.
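If you script your connections, one way to sidestep hand-escaping is to let a quoting helper wrap the jump specification in single quotes for you. A minimal Python sketch (the token value is the made-up example from the warning above, and this helper is not part of SolarSSH):

```python
import shlex

# shlex.quote wraps the value in single quotes, so shell metacharacters
# such as ; ) & in the token need no manual backslash escaping.
token = '9gPa9S;Ux1X3kK)YN6&g'  # example token from the warning above
jump = '123:' + token + '@ssh.solarnetwork.net:9022'
cmd = 'ssh -J ' + shlex.quote(jump) + ' solar@solarnode-123'
```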
You will first be prompted to enter a password, which must be the token secret. You might then be prompted for the SolarNode OS user's password. Here's an example screen shot:
Accessing the SolarNode system command line through a SolarSSH direct connection
"},{"location":"users/remote-access/#shell-shortcut-function","title":"Shell shortcut function","text":"If you find yourself using SolarSSH connections frequently, a handy bash
or zsh
shell function can help make the connection process easier to remember. Here's an example that give you a solarssh
command that accepts a SolarNode ID argument, followed by any optional SSH arguments:
function solarssh () {\nlocal node_id=\"$1\"\nif [ -z \"$node_id\" ]; then\necho 'Must provide node ID, e.g. 123'\nelse\nshift\necho \"Enter SN token secret when first prompted for password. Enter node $node_id password second.\"\nssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \\\n-o LogLevel=ERROR -o NumberOfPasswordPrompts=1 \\\n-J \"$node_id\"':SN_TOKEN_HERE@ssh.solarnetwork.net:9022' \\\n\"$@\" solar@solarnode-$node_id\nfi\n}\n
Just replace SN_TOKEN_HERE
with a user security token. After integrating this into your shell's configuration (e.g. ~/.bashrc
or ~/.zshrc
) then you could connect to node 123
like:
solarssh 123\n
"},{"location":"users/remote-access/#putty","title":"PuTTY","text":"PuTTY is a popular tool for Windows that supports SolarSSH connections. To connect to a SolarNode using PuTTY, you must:
Configure a connection proxy to ssh.solarnetwork.net:9022, using a username like NODE_ID:TOKEN_ID and the corresponding token secret as the password. Configure an SSH tunnel to localhost:8080 to access the SolarNode Setup App. Open a session to a host like solarnode-NODE_ID on port 22.
Open the Connection > Proxy configuration category in PuTTY, and configure the following settings:
Setting Value Proxy type SSH to proxy and use port forwarding Proxy hostname ssh.solarnetwork.net
Port 9022
Username The desired node ID, followed by a :
, followed by a user security token ID, that is: NODE_ID:TOKEN_ID
Password The user security token secret.
Configuring PuTTY connection proxy settings
"},{"location":"users/remote-access/#putty-ssh-tunnel-configuration","title":"PuTTY SSH tunnel configuration","text":"To access the SolarNode Setup App, you can configure PuTTY to forward a port on your local machine to localhost:8080
on the node. Once the SSH connection is established, you can open a browser to http://localhost:PORT
to access the SolarNode Setup App. You can use any available local port, for example if you used port 8888
then you would open a browser to http://localhost:8888
to access the SolarNode Setup App.
Open the Connection > SSH > Tunnels configuration category in PuTTY, and configure the following settings:
Setting Value Source port A free port on your machine, for example 8888
. Destination localhost:8080
Add You must click the Add button to add this tunnel. You can then add other tunnels as needed.
Configuring PuTTY connection SSH tunnel settings
"},{"location":"users/remote-access/#putty-session-configuration","title":"PuTTY session configuration","text":"Finally under the Session configuration category in PuTTY, configure the Host Name and Port to connect to SolarNode. You can also provide a session name and click the Save button to save all the settings you have configured, making it easy to load them in the future.
Setting Value Host Name Does not actually matter, but a name like solarnode-NODE_ID
is helpful, where NODE_ID
is the ID of the node you are connecting to. Port 22
Configuring PuTTY session settings
"},{"location":"users/remote-access/#putty-open-connection","title":"PuTTY open connection","text":"On the Session configuration category, click the Open button to establish the SolarSSH connection. You might be prompted to confirm the identity of the ssh.solarnetwork.net
server first. Click the Accept button if this is the case.
PuTTY host verification alert
PuTTY will connect to SolarSSH and after a short while prompt you for the SolarNodeOS user you would like to connect to SolarNode with. Typically you would use the solar
account, so you would type solar
followed by Enter. You will then be prompted for that account's password, so type that in and type Enter again. You will then be presented with the SolarNodeOS shell prompt.
PuTTY node login
Assuming you configured a SSH tunnel on port 8888
to localhost:8080
, you can now open http://localhost:8888 to access the SolarNode Setup App.
Once connected to SolarSSH, access the SolarNode Setup App in your browser.
"},{"location":"users/security-tokens/","title":"Security Tokens","text":"Some SolarNode features require SolarNetwork Security Tokens to use as authentication credentials for SolarNetwork services. Security Tokens are managed on the Security Tokens page in SolarNetwork.
The Security Tokens page in SolarNetwork
"},{"location":"users/security-tokens/#user-tokens","title":"User Tokens","text":"User Security Tokens allow access to web services that perform functions directly on your behalf, for example issue an instruction to your SolarNode.
Click the \"+\" button in the User Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new User Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
A newly generated security token \u2014 make sure to save the token in a safe place
"},{"location":"users/security-tokens/#data-tokens","title":"Data Tokens","text":"Data Security Tokens allow access to web services that query the data collected by your SolarNodes.
Click the \"+\" button in the Data Tokens section to generate a new security token. You will be shown a form where you can give a name, description, and policy restrictions for the token.
The form for creating a new Data Security Token
Click the Generate Security Token button to generate the new token. You will then be shown the generated token. You will need to copy and save the token to a safe and secure place.
"},{"location":"users/security-tokens/#security-policy","title":"Security Policy","text":"Security tokens can be configured with a Security Policy that restricts the types of functions or data the token has permission to access.
Policy User Node Description API Paths Restrict the token to specific API methods. Expiry Make the token invalid after a specific date. Minimum Aggregation Restrict the data aggregation level allowed. Node IDs Restrict to specific node IDs. Refresh Allowed Allow a signing key to be refreshed for as long as the token has not expired. Source IDs Restrict to specific datum source IDs. Node Metadata Restrict to specific node metadata. User Metadata Restrict to specific user metadata."},{"location":"users/security-tokens/#api-paths","title":"API Paths","text":"The API Paths policy restricts the token to specific SolarNet API methods, based on their URL path. If this policy is not included then all API methods are allowed.
"},{"location":"users/security-tokens/#expiry","title":"Expiry","text":"The Expiry policy makes the token invalid after a specific date. If this policy is not included, the token does not ever expire.
"},{"location":"users/security-tokens/#minimum-aggregation","title":"Minimum Aggregation","text":"The Minimum Aggregation policy restricts the token to a minimum data aggregation level. If this policy is not included, or if the minimum level is set to None, data for any aggregation level is allowed.
"},{"location":"users/security-tokens/#node-ids","title":"Node IDs","text":"The Node IDs policy restricts the token to specific node IDs. If this policy is not included, then the token has access to all node IDs in your SolarNetwork account.
"},{"location":"users/security-tokens/#node-metadata","title":"Node Metadata","text":"The Node Metadata policy restricts the token to specific portions of node-level metadata. If this policy is not included then all node metadata is allowed.
"},{"location":"users/security-tokens/#refresh-allowed","title":"Refresh Allowed","text":"The Refresh Allowed policy allows applications that are given a signing key, rather than the token's private password, to refresh the key for as long as the token has not expired.
"},{"location":"users/security-tokens/#source-ids","title":"Source IDs","text":"The Source IDs policy restricts the token to specific datum source IDs. If this policy is not included, then the token has access to all source IDs in your SolarNetwork account.
"},{"location":"users/security-tokens/#user-metadata","title":"User Metadata","text":"The User Metadata policy restricts the token to specific portions of account-level metadata. If this policy is not included then all user metadata is allowed.
"},{"location":"users/settings/","title":"Settings Files","text":"SolarNode plugins support configurable properties, called settings. The SolarNode setup app allows you to manage settings through simple web forms.
Settings can also be exported and imported in a CSV format, and can be applied when SolarNode starts up with Auto Settings CSV files. Here is an example of a settings form in the SolarNode setup app:
There are 3 settings represented in that screen shot:
Tip
Nearly every form field you can edit in the SolarNode setup app represents a setting for a component in SolarNode.
In the SolarNode setup app the settings can be imported and exported from the Settings > Backups screen in the Settings Backup & Restore section:
"},{"location":"users/settings/#settings-csv-example","title":"Settings CSV example","text":"Here's an example snippet of a settings CSV file:
net.solarnetwork.node.io.modbus.1,serialParams.baudRate,19200,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.1,serialParams.parityString,even,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.1,serialParams.portName,/dev/cuaU0,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.FACTORY,1,1,0,2014-03-01 21:00:31\n
These settings all belong to a net.solarnetwork.node.io.modbus
component. The meaning of the CSV columns is discussed in the following section.
Settings files are CSV (comma separated values) files, easily exported from spreadsheet applications like Microsoft Excel or Google Sheets. The CSV must include a header row, which is skipped. All other rows will be processed as settings.
The Settings CSV format is quite general and contains the following columns:
# Name Description 1 key A unique identifier for the service the setting applies to. 2 type A unique identifier for the setting with the service specified by key
, typically using standard property syntax. 3 value The setting value. 4 flags An integer bitmask of flags associated with the setting. See the flags section for more info. 5 modified The date the setting was last modified, in yyyy-MM-dd HH:mm:ss
format. To understand the key
and type
values required for a given component requires consulting the documentation of the plugin that provides that component. You can get a pretty good picture of what the values are by exporting the settings after configuring a component in SolarNode. Typically the key
value will mirror a plugin's Java package name, and type
follows a JavaScript-like property accessor syntax representing a configurable property on the component.
The type
setting value usually defines a component property using a JavaScript-like syntax with these rules:
name
a property named name
Nested property name.subname
a nested property subname
on a parent property name
List property name[0]
the first element of an indexed list property named name
Map property name['key']
the key
element of the map property name
These rules can be combined into complex expressions, for example propIncludes[0].name
or delegate.connectionFactory.propertyFilters['UID']
.
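To make the property-path rules above concrete, here is an illustrative Python sketch (not SolarNode code, which is Java) that resolves such a path against nested dictionaries and lists; the `config` data is invented for the example:

```python
import re

# Resolve a setting "type" path such as propIncludes[0].name or
# delegate.connectionFactory.propertyFilters['UID'] against nested data.
def resolve(root, path):
    # Split the path into dotted names, [0] list indexes, and ['key'] map keys.
    tokens = re.findall(r"\['([^']*)'\]|\[(\d+)\]|([^.\[\]]+)", path)
    value = root
    for map_key, index, name in tokens:
        if name:
            value = value[name]          # simple or nested property
        elif index:
            value = value[int(index)]    # indexed list element
        else:
            value = value[map_key]       # map entry
    return value

config = {'propIncludes': [{'name': 'watts'}],
          'delegate': {'connectionFactory': {'propertyFilters': {'UID': 'Modbus Port'}}}}
```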
Each setting has a set of flags that can be associated with it. The following table outlines the bit offset for each flag along with a description:
# Name Description 0 Ignore modification date If this flag is set then changes to the associated setting will not trigger a new auto backup. 1 Volatile If this flag is set then changes to the associated setting will not trigger an internal \"setting changed\" event to be broadcast. Note these are bit offsets, so the decimal value to ignore modification date is 1
, to mark as volatile is 2
, and for both is 3
.
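The bit-offset arithmetic above can be sketched directly:

```python
# Flag values from the table above: each flag occupies one bit offset,
# and the decimal flags column is the bitwise OR of the set bits.
IGNORE_MODIFICATION_DATE = 1 << 0  # bit offset 0 -> decimal 1
VOLATILE = 1 << 1                  # bit offset 1 -> decimal 2

both = IGNORE_MODIFICATION_DATE | VOLATILE  # decimal 3
```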
Many plugins provide component factories which allow you to configure any number of instances of that component. Each component instance is assigned a unique identifier when it is created. In the SolarNode setup app, the component instance identifiers appear throughout the UI:
In the previous example CSV the Modbus I/O plugin allows you to configure any number of Modbus connection components, each with their own specific settings. That is an example of a component factory. The settings CSV will include a special row to indicate that such a factory component should be activated, using a unique identifier, and then all the settings associated with that factory instance will have that unique identifier appended to its key
values.
Going back to that example CSV, this is the row that activates a Modbus I/O component instance with an identifier of 1
:
net.solarnetwork.node.io.modbus.FACTORY,1,1,0,2014-03-01 21:00:31\n
The syntax for the key
column is simply the service identifier followed by .FACTORY
. Then the type
and value
columns are both set to the same unique identifier. In this example that identifier is 1
. For all settings specific to a factory component, the key
column will be the service identifier followed by .IDENTIFIER
where IDENTIFIER
is the unique instance identifier.
Here is an example that shows two factory instances configured: Lighting
and HVAC
. Each have a different serialParams.portName
setting value configured:
net.solarnetwork.node.io.modbus.Lighting,serialParams.portName,/dev/cuaU0,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.HVAC,serialParams.portName,/dev/ttyUSB0,0,2014-03-01 21:01:31\nnet.solarnetwork.node.io.modbus.FACTORY,Lighting,Lighting,0,2014-03-01 21:00:31\nnet.solarnetwork.node.io.modbus.FACTORY,HVAC,HVAC,0,2014-03-01 21:00:31\n
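As a sketch of how the `.FACTORY` rows above could be picked out, this Python snippet parses a settings CSV and lists the activated factory instances. A header row is included because, as noted earlier, the importer expects (and skips) one; the parsing code itself is illustrative, not SolarNode's:

```python
import csv
import io

# A key ending in '.FACTORY' marks an instance-activation row, whose
# type column holds the unique instance identifier.
data = '''key,type,value,flags,modified
net.solarnetwork.node.io.modbus.Lighting,serialParams.portName,/dev/cuaU0,0,2014-03-01 21:01:31
net.solarnetwork.node.io.modbus.FACTORY,Lighting,Lighting,0,2014-03-01 21:00:31
net.solarnetwork.node.io.modbus.FACTORY,HVAC,HVAC,0,2014-03-01 21:00:31
'''
reader = csv.reader(io.StringIO(data))
next(reader)  # skip the header row, as the settings importer does
instances = [row[1] for row in reader if row[0].endswith('.FACTORY')]
```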
"},{"location":"users/settings/#auto-settings","title":"Auto settings","text":"SolarNode settings can also be configured through Auto Settings, applied when SolarNode starts up, by placing Settings CSV files in the /etc/solarnode/auto-settings.d
directory. These settings are applied only if they don't already exist or the modified date in the settings file is newer than the date they were previously applied.
SolarFlux is the name of a real-time cloud-based service for datum using a publish/subscribe integration model. SolarNode supports publishing datum to SolarFlux and your own applications can subscribe to receive datum messages as they are published.
SolarFlux is based on MQTT. To integrate with SolarFlux you use a MQTT client application or library. See the SolarFlux Integration Guide for more information.
"},{"location":"users/solarflux/#solarflux-upload-service","title":"SolarFlux Upload Service","text":"SolarNode provides the SolarFlux Upload Service plugin that posts datum captured by SolarNode plugins to SolarFlux.
"},{"location":"users/solarflux/#mqtt-message-format","title":"MQTT message format","text":"Each datum message is published as a CBOR encoded map by default, to a MQTT topic based on the datum's source ID. This is essentially a JSON object. The map keys are the datum property names. You can configure a Datum Encoder to encode datum into a different format, by configuring a filter. For example, the Protobuf Datum Encoder supports encoding datum into Protobuf messages.
Messages are published with the MQTT retained
flag set by default, which means the most recently published datum is saved by SolarFlux. When an application subscribes to a topic it will immediately receive any retained message for that topic. In this way, SolarFlux will provide a \"most recent\" snapshot of all datum across all nodes and sources.
{\n\"_DatumType\": \"net.solarnetwork.node.domain.ACEnergyDatum\",\n\"_DatumTypes\": [\n\"net.solarnetwork.node.domain.ACEnergyDatum\",\n\"net.solarnetwork.node.domain.EnergyDatum\",\n\"net.solarnetwork.node.domain.Datum\",\n\"net.solarnetwork.node.domain.GeneralDatum\"\n],\n\"apparentPower\": 2797,\n\"created\": 1545167905344,\n\"current\": 11.800409317016602,\n\"phase\": \"PhaseB\",\n\"phaseVoltage\": 409.89337158203125,\n\"powerFactor\": 1.2999000549316406,\n\"reactivePower\": -1996,\n\"realPower\": 1958,\n\"sourceId\": \"Ph2\",\n\"voltage\": 236.9553680419922,\n\"watts\": 1958\n}\n
"},{"location":"users/solarflux/#mqtt-message-topics","title":"MQTT message topics","text":"The MQTT topic each datum is published to is derived from the node ID and datum source ID, according to this pattern:
node/N/datum/A/S\n
Pattern Element Description N
The node ID the datum was captured on A
An aggregation key; will be 0
for the \"raw\" datum captured in SolarNode S
The datum source ID; note that any leading /
in the source ID is stripped from the topic. Example MQTT topics: node/1/datum/0/Meter\nnode/2/datum/0/Building1/Room1/Light1\nnode/2/datum/0/Building1/Room1/Light2\n
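The topic pattern above can be sketched as a small helper (illustrative only; the function name is invented):

```python
# Build a SolarFlux datum topic per the node/N/datum/A/S pattern:
# the aggregation key is '0' for raw datum, and any leading '/' in
# the source ID is stripped from the topic.
def datum_topic(node_id, source_id, agg='0'):
    return 'node/%s/datum/%s/%s' % (node_id, agg, source_id.lstrip('/'))
```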
"},{"location":"users/solarflux/#log-datum-stream","title":"Log datum stream","text":"The EventAdmin
Appender is supported, and log events are turned into a datum stream and published to SolarFlux. The log timestamps are used as the datum timestamps.
The topic assigned to log events is log/
with the log name appended. Period characters (.
) in the log name are replaced with slash characters (/
). For example, a log name net.solarnetwork.node.datum.modbus.ModbusDatumDataSource
will be turned into the topic log/net/solarnetwork/node/datum/modbus/ModbusDatumDataSource
.
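The log-name-to-topic mapping just described is a simple transformation (sketch only; the function name is invented):

```python
# Map a log name to its SolarFlux topic: prefix 'log/' and replace
# period characters with slashes.
def log_topic(logger_name):
    return 'log/' + logger_name.replace('.', '/')
```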
The datum stream consists of the following properties:
Property Class. Type Descriptionlevel
s
String The log level name, e.g. TRACE
, DEBUG
, INFO
, WARN
, ERROR
, or FATAL
. priority
i
Integer The log level priority (lower values have more priority), e.g. 600
, 500
, 400
, 300
, 200
, or 100
. name
s
String The log name. msg
s
String The log message. exMsg
s
String An exception message, if an exception was included. exSt
s
String A newline-delimited list of stack trace element values, if an exception was included."},{"location":"users/solarflux/#settings","title":"Settings","text":"The SolarFlux Upload Service ships with default settings that work out-of-the-box without any configuration. There are many settings you can change to better suit your needs, however.
Each component configuration contains the following overall settings:
Setting Description Host The URI for the SolarFlux server to connect to. Normally this is influx.solarnetwork.net:8884
. Username The MQTT username to use. Normally this is solarnode
. Password The MQTT password to use. Normally this is not needed as the node's certificate is used for authentication. Exclude Properties A regular expression to match property names on all datum sources to exclude from publishing. Required Mode If configured, an operational mode that must be active for any data to be published. Maximum Republish If offline message persistence has been configured, then the maximum number of offline messages to publish in one go. See the offline persistence section for more information. Reliability The MQTT quality of service level to use. Normally the default of At most once is sufficient. Version The MQTT protocol version to use. Starting with version 5, MQTT topic aliases will be used if the server supports it, which can save a significant amount of network bandwidth when long source IDs are in use. Retained Toggle the MQTT retained message flag. When enabled the MQTT server will store the most recently published message on each topic so it is immediately available when clients connect. Wire Logging Toggle verbose logging on/off to support troubleshooting. The messages are logged to the net.solarnetwork.mqtt
topic at DEBUG
level. Filters Any number of datum filter configurations. For TLS-encrypted connections, SolarNode will make the node's own X.509 certificate available for client authentication.
"},{"location":"users/solarflux/#filter-settings","title":"Filter settings","text":"Each component can define any number of filters, which are used to manipulate the datum published to SolarFlux, such as:
The filter settings can be very useful to constrain how much data is sent to SolarFlux, for example on nodes using mobile internet connections where the cost of posting data is high.
A filter can configure a Datum Encoder to encode the MQTT message with, if you want to use a format other than the default CBOR encoding. This can be combined with a Source ID pattern to encode specific sources with specific encoders. For example when using the Protobuf Datum Encoder a single Protobuf message type is supported per encoder. If you want to encode different datum sources into different Protobuf messages, you would configure one encoder per message type, and then one filter per source ID with the corresponding encoder.
Note
All filters are applied in the order they are defined, and then the first filter with a Datum Encoder configured that matches the filter's Source ID pattern will be used to encode the datum. If no Datum Encoder is configured, the default CBOR encoding will be used.
Each filter configuration contains the following settings:
Setting Description Source ID A case-insensitive regular expression to match against datum source IDs. If defined, this filter will only be applied to datum with matching source ID values. If not defined this filter will be applied to all datum. For example^solar
would match any source ID starting with solar. Datum Filter The Service Name of a Datum Filter component to apply before encoding and posting datum. Required Mode If configured, an operational mode that must be active for this filter to be applied. Datum Encoder The Service Name if a Datum Encoder component to encode datum with. The encoder will be passed a java.util.Map
object with all the datum properties. If not configured then CBOR will be used. Limit Seconds The minimum number of seconds to limit datum that match the configured Source ID pattern. If datum are produced faster than this rate, they will be filtered out. Set to 0
or leave empty for no limit. Property Includes A list of case-insensitive regular expressions to match against datum property names. If configured, only properties that match one of these expressions will be included in the filtered output. For example ^watt
would match any property starting with watt. Property Excludes A list of case-insensitive regular expressions to match against datum property names. If configured, any property that match one of these expressions will be excluded from the filtered output. For example ^temp
would match any property starting with temp. Exclusions are applied after property inclusions. Warning
The datum sourceId
and created
properties will be affected by the property include/exclude filters! If you define any include filters, you might want to add an include rule for ^created$
. You might like to have sourceId
removed to conserve bandwidth, given that value is part of the MQTT topic the datum is posted on and thus redundant.
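As an illustrative sketch of the include/exclude behaviour described above (not SolarNode's actual implementation, which is Java): patterns are case-insensitive regular expressions, exclusions are applied after inclusions, and an explicit `^created$` include keeps the timestamp when include rules are in force. The datum values are invented for the example:

```python
import re

def filter_props(props, includes=(), excludes=()):
    # Keep only properties matching an include pattern (if any are defined),
    # then drop any property matching an exclude pattern.
    out = dict(props)
    if includes:
        out = {k: v for k, v in out.items()
               if any(re.search(p, k, re.I) for p in includes)}
    return {k: v for k, v in out.items()
            if not any(re.search(p, k, re.I) for p in excludes)}

datum = {'created': 1545167905344, 'sourceId': 'Ph2', 'watts': 1958, 'tempF': 72}
kept = filter_props(datum, includes=['^watt', '^created$'], excludes=['^temp'])
```

Here `sourceId` is dropped because no include pattern matches it, which also conserves bandwidth as described above.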
By default if the connection to the SolarFlux server is down for any reason, all messages that would normally be published to the server will be discarded. This is suitable for most applications that rely on SolarFlux to view real-time status updates only, and SolarNode uploads datum to SolarNet for long-term persistence. For applications that rely on SolarFlux for more, it might be desirable to configure SolarNode to locally cache SolarFlux messages when the connection is down, and then publish those cached messages when the connection is restored. This can be accomplished by deploying the MQTT Persistence plugin.
When that plugin is available, all messages processed by this service will be saved locally when the MQTT connection is down, and then posted once the MQTT connection comes back up. Note the following points to consider:
false
.TODO
"},{"location":"users/datum-filters/","title":"Datum Filters","text":"Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. Datum Filters vary wildly in the functionality they provide; here are some examples of the things they can do:
Datum Filters do not create datum
It is helpful to remember that Datum Filters do not create datum, they only manipulate datum created elsewhere, typically by datum data sources.
There are four main places where datum filters can be applied:
All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, each filter's result passed to the next available filter until all filters have been applied.
Conceptual diagram of the Datum Queue, processing datum along with filters manipulating them
At the end of processing, the datum is either uploaded to SolarNet or saved locally for later upload.
Most of the time datum are uploaded to SolarNet immediately after processing. If the network is down, or SolarNode is configured to only upload datum in batches, then datum are saved locally in SolarNode, and a periodic job will attempt to upload them later on, in batches.
See the Setup App Datum Queue section for information on how to configure the Datum Queue.
When to configure filters on the Datum Queue, as opposed to other places?
The Datum Queue is a great place to configure filters that must be processed at most once per datum, and do not depend on what time the datum is uploaded to SolarNet.
"},{"location":"users/datum-filters/#global-datum-filters","title":"Global Datum Filters","text":"Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Note
Some filters support both Global and User based filter configuration, and often you can achieve the same overall result in multiple ways. Global filters are convenient for the subset of filters that support Global configuration, but for complex filtering often it can be easier to configure all filters as User filters, using the Global Datum Filter Chain as needed.
"},{"location":"users/datum-filters/#global-datum-filter-chain","title":"Global Datum Filter Chain","text":"The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
"},{"location":"users/datum-filters/#solarflux-datum-filters","title":"SolarFlux Datum Filters","text":"TODO
"},{"location":"users/datum-filters/chain/","title":"Filter Chain","text":"The Datum Filter Chain is a User Datum Filter that you configure with a list, or chain, of other User Datum Filters. When the Filter Chain executes, it executes each of the configured Datum Filters, in the order defined. This filter can be used like any other Datum Filter, allowing multiple filters to be applied in a defined order.
A Filter Chain acts like an ordered group of Datum Filters
Tip
Some services support configuring only a single Datum Filter setting. You can use a Filter Chain to apply multiple filters in those services.
"},{"location":"users/datum-filters/chain/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Available Filters A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to include that filter in the chain. Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Datum Filters The list of Service Name values of User Datum Filter components to apply to datum."},{"location":"users/datum-filters/control-updater/","title":"Control Updater Datum Filter","text":"The Control Updater Datum Filter provides a way to update controls with the result of an expression, optionally populating the expression result as a datum property.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/control-updater/#settings","title":"Settings","text":" The screen shot shows a filter that would toggle the /power/switch/1
control on/off based on the frequency
property in the /power/1
datum stream: on when the frequency is 50 or higher, off otherwise.
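That toggle logic can be expressed as a small Python sketch (hypothetical — SolarNode evaluates the real rule as a configured expression, not Python):

```python
def control_value(datum):
    """Decide the /power/switch/1 state from the frequency property of a
    /power/1 datum: True (on) when frequency >= 50, False (off) otherwise."""
    freq = datum.get("frequency")
    if freq is None:
        return None  # no frequency property -> no control update
    return freq >= 50

on = control_value({"frequency": 50.02})   # True -> switch on
off = control_value({"frequency": 49.8})   # False -> switch off
```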
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Control Configurations A list of control expression configurations. Each control configuration contains the following settings:
Setting Description Control ID The ID of the control to update with the expression result. Property The optional datum property to store the expression result in. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/control-updater/#expressions","title":"Expressions","text":"See the Expressions guide for general expressions reference. The root object is a DatumExpressionRoot
that lets you treat all datum properties, and filter parameters, as expression variables directly.
"},{"location":"users/datum-filters/downsample/","title":"Downsample Datum Filter","text":"The Downsample Datum Filter provides a way to down-sample higher-frequency datum samples into lower-frequency (averaged) datum samples. The filter collects a configurable number of samples and then generates a down-sampled sample that includes the average of each collected instantaneous property, along with the minimum and maximum values of each averaged property.
This filter is provided by the Standard Datum Filters plugin.
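The averaging and min/max behaviour can be illustrated with a Python sketch (hypothetical names; it handles one property and uses the default %s_min/%s_max templates):

```python
import statistics

def downsample(samples, prop, scale=3, min_tpl="%s_min", max_tpl="%s_max"):
    """Average one instantaneous property over the collected samples, adding
    min/max properties named via the configured templates."""
    values = [s[prop] for s in samples]
    return {
        prop: round(statistics.fmean(values), scale),  # Decimal Scale rounding
        min_tpl % prop: min(values),
        max_tpl % prop: max(values),
    }

out = downsample([{"watts": 100}, {"watts": 105}, {"watts": 101}], "watts", scale=1)
# out == {"watts": 102.0, "watts_min": 100, "watts_max": 105}
```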
"},{"location":"users/datum-filters/downsample/#settings","title":"Settings","text":"Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Sample Count The number of samples to average over. Decimal Scale A maximum number of digits after the decimal point to round to. Set to0
to round to whole numbers. Property Excludes A list of property names to exclude. Min Property Template A string format to use for computed minimum property values. Use %s
as the placeholder for the original property name, e.g. %s_min
. Max Property Template A string format to use for computed maximum property values. Use %s
as the placeholder for the original property name, e.g. %s_max
."},{"location":"users/datum-filters/expression/","title":"Expression Datum Filter","text":"The Expression Datum Filter provides a way to generate new properties by evaluating expressions against existing properties.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/expression/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to derive datum property values from other property values. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/expression/#expression-settings","title":"Expression settings","text":"Each expression configuration contains the following settings:
Setting Description Property The datum property to store the expression result in. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/expression/#expressions","title":"Expressions","text":"See the SolarNode Expressions guide for general expressions reference. The root object is a DatumExpressionRoot
that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
datum
Datum
A Datum
object, populated with data from all property and virtual meter configurations. props
Map<String,Object>
Simple Map based access to the properties in datum
, and transform parameters, to simplify expressions. The following methods are available:
Function Arguments Result Descriptionhas(name)
String
boolean
Returns true
if a property named name
is defined. hasLatest(source)
String
boolean
Returns true
if a datum with source ID source
is available via the latest(source)
function. latest(source)
String
DatumExpressionRoot
for the latest available datum matching the given source ID, or null
if not available."},{"location":"users/datum-filters/expression/#expression-examples","title":"Expression examples","text":"Assuming a datum sample with properties like the following:
Property Valuecurrent
7.6
voltage
240.1
status
Error
Then here are some example expressions and the results they would produce:
Expression Result Commentvoltage * current
1824.76
Simple multiplication of two properties. props['voltage'] * props['current']
1824.76
Another way to write the previous expression. Can be useful if the property names contain non-alphanumeric characters, like spaces. has('frequency') ? 1 : null
null
Uses the ?:
if/then/else operator to evaluate to null
because the frequency
property is not available. When an expression evaluates to null
then no property will be added to the output samples. current > 7 or voltage > 245 ? 1 : null
1
Uses comparison and logic operators to evaluate to 1
because current
is greater than 7
. voltage * current * (hasLatest('battery') ? 1.0 - latest('battery')['soc'] : 1)
364.952
Assuming a battery
datum with a soc
property value of 0.8
then the expression resolves to 7.6 * 240.1 * (1.0 - 0.8)
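The example results above can be checked numerically with plain Python (this is only arithmetic verification — SolarNode evaluates the actual expressions with its configured expression language):

```python
props = {"current": 7.6, "voltage": 240.1, "status": "Error"}

# voltage * current
assert round(props["voltage"] * props["current"], 2) == 1824.76

# has('frequency') ? 1 : null -- frequency is absent, so the result is null
assert ("frequency" in props) is False

# current > 7 or voltage > 245 ? 1 : null
assert (1 if props["current"] > 7 or props["voltage"] > 245 else None) == 1

# With latest('battery')['soc'] == 0.8:
soc = 0.8
assert round(props["voltage"] * props["current"] * (1.0 - soc), 3) == 364.952
```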
."},{"location":"users/datum-filters/join/","title":"Join Datum Filter","text":"The Join Datum Filter provides a way to merge the properties of multiple datum streams into a new derived datum stream.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/join/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Output Source ID The source ID of the merged datum stream. Placeholders are allowed. Coalesce Threshold When 2
or more then wait until datum from this many different source IDs have been encountered before generating an output datum. Once a coalesced datum has been generated the tracking of input sources resets and another datum will only be generated after the threshold is met again. If 1
or less, then generate output datum for all input datum. Swallow Input If enabled, then filter out input datum after merging. Otherwise leave the input datum as-is. Source Property Mappings A list of source IDs with associated property name templates to rename the properties with. Each template must contain a {p}
parameter which will be replaced by the property names merged from datum encountered with the associated source ID. For example {p}_s1
would map an input property watts
to watts_s1
. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/join/#source-property-mappings-settings","title":"Source Property Mappings settings","text":"Each source property mapping configuration contains the following settings:
Setting Description Source ID A source ID pattern to apply the associated Mapping to. Any capture groups (parts of the pattern between()
groups) are provided to the Mapping template. Mapping A property name template with a {p}
parameter for an input property name to be mapped to a merged (output) property name. Pattern capture groups from Source ID are available starting with {1}
. For example {p}_s1
would map an input property watts
to watts_s1
. Unmapped properties are copied
If a matching source property mapping does not exist for an input datum source ID then the property names of that datum are used as-is.
"},{"location":"users/datum-filters/join/#source-mapping-examples","title":"Source mapping examples","text":"The Source ID pattern can define capture groups that will be provided to the Mapping template as numbered parameters, starting with {1}
. For example, assuming an input datum property watts
, then:
/power/main
/power/
{p}_main
watts_main
/power/1
/power/(\\d+)$
{p}_s{1}
watts_s1
/power/2
/power/(\\d+)$
{p}_s{1}
watts_s2
/solar/1
/(\\w+)/(\\d+)$
{p}_{1}{2}
watts_solar1
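These mapping rows can be reproduced with a short Python sketch of the template rules, where {p} is the input property name and {1}, {2}… are Source ID capture groups (the helper name is hypothetical):

```python
import re

def map_property(source_id, pattern, template, prop):
    """Rename an input property via a Mapping template."""
    m = re.search(pattern, source_id)
    if not m:
        return prop  # unmapped properties are copied as-is
    out = template.replace("{p}", prop)
    for i, group in enumerate(m.groups(), start=1):
        out = out.replace("{%d}" % i, group)
    return out

assert map_property("/power/main", r"/power/", "{p}_main", "watts") == "watts_main"
assert map_property("/power/1", r"/power/(\d+)$", "{p}_s{1}", "watts") == "watts_s1"
assert map_property("/solar/1", r"/(\w+)/(\d+)$", "{p}_{1}{2}", "watts") == "watts_solar1"
```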
To help visualize property mapping with a more complete example, let's imagine we have some datum streams being collected and the most recent datum from each look like this:
/meter/1 /meter/2 /solar/1{\n \"watts\": 3213\n}
{\n \"watts\": -842\n}
{\n \"watts\" : 4055,\n \"current\": 16.89583\n}
Here are some examples of how some source mapping expressions could be defined, including how multiple mappings can be used at once:
Source ID Patterns Mappings Result/(\\w+)/(\\d+)
{1}_{p}{2}
{\n \"power_watts1\" : 3213,\n \"power_watts2\" : -842,\n \"solar_watts1\" : 4055,\n \"solar_current\" : 16.89583\n}
/power/(\\d+)
/solar/1
{p}_{1}
{p}
{\n \"watts_1\" : 3213,\n \"watts_2\" : -842,\n \"watts\" : 4055,\n \"current\" : 16.89583\n}"},{"location":"users/datum-filters/op-mode/","title":"Operational Mode Datum Filter","text":"
The Operational Mode Datum Filter provides a way to evaluate expressions to toggle operational modes. When an expression evaluates to true
the associated operational mode is activated. When an expression evaluates to false
the associated operational mode is deactivated.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/op-mode/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to toggle operational modes. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/op-mode/#expression-settings","title":"Expression settings","text":"Each expression configuration contains the following settings:
Setting Description Mode The operational mode to toggle. Expire Seconds If configured and greater than0
, the number of seconds after activating the operational mode to automatically deactivate it. If not configured or 0
then the operational mode will be deactivated when the expression evaluates to false
. See below for more information. Property If configured, the datum property to store the expression result in. See below for more information. Property Type The datum property type to use if Property is configured. See below for more information. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/op-mode/#expire-setting","title":"Expire setting","text":"When configured the expression will never deactivate the operational mode directly. When evaluating the given expression, if it evaluates to true
the mode will be activated and configured to deactivate after this many seconds. If the operational mode was already active, the expiration will be extended by this many seconds.
This configuration can be thought of like a time out as used on motion-detecting lights: each time motion is detected the light is turned on (if not already on) and a timer set to turn the light off after so many seconds of no motion being detected.
Note that the operational modes service might actually deactivate the given mode a short time after the configured expiration.
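The motion-light analogy can be sketched in Python (hypothetical names, time in plain seconds — the real service works with its own clock and scheduling):

```python
class ExpiringMode:
    """Expire-style mode toggling: each true evaluation (re)activates the
    mode and pushes the deactivation time out; false never deactivates."""

    def __init__(self, expire_seconds):
        self.expire_seconds = expire_seconds
        self.expires_at = None  # None means inactive

    def evaluate(self, result, now):
        if result:  # true -> activate, or extend the expiration
            self.expires_at = now + self.expire_seconds
        # false is ignored when an expire setting is configured

    def active(self, now):
        return self.expires_at is not None and now < self.expires_at

mode = ExpiringMode(60)
mode.evaluate(True, now=0)    # activate; expires at t=60
mode.evaluate(True, now=30)   # extend; now expires at t=90
# mode.active(now=89) -> True, mode.active(now=90) -> False
```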
"},{"location":"users/datum-filters/op-mode/#property-setting","title":"Property setting","text":"A property does not have to be populated. If you provide a Property name to populate, the value of the datum property depends on property type configured:
Type Description Instantaneous The property value will be1
or 0
based on true
and false
expression results. Status The property will be the expression result, so true
or false
. Tag A tag named as the configured property will be added if the expression is true
, or removed if false
."},{"location":"users/datum-filters/op-mode/#expressions","title":"Expressions","text":"See the Expressions section for general expressions reference. The expression must evaluate to a boolean (true
or false
) result. When it evaluates to true
the configured operational mode will be activated. When it evaluates to false
the operational mode will be deactivated (unless an expire setting has been configured).
The root object is a datum samples expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
Property Type Descriptiondatum
GeneralNodeDatum
A GeneralNodeDatum
object, populated with data from all property and virtual meter configurations. props
Map<String,Object>
Simple Map based access to the properties in datum
, and transform parameters, to simplify expressions. The following methods are available:
Function Arguments Result Descriptionhas(name)
String
boolean
Returns true
if a property named name
is defined."},{"location":"users/datum-filters/op-mode/#expression-examples","title":"Expression examples","text":"Assuming a datum sample with properties like the following:
Property Valuecurrent
7.6
voltage
240.1
status
Error
Then here are some example expressions and the results they would produce:
Expression Result Commentvoltage * current > 1800
true
Since voltage * current
is 1824.76, the expression is true
. status != 'Error'
false
Since status
is Error
the expression is false
."},{"location":"users/datum-filters/parameter-expression/","title":"Parameter Expression Datum Filter","text":"The Parameter Expression Datum Filter provides a way to generate filter parameters by evaluating expressions against existing properties. The generated parameters will be available to any further datum filters in the same filter chain.
Tip
Parameters are useful as temporary variables that you want to use during datum processing but do not want to include as datum properties that get posted to SolarNet.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/parameter-expression/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Expressions A list of expression configurations that are evaluated to derive parameter values from other property values. Use the + and - buttons to add/remove expression configurations.
"},{"location":"users/datum-filters/parameter-expression/#expression-settings","title":"Expression settings","text":"Each expression configuration contains the following settings:
Setting Description Parameter The filter parameter name to store the expression result in. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/parameter-expression/#expressions","title":"Expressions","text":"See the Expressions section for general expressions reference. This filter supports Datum Expressions that let you treat all datum properties, and filter parameters, as expression variables directly.
"},{"location":"users/datum-filters/property/","title":"Property Datum Filter","text":"The Property Datum Filter provides a way to remove properties of datum. This can help if some component generates properties that you don't actually need to use.
For example you might have a plugin that collects data from an AC power meter that capture power, energy, quality, and other properties each time a sample is taken. If you are only interested in capturing the power and energy properties you could use this component to remove all the others.
This component can also throttle individual properties over time, so that individual properties are posted less frequently than the rate the whole datum it is a part of is sampled at. For example a plugin for an AC power meter might collect datum once per minute, and you want to collect the energy properties of the datum every minute but the quality properties only once every 10 minutes.
The general idea for filtering properties is to configure rules that define which datum sources you want to filter, along with a list of properties to include and/or a list to exclude. All matching is done using regular expressions, which can help make your rules concise.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/property/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Property Includes A list of property names to include, removing all others. This is a list of case-insensitive patterns to match against datum property names. If any inclusion patterns are configured then only properties matching one of these patterns will be included in datum. Any property name that does not match one of these patterns will be removed. Property Excludes A list of property names to exclude. This is a list of case-insensitive patterns to match against datum property names. If any exclusion expressions are configured then any property that matches one of these expressions will be removed. Exclusion expressions are processed after inclusion expressions when both are configured. Use the + and - buttons to add/remove property include/exclude patterns.
Each property inclusion setting contains the following settings:
Setting Description Name The property name pattern to include. Limit Seconds A throttle limit, in seconds, to apply to included properties. The minimum number of seconds to limit properties that match the configured property inclusion pattern. If properties are produced faster than this rate, they will be filtered out. Leave empty (or0
) for no throttling."},{"location":"users/datum-filters/split/","title":"Split Datum Filter","text":"The Split Datum Filter provides a way to split the properties of a datum stream into multiple new derived datum streams.
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/split/#settings","title":"Settings","text":"In the example screen shot shown above, the /power/meter/1
datum stream is split into two datum streams: /meter/1/power
and /meter/1/energy
. Properties with names containing current
, voltage
, or power
(case-insensitive) will be copied to /meter/1/power
. Properties with names containing hour
(case-insensitive) will be copied to /meter/1/energy
.
Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Swallow Input If enabled, then discard input datum after splitting. Otherwise leave the input datum as is. Property Source Mappings A list of property name regular expression with associated source IDs to copy matching properties to."},{"location":"users/datum-filters/split/#property-source-mappings-settings","title":"Property Source Mappings settings","text":"Use the + and - buttons to add/remove Property Source Mapping configurations.
Each property source mapping configuration contains the following settings:
Setting Description Property A property name case-sensitive regular expression to match on the input datum stream. You can enable case-insensitive matching by including a(?i)
prefix. Source ID The destination source ID to copy the matching properties to. Supports placeholders. Tip
If multiple property name expressions match the same property name, that property will be copied to all the datum streams of the associated source IDs.
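The splitting behaviour from the /power/meter/1 example can be sketched in Python (hypothetical helper; the real filter also applies placeholders and source ID matching):

```python
import re

def split_datum(datum, mappings):
    """Copy each property to every output source whose Property expression
    matches its name; one property may land in several output streams."""
    out = {}
    for pattern, source_id in mappings:
        matched = {k: v for k, v in datum.items() if re.search(pattern, k)}
        if matched:
            out.setdefault(source_id, {}).update(matched)
    return out

mappings = [
    (r"(?i)current|voltage|power", "/meter/1/power"),
    (r"(?i)hour", "/meter/1/energy"),
]
streams = split_datum({"voltage": 240.1, "current": 4.2, "wattHours": 12345}, mappings)
# streams == {"/meter/1/power": {"voltage": 240.1, "current": 4.2},
#             "/meter/1/energy": {"wattHours": 12345}}
```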
"},{"location":"users/datum-filters/tariff/","title":"Time-based Tariff Datum Filter","text":"The Tariff Datum Filter provides a way to inject time-based tariff rates based on a flexible tariff schedule defined with various time constraints.
This filter is provided by the Tariff Filter plugin.
"},{"location":"users/datum-filters/tariff/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Metadata Service The Service Name of the Metadata Service to obtain the tariff schedule from. See below for more information. Metadata Path The metadata path that will resolve the tariff schedule from the configured Metadata Service. Language An IETF BCP 47 language tag to parse the tariff data with. If not configured then the default system language will be assumed. First Match If enabled, then apply only the first tariff that matches a given datum date. If disabled, then apply all tariffs that match. Schedule Cache The amount of seconds to cache the tariff schedule obtained from the configured Metadata Service. Tariff Evaluator The Service Name of a Time-based Tariff Evaluator service to evaluate each tariff to determine if it should apply to a given datum. If not configured a default algorithm is used that matches all non-empty constraints in an inclusive manner, except for the time-of-day constraint which uses an exclusive upper bound."},{"location":"users/datum-filters/tariff/#metadata-service","title":"Metadata Service","text":"SolarNode provides a User Metadata Service component that this filter can use for the Metadata Service setting. This allows you to configure the tariff schedule as user metadata in SolarNetwork and then SolarNode will download the schedule and use it as needed.
You must configure a SolarNetwork security token to use the User Metadata Service. We recommend that you create a Data security token in SolarNetwork with a limited security policy that includes an API Path of just /users/meta
and a User Metadata Path of something granular like /pm/tariffs/**
. This will give SolarNode access to just the tariff metadata under the /pm/tariffs
metadata path.
The SolarNetwork API Explorer can be used to add the necessary tariff schedule metadata to your account. For example:
"},{"location":"users/datum-filters/tariff/#tariff-schedule-format","title":"Tariff schedule format","text":"The tariff schedule obtained from the configured Metadata Service uses a simple CSV-based format that can be easily exported from a spreadsheet. Each row represents a rule that includes:
Include a header row
A header row is required because the tariff rate names are defined there. The first 4 column names are ignored.
The schedule consists of 4 time constraint columns followed by one or more tariff rate columns. Each constraint is represented as a range, in the form start - end
. Whitespace is allowed around the -
character. If the start
and end
are the same, the range may be shortened to just start
. A range can be left empty to represent all values. The time constraint columns are:
3 Weekday range An inclusive weekday range. Weekdays can be specified as numbers (1-7), with Monday being 1
and Sunday being 7
, or abbreviations (Mon-Sun) or full names (Monday - Sunday). When using text names case does not matter and they will be parsed using the Language setting. 4 Time range An inclusive - exclusive time-of-day range. The time can be specified as whole hour numbers (0-24) or HH:MM
style (00:00
- 24:00
). Starting on column 5 of the tariff schedule are arbitrary rate values to add to datum when the corresponding constraints are satisfied. The name of the datum property is derived from the header row of the column, adapted according to the following rules:
Here are some examples of the header name to the equivalent property name:
Rate Header Name Datum Property Name TOU tou
Foo Bar foo_bar
This Isn't A Great Name! this_isn_t_a_great_name
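The examples above are consistent with a simple normalization, sketched here in Python (an approximation of the documented rules, not the exact implementation): lower-case the header, replace runs of non-alphanumeric characters with underscores, and trim leading/trailing underscores.

```python
import re

def property_name(header):
    """Approximate header-to-property-name conversion."""
    return re.sub(r"[^a-z0-9]+", "_", header.lower()).strip("_")

assert property_name("TOU") == "tou"
assert property_name("Foo Bar") == "foo_bar"
assert property_name("This Isn't A Great Name!") == "this_isn_t_a_great_name"
```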
"},{"location":"users/datum-filters/tariff/#example-schedule","title":"Example schedule","text":"Here's an example schedule with 4 rules and a single TOU rate (the *
stands for all values):
In CSV format the schedule would look like this:
Month,Day,Weekday,Time,TOU\nJan-Dec,,Mon-Fri,0-8,10.48\nJan-Dec,,Mon-Fri,8-24,11.00\nJan-Dec,,Sat-Sun,0-8,9.19\nJan-Dec,,Sat-Sun,8-24,11.21\n
When encoding into SolarNetwork metadata JSON, that same schedule would look like this when saved at the /pm/tariffs/schedule
path:
{\n\"pm\": {\n\"tariffs\": {\n\"schedule\": \"Month,Day,Weekday,Time,TOU\\nJan-Dec,,Mon-Fri,0-8,10.48\\nJan-Dec,,Mon-Fri,8-24,11.00\\nJan-Dec,,Sat-Sun,0-8,9.19\\nJan-Dec,,Sat-Sun,8-24,11.21\"\n}\n}\n}\n
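To see how that schedule resolves, here is a simplified Python sketch of rate lookup against the example CSV (it only checks the Weekday and Time columns — the Month column is all-inclusive and Day is empty in this example — and uses inclusive weekday ranges with an inclusive-exclusive time range):

```python
import csv, io

CSV = """Month,Day,Weekday,Time,TOU
Jan-Dec,,Mon-Fri,0-8,10.48
Jan-Dec,,Mon-Fri,8-24,11.00
Jan-Dec,,Sat-Sun,0-8,9.19
Jan-Dec,,Sat-Sun,8-24,11.21"""

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def tou_rate(weekday, hour):
    """Resolve the first matching TOU rate for a weekday abbreviation and
    whole hour of the day."""
    for row in csv.DictReader(io.StringIO(CSV)):
        lo, hi = row["Weekday"].split("-")
        start, end = (int(h) for h in row["Time"].split("-"))
        if (WEEKDAYS.index(lo) <= WEEKDAYS.index(weekday) <= WEEKDAYS.index(hi)
                and start <= hour < end):
            return float(row["TOU"])
    return None

# tou_rate("Mon", 7) -> 10.48; tou_rate("Fri", 8) -> 11.00 (8-24 is exclusive
# of 24 but inclusive of 8); tou_rate("Sat", 9) -> 11.21
```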
"},{"location":"users/datum-filters/throttle/","title":"Throttle Datum Filter","text":"The Throttle Datum Filter provides a way to throttle entire datum over time, so that they are posted to SolarNetwork less frequently than a plugin that collects the data produces them. This can be useful if you need a plugin to collect data at a high frequency for use internally by SolarNode but don't need to save such high resolution of data in SolarNetwork. For example, a plugin that monitors a device and responds quickly to changes in the data might be configured to sample data every second, but you only want to capture that data once per minute in SolarNetwork.
The general idea for filtering datum is to configure rules that define which datum sources you want to filter, along with a time limit to throttle matching datum by. Any datum matching those sources that is captured faster than the time limit will be filtered and not uploaded to SolarNetwork.
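The per-source throttling rule can be sketched in Python (hypothetical helper; time is in plain seconds):

```python
def make_throttle(limit_seconds):
    """Per-source-ID throttle: pass a datum only if at least limit_seconds
    have elapsed since the last datum that passed for that source ID."""
    last_seen = {}

    def allow(source_id, now):
        prev = last_seen.get(source_id)
        if prev is not None and now - prev < limit_seconds:
            return False  # too soon -- filter this datum out
        last_seen[source_id] = now
        return True

    return allow

allow = make_throttle(60)
# allow("/power/1", now=0)  -> True   (first datum always passes)
# allow("/power/1", now=30) -> False  (within the limit -> filtered)
# allow("/power/1", now=60) -> True   (limit elapsed -> passes)
# allow("/power/2", now=30) -> True   (source IDs are tracked independently)
```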
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/throttle/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with!
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Limit Seconds A throttle limit, in seconds, to apply to matching datum. The throttle limit is applied to datum by source ID. Before each datum is uploaded to SolarNetwork, the filter will check how long has elapsed since a datum with the same source ID was uploaded. If the elapsed time is less than the configured limit, the datum will not be uploaded."},{"location":"users/datum-filters/unchanged-property/","title":"Unchanged Property Filter","text":"The Unchanged Property Filter provides a way to discard individual datum properties that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Datum Filter for a filter that can discard entire unchanging datum (at the source ID level).
"},{"location":"users/datum-filters/unchanged-property/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Default Unchanged Max Seconds When greater than 0
then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). Use this setting to ensure a property is included occasionally, even if the property value has not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. Property Configurations A list of property settings."},{"location":"users/datum-filters/unchanged-property/#property-settings","title":"Property Settings","text":"Use the + and - buttons to add/remove Property configurations.
Each property source mapping configuration contains the following settings:
Setting Description Property A regular expression pattern to match against datum property names. All matching properties will be filtered. Unchanged Max Seconds When greater than 0
then the maximum number of seconds to discard unchanged properties within a single datum stream (source ID). This can be used to override the filter-wide Default Unchanged Max Seconds setting, or left blank to use the default value."},{"location":"users/datum-filters/unchanged/","title":"Unchanged Datum Filter","text":"The Unchanged Datum Filter provides a way to discard entire datum that have not changed within a datum stream.
This filter is provided by the Standard Datum Filters plugin.
Tip
See the Unchanged Property Filter for a filter that can discard individual unchanging properties within a datum stream.
"},{"location":"users/datum-filters/unchanged/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Unchanged Max Seconds When greater than 0
then the maximum number of seconds to refrain from publishing an unchanged datum within a single datum stream. Use this setting to ensure a datum is included occasionally, even if the datum properties have not changed. Having at least one value per hour in a datum stream is recommended. This time period is always relative to the last unfiltered property within a given datum stream seen by the filter. Property Pattern A property name pattern that limits the properties monitored for changes. Only property names that match this expression will be considered when determining if a datum differs from the previous datum within the datum stream."},{"location":"users/datum-filters/virtual-meter/","title":"Virtual Meter Datum Filter","text":"The Virtual Meter Datum Filter provides a way to derive an accumulating \"meter reading\" value out of an instantaneous property value over time. For example, if you have an irradiance sensor that allows you to capture instantaneous W/m2 power values, you could configure a virtual meter to generate Wh/m2 energy values.
Each virtual meter works with a single input datum property, typically an instantaneous property. The derived accumulating datum property will be named after that property with the time unit suffix appended. For example, an instantaneous irradiance
property using the Hours
time unit would result in an accumulating irradianceHours
property. The value is calculated as an average between the current and the previous instantaneous property values, multiplied by the amount of time that has elapsed between the two samples.
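The averaging calculation described above can be illustrated with a short sketch (hypothetical Python, not SolarNode's actual code): the reading advances by the average of the previous and current instantaneous values, multiplied by the elapsed time expressed in the meter's time unit.

```python
MS_PER_HOUR = 60 * 60 * 1000

def advance_reading(prev_reading, prev_value, curr_value, prev_ms, curr_ms):
    """Advance a virtual meter reading using the Hours time unit."""
    time_units = (curr_ms - prev_ms) / MS_PER_HOUR  # elapsed time in hours
    avg = (prev_value + curr_value) / 2             # average of the two samples
    return prev_reading + avg * time_units

# irradiance rises from 500 to 600 W/m2 over one minute:
# an average of 550 W/m2 for 1/60 of an hour adds about 9.1667 Wh/m2
reading = advance_reading(0.0, 500, 600, 0, 60_000)
print(round(reading, 4))  # 9.1667
```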
This filter is provided by the Standard Datum Filters plugin.
"},{"location":"users/datum-filters/virtual-meter/#settings","title":"Settings","text":"Each filter configuration contains the following overall settings:
Setting Description Service Name A unique ID for the filter, to be referenced by other components. Service Group An optional service group name to assign. Source ID A case-insensitive pattern to match the input source ID(s) to filter. If omitted then datum for all source ID values will be filtered, otherwise only datum with matching source ID values will be filtered. Required Mode If configured, an operational mode that must be active for this filter to be applied. Required Tag Only apply the filter on datum with the given tag. A tag may be prefixed with !
to invert the logic so that the filter only applies to datum without the given tag. Multiple tags can be defined using a ,
delimiter, in which case at least one of the configured tags must match to apply the filter. Virtual Meters Configure as many virtual meters as you like, using the + and - buttons to add/remove meter configurations."},{"location":"users/datum-filters/virtual-meter/#virtual-meter-settings","title":"Virtual Meter Settings","text":"The Virtual Meter settings define a single virtual meter.
Setting Description Property The name of the input datum property to derive the virtual meter values from. Property Type The type of the input datum property. Typically this will be Instantaneous
but when combined with an expression an Accumulating
property can be used. Reading Property The name of the output meter accumulating datum property to generate. Leave empty for a default name derived from Property and Time Unit. For example, an instantaneous irradiance
property using the Hours
time unit would result in an accumulating irradianceHours
property. Time Unit The time unit to record meter readings as. This value affects the name of the virtual meter reading property if Reading Property is left blank: it will be appended to the end of Property Name. It also affects the virtual meter output reading values, as they will be calculated in this time unit. Max Age The maximum time allowed between samples where the meter reading can advance. In case the node is not collecting samples for a period of time, this setting prevents the plugin from calculating an unexpectedly large reading value jump. For example if a node was turned off for a day, the first sample it captures when turned back on would otherwise advance the reading as if the associated instantaneous property had been active over that entire time. With this restriction, the node will record the new sample date and value, but not advance the meter reading until another sample is captured within this time period. Decimal Scale A maximum number of digits after the decimal point to round to. Set to 0
to round to whole numbers. Track Only On Change When enabled, then only update the previous reading date if the new reading value differs from the previous one. Rolling Average Count A count of samples to average the property value from. When set to something greater than 1
, then apply a rolling average of this many property samples and output that value as the instantaneous source property value. This has the effect of smoothing the instantaneous values to an average over the time period leading into each output sample. Defaults to 0
so no rolling average is applied. Add Instantaneous Difference When enabled, then include an output instantaneous property of the difference between the current and previous reading values. Instantaneous Difference Property The derived output instantaneous datum property name to use when Add Instantaneous Difference is enabled. By default this property will be derived from the Reading Property value with Diff
appended. Reading Value You can reset the virtual meter reading value with this setting. Note this is an advanced operation. If you submit a value for this setting, the virtual meter reading will be reset to this value such that the next datum the reading is calculated for will use this as the current meter reading. This will impact the datum stream's reported aggregate values, so you should be very sure this is something you want to do. For example if the virtual meter was at 1000
and you reset it 0
then that will appear as a -1000
drop in whatever the reading is measuring. If this occurs you can create a Reset
Datum auxiliary record to accommodate the reset value. Expressions Configure as many expressions as you like, using the + and - buttons to add/remove expression configurations."},{"location":"users/datum-filters/virtual-meter/#virtual-meter-expression-settings","title":"Virtual Meter Expression Settings","text":"A virtual meter can use expressions to customise how the output meter reading value is calculated. See the Expressions section for more information.
Setting Description Property The datum property to store the expression result in. This must match the Reading Property of a meter configuration. Keep in mind that if Reading Property is blank, the implied value is derived from Property and Time Unit. Property Type The datum property type to use. Expression The expression to evaluate. See below for more info. Expression Language The expression language to write Expression in."},{"location":"users/datum-filters/virtual-meter/#filter-parameters","title":"Filter parameters","text":"When the virtual meter filter is applied to a given datum, it will generate the following filter parameters, which will be available to other filters that are applied to the same datum after this filter.
Parameter Description {inputPropertyName}_diff
The difference between the current input property value and the previous input property value. The {inputPropertyName}
part of the parameter name will be replaced by the actual input property name. For example irradiance_diff
. {meterPropertyName}_diff
The difference between the current output meter property value and the previous output meter property value. The {meterPropertyName}
part of the parameter name will be replaced by the actual output meter property name. For example irradianceHours_diff
."},{"location":"users/datum-filters/virtual-meter/#expressions","title":"Expressions","text":"Expressions can be configured to calculate the output meter datum property, instead of using the default averaging algorithm. If an expression configuration exists with a Property that matches a configured (or implied) meter configuration Reading Property, then the expression will be invoked to generate the new meter reading value. See the Expressions guide for general expression language reference.
Warning
It is important to remember that the expression must calculate the next meter reading value. Typically this means it will calculate some differential value based on the amount of time that has elapsed and add that to the previous meter reading value.
"},{"location":"users/datum-filters/virtual-meter/#expression-root-object","title":"Expression root object","text":"The root object is a virtual meter expression object that lets you treat all datum properties, and filter parameters, as expression variables directly, along with the following properties:
Property Type Description config
VirtualMeterConfig
A VirtualMeterConfig
object for the virtual meter configuration the expression is evaluating for. datum
GeneralNodeDatum
A Datum
object, populated with data from all property and virtual meter configurations. props
Map<String,Object>
Simple Map based access to the properties in datum
, and transform parameters, to simplify expressions. currDate
long
The current datum timestamp, as a millisecond epoch number. prevDate
long
The previous datum timestamp, as a millisecond epoch number. timeUnits
decimal
A decimal number of the difference between currDate
and prevDate
in the virtual meter configuration's Time Unit, rounded to at most 12 decimal digits. currInput
decimal
The current input property value. prevInput
decimal
The previous input property value. inputDiff
decimal
The difference between the currInput
and prevInput
values. prevReading
decimal
The previous output meter property value. The following methods are available:
Function Arguments Result Description has(name)
String
boolean
Returns true
if a property named name
is defined. timeUnits(scale)
int
decimal
Like the timeUnits
property but rounded to a specific number of decimal digits."},{"location":"users/datum-filters/virtual-meter/#expression-example-time-of-use-tariff-reading","title":"Expression example: time of use tariff reading","text":"Imagine you'd like to track a time-of-use cost associated with the energy readings captured by an energy meter. The Time-based Tariff Datum Filter could be used to add a tou
property to each datum, and then a virtual meter expression can be used to calculate a cost
reading property. The cost
property will be an accumulating property like any meter reading, so when SolarNetwork aggregates its value over time you will see the effective cost over each aggregate time period.
Here is a screen shot of the settings used for this scenario (note how the Reading Property value matches the Expression Property value):
The important settings to note are:
Setting Notes Virtual Meter - Property The input datum property is set to wattHours
because we want to track changes in this property over time. Virtual Meter - Property Type We use Accumulating
here because that is the type of property wattHours
is. Virtual Meter - Reading Property The output reading property name. This must match the Expression - Property setting. Expression - Property This must match the Virtual Meter - Reading Property we want to evaluate the expression for. Expression - Property Type Typically this should be Accumulating
since we are generating a meter reading style property. Expression - Expression The expression to evaluate. This expression looks for the tou
property and when found the meter reading is incremented by the difference between the current and previous input wattHours
property values multiplied by tou
. If tou
is not available, then the previous meter reading value is returned (leaving the reading unchanged). Assuming a datum sample with properties like the following:
Property Value tou
11.00
currDate
1621380669005
prevDate
1621380609005
timeUnits
0.016666666667
currInput
6095574
prevInput
6095462
inputDiff
112
prevReading
1022.782
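The sample values above can be checked with ordinary arithmetic; this short sketch (hypothetical Python, with variable names mirroring the expression properties) reproduces the tabulated values:

```python
# Sample datum values from the table above
tou = 11.00
curr_date = 1621380669005
prev_date = 1621380609005
curr_input = 6095574
prev_input = 6095462
prev_reading = 1022.782

# "Hours" time unit: elapsed milliseconds divided by ms per hour,
# rounded to at most 12 decimal digits as the filter does
time_units = round((curr_date - prev_date) / 3_600_000, 12)
input_diff = curr_input - prev_input

assert time_units == 0.016666666667         # one minute, in hours
assert input_diff == 112                    # Wh consumed in the period
assert round(input_diff / 1000, 3) == 0.112 # converted to kWh
assert round(input_diff / 1000 * tou, 3) == 1.232             # cost this period
assert round(prev_reading + input_diff / 1000 * tou, 3) == 1024.014  # new reading
```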
Then here are some example expressions and the results they would produce:
Expression Result Comment inputDiff / 1000
0.112
Convert the input Wh property difference to kWh. inputDiff / 1000 * tou
1.232
Multiply the input kWh by the $/kWh tariff value to calculate the cost for the elapsed time period. prevReading + (inputDiff / 1000 * tou)
1,024.014
Add the additional cost to the previous meter reading value to reach the new meter value."},{"location":"users/setup-app/","title":"Setup App","text":"The SolarNode Setup App allows you to manage SolarNode through a web browser.
To access the Setup App, you need to know the network address of your SolarNode. In many cases you can try accessing http://solarnode/. If that does not work, you need to find the network address SolarNode is using.
Here is an example screen shot of the SolarNode Setup App:
"},{"location":"users/setup-app/certificates/","title":"Certificates","text":"TODO
"},{"location":"users/setup-app/home/","title":"Home","text":"The Home page provides you with some links to resources and shows live datum-collecting activity.
The SolarNode home page
As datum are collected on the node, they will appear in the Datum Properties section:
"},{"location":"users/setup-app/login/","title":"Login","text":"You must log in to SolarNode to access its functions. The login credentials will have been created when you first set up SolarNode and associated it with your SolarNetwork account. The default Username will be your SolarNetwork account email address, and the password will have been randomly generated and shown to you.
Tip
You can change your SolarNode username and password after logging in. Note these credentials are not related, or tied to, your SolarNetwork login credentials.
"},{"location":"users/setup-app/plugins/","title":"Plugins","text":"TODO
"},{"location":"users/setup-app/profile/","title":"Profile","text":"The profile menu in the top-right of the Setup App gives you access to change your password, change your username, log out, restart, and reset SolarNode.
Tip
Your SolarNode credentials are not related, or tied to, your SolarNetwork login credentials. Changing your SolarNode username or password does not change your SolarNetwork credentials.
The profile menu in SolarNode
"},{"location":"users/setup-app/profile/#change-password","title":"Change Password","text":"Choosing the Change Password menu item will take you to a form for changing your password. Fill in your current password and then your new password, then click the Submit Password button.
The Change Password form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
"},{"location":"users/setup-app/profile/#change-username","title":"Change Username","text":"Choosing the Change Username menu item will take you to a form for changing your SolarNode username. Fill in your current password and your new username, then click the Change Username button.
The Change Username form
As a result, you will stay on the same page, but a success (or error) message will be shown above the form:
"},{"location":"users/setup-app/profile/#logout","title":"Logout","text":"Choosing the Logout menu item will immediately end your SolarNode session and log you out. You will need to log in again to use the Setup App further.
"},{"location":"users/setup-app/profile/#restart","title":"Restart","text":"You can either restart or reboot SolarNode from the Restart SolarNode menu. A restart means the SolarNode app will restart, while a reboot means the entire SolarNodeOS device will shut down and boot up again (restarting SolarNode along the way).
You might need to restart SolarNode to pick up new plugins you've installed, and you might need to reboot SolarNode if you've attached new sensors or other devices that require operating system support.
The Restart SolarNode menu brings up this dialog.
"},{"location":"users/setup-app/profile/#reset","title":"Reset","text":"You can perform a \"factory reset\" of SolarNode to remove all your custom settings, certificate, login credentials, and so on. You also have the option to preserve some SolarNodeOS settings like WiFi credentials if you like.
The Reset SolarNode menu brings up this dialog.
"},{"location":"users/setup-app/settings/","title":"Settings","text":"The Settings section in SolarNode Setup is where you can configure all available SolarNode settings.
The section is divided into the following pages:
This page allows you to backup and restore the configuration of your SolarNode.
"},{"location":"users/setup-app/settings/backups/#settings-backup-restore","title":"Settings Backup & Restore","text":"The Settings Backup & Restore section provides a way to manage Settings Files and Settings Resources, both of which are backups for the configured settings in SolarNode.
Warning
Settings Files and Settings Resources do not include the node's certificate, login credentials, or custom plugins. See the Full Backup & Restore section for managing \"full\" backups that do include those items.
The Export button allows you to download a Settings File with the currently active configuration.
The Import button allows you to upload a previously-downloaded Settings File.
The Settings Resource menu allows you to download specialized settings files, offered by some components in SolarNode. For example the Modbus Device Datum Source plugin offers a specialized CSV file format to make configuring those components easier.
The Auto backups area will have a list of links, each of which will let you download a Settings File that SolarNode automatically created. Each link shows you the date the settings backup was created.
"},{"location":"users/setup-app/settings/backups/#full-backup-restore","title":"Full Backup & Restore","text":"The Full Backup & Restore section lets you manage SolarNode \"full\" backups. Each full backup contains a snapshot of the settings you have configured, the node's certificate, login credentials, custom plugins, and more.
The Backup Service shows a list of the available Backup Services. Each service has its own settings that must be configured for the service to operate. After changing any of the selected service's settings, click the Save Settings button to save those changes.
The Backup button allows you to create a new backup.
The Backups menu allows you to download or restore any available backup.
The Import button allows you to upload a previously downloaded backup file.
"},{"location":"users/setup-app/settings/backups/#backup-services","title":"Backup Services","text":"SolarNode supports configurable Backup Service plugins to manage the storage of backup resources.
"},{"location":"users/setup-app/settings/backups/#file-system-backup-service","title":"File System Backup Service","text":"The File System Backup Service is the default Backup Service provided by SolarNode. It saves the backup onto the node itself. In order to be able to restore your settings if the node is damaged or lost, you must download a copy of a backup using the Download button, and save the file to a safe place.
Warning
If you do not download a copy of a backup, you run the risk of losing your settings and node certificate, making it impossible to restore the node in the event of a catastrophic hardware failure.
The configurable settings of the File System Backup Service are:
Setting Description Backup Directory The folder (on the node) where the backups will be saved. Copies The number of backup copies to keep, before deleting the oldest backup."},{"location":"users/setup-app/settings/backups/#s3-backup-service","title":"S3 Backup Service","text":"The S3 Backup Service creates cloud-based backups in AWS S3 (or any compatible provider). You must configure the credentials and S3 location details to use before any backups can be created.
Note
The S3 Backup Service requires the S3 Backup Service Plugin.
The configurable settings of the S3 Backup Service are:
Setting Description AWS Token The AWS access token to authenticate with. AWS Secret The AWS access token secret to authenticate with. AWS Region The name of the Amazon region to use, for example us-west-2
. S3 Bucket The name of the S3 bucket to use. S3 Path An optional root path to use for all backup data (typically a folder location). Storage Class A supported storage class, such as STANDARD (the default), STANDARD_IA
, INTELLIGENT_TIERING
, REDUCED_REDUNDANCY
, and so on. Copies The number of backup copies to keep, before deleting the oldest backup. Cache Seconds The amount of time to cache backup metadata such as the list of available backups, in seconds."},{"location":"users/setup-app/settings/components/","title":"Components","text":"The Components page lists all the configurable multi-instance components available on your SolarNode. Multi-instance means you can configure any number of a given component, each with their own settings.
For example imagine you want to collect data from a power meter, solar inverter, and weather station, all of which use the Modbus protocol. To do that you would configure three instances of the Modbus Device component, one for each device.
Use the Manage button for any listed component to add or remove instances of that component.
An instance count badge appears next to any component with at least one instance configured.
"},{"location":"users/setup-app/settings/components/#manage-component","title":"Manage Component","text":"The component management page is shown when you click the Manage button for a multi-instance component. Each component instance's settings are independent, allowing you to integrate with multiple copies of a device or service.
For example if you connected a Modbus power meter and a Modbus solar inverter to a node, you would create two Modbus Device component instances, and configure them with settings appropriate for each device.
The component management screen allows you to add, update, and remove component instances.
"},{"location":"users/setup-app/settings/components/#add-new-instance","title":"Add new instance","text":"Add new component instances by clicking the Add new X button in the top-right, where X is the name of the component you are managing. You will be given the opportunity to assign a unique identifier to the new component instance:
When creating a new component instance you can provide a short name to identify it with.
When you add more than one component instance, the identifiers appear as clickable buttons that allow you to switch between the setting forms for each component.
Component instance buttons let you switch between each component instance.
"},{"location":"users/setup-app/settings/components/#saving-changes","title":"Saving changes","text":"Each setting will include a button that will show you a brief description of that setting.
Click for brief setting information.
After making any change, an Active value label will appear, showing the currently active value for that setting.
After making changes to any component instance's settings, click the Save All Changes button in the top-left to commit those changes.
Save All Changes works across all component instances
You can safely switch between and make changes on multiple component instance settings before clicking the Save All Changes button: your changes across all instances will be saved.
"},{"location":"users/setup-app/settings/components/#remove-or-reset-instances","title":"Remove or reset instances","text":"At the bottom of each component instance are buttons that let you delete or reset that component instance.
Buttons to delete or reset component instance.
The Delete button will remove that component instance from appearing, however the settings associated with that instance are preserved. If you re-add an instance with the same identifier then the previous settings will be restored. You can think of the Delete button as disabling the component, giving you the option to \"undo\" the deletion if you like.
The Restore button will reset the component to its factory defaults, removing any settings you have customized on that instance. The instance remains visible and you can re-configure the settings as needed.
"},{"location":"users/setup-app/settings/components/#remove-all-instances","title":"Remove all instances","text":"The Remove all button in the top-right of the page allows you to remove all component instances, including any customized settings on those instances.
Warning
The Remove all action will delete all your customized settings for all the component instances you are managing. When finished it will be as if you never configured this component before.
Remove all instances with the \"Remove all\" button.
You will be asked to confirm removing all instances:
Confirming the \"Remove all\" action.
"},{"location":"users/setup-app/settings/datum-filters/","title":"Datum Filters","text":"Datum Filters are services that manipulate datum generated by SolarNode plugins before they are uploaded to SolarNet. See the general Datum Filters section for more information about how datum filters work and what they are used for.
"},{"location":"users/setup-app/settings/datum-filters/#global-datum-filters","title":"Global Datum Filters","text":"Global Datum Filters are applied to datum just before posting to SolarNetwork. Once an instance is created, it is automatically active and will be applied to datum. This differs from User Datum Filters, which must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain.
Click the Manage button next to any Global Datum Filter component to create, update, and remove instances of that filter.
"},{"location":"users/setup-app/settings/datum-filters/#datum-queue","title":"Datum Queue","text":"All datum generated by SolarNode plugins are added to the Datum Queue for processing. The datum are processed in the order they are added to the queue. Datum Filters are applied to each datum, each filter's result passed to the next available filter until all filters have been applied.
The Datum Queue section of the Datum Filters page shows you some processing statistics and has a couple of settings you can change:
Setting Description Delay The minimum amount of time to delay processing datum after they have been added to the queue, in milliseconds. A small amount of delay allows parallel datum collection to get processed more reliably in time-based order. The default is 200 ms and usually does not need to be changed. Datum Filter The Service Name of a Datum Filter component to process datum with. See below for more information. The Datum Filter setting allows you to configure a single Datum Filter to apply to every datum captured in SolarNode. Since you can only configure one filter, it is very common to configure a Datum Filter Chain, where you can then configure any number of other filters to apply.
"},{"location":"users/setup-app/settings/datum-filters/#global-datum-filter-chain","title":"Global Datum Filter Chain","text":"The Global Datum Filter Chain provides a way to apply explicit User Datum Filters to datum just before posting to SolarNetwork.
Setting Description Active Global Filters A read-only list of any created Global Datum Filter component Service Name values. These filters are automatically applied, without needing to explicitly reference them in the Datum Filters list. Available User Filters A read-only list of Service Name values of User Datum Filter components that have been configured. You can copy any value from this list and paste it into the Datum Filters list to activate that filter. Datum Filters The list of Service Name values of User Datum Filter components to apply to datum."},{"location":"users/setup-app/settings/datum-filters/#user-datum-filters","title":"User Datum Filters","text":"User Datum Filters are not applied automatically: they must be explicitly added to a service to be used, either directly or indirectly with a Datum Filter Chain. This differs from Global Datum Filters which are automatically applied to datum just before being uploaded to SolarNet.
Click the Manage button next to any User Datum Filter component to create, update, and remove instances of that filter.
"},{"location":"users/setup-app/settings/logging/","title":"Logging","text":"The SolarNode UI supports configuring logger levels dynamically, without having to change the logging configuration file.
Warning
When SolarNode restarts all changes made in the Logger UI will be lost and the logger configuration will revert to whatever is configured in the logging configuration file.
The Logging page lists all the configured logger levels and lets you add new loggers and edit the existing ones using a simple form.
"},{"location":"users/setup-app/settings/op-modes/","title":"Operational Modes","text":"The SolarNode UI will show the list of active Operational Modes on the Settings > Operational Modes page. Click the + button to activate modes, and the remove button next to an active mode to deactivate it.
The main Settings page also shows a read-only view of the active modes:
"},{"location":"users/setup-app/settings/services/","title":"Services","text":"Configurable services that are not Components appear on the Services page.
Each setting will include a help button that will show you a brief description of that setting.
Click for brief setting information.
After making any change, an Active value label will appear, showing the currently active value for that setting.
In order to save your changes, you must click the Save All Changes button at the top of the page.
"},{"location":"users/setup-app/tools/","title":"Tools","text":"TODO
"},{"location":"users/setup-app/tools/command-console/","title":"Command Console","text":"SolarNode includes a Command Console page where troubleshooting commands from supporting plugins are displayed. The page shows a list of available command topics and lets you toggle the inclusion of each topic's commands at the bottom of the page.
"},{"location":"users/setup-app/tools/command-console/#modbus-commands","title":"Modbus Commands","text":"The Modbus TCP Connection and Modbus Serial Connection components support publishing mbpoll commands under a modbus command topic. The mbpoll
utility is included in SolarNodeOS; if not already installed you can install it by logging in to the SolarNodeOS shell and running the following command:
sudo apt install mbpoll\n
Modbus command logging must be enabled on each Modbus Connection component by toggling the CLI Publishing setting on.
Once CLI Publishing has been enabled, every Modbus request made on that connection will generate an equivalent mbpoll
command, and those commands will be shown on the Command Console.
You can copy any logged command and paste that into a SolarNodeOS shell to execute the Modbus request and see the results.
mbpoll -g -0 -1 -m rtu -b 4800 -s 1 -P none -a 1 -o 5 -t 4:hex -r 0 -c 2 /dev/tty.usbserial-FTYS9FWO\n-- Polling slave 1...\n[0000]: 0x00FC\n[0001]: 0x0A1F\n
"},{"location":"users/setup-app/tools/controls/","title":"Controls","text":"TODO
"},{"location":"users/sysadmin/","title":"System Administration","text":"SolarNode runs on SolarNodeOS, a Debian Linux-based operating system. If you are already familiar with Debian Linux, or one of the other Linux distributions built from Debian like Ubuntu Linux, you will find it pretty easy to get around in SolarNodeOS.
"},{"location":"users/sysadmin/#system-user-account","title":"System User Account","text":"SolarNodeOS ships with a solar
user account that you can use to log into the operating system. The default password is solar
but may have been changed by a system administrator.
Warning
The solar
user account is not related to the account you log into the SolarNode Setup App with.
To change the system user account's password, use the passwd
command.
$ passwd\nChanging password for solar.\nCurrent password:\nNew password:\nRetype new password:\npasswd: password updated successfully\n
Tip
Changing the solar
user's password is highly recommended when you first deploy a node.
Some commands require administrative permission. The solar
user can execute arbitrary commands with administrative permission by prefixing the command with sudo
. For example the reboot
command will reboot SolarNodeOS, but requires administrative permission.
$ sudo reboot\n
The sudo
command will prompt you for the solar
user's password and then execute the given command as the administrator user root
.
The solar
user can also become the root
administrator user by way of the su
command:
$ sudo su -\n
Once you have become the root
user you no longer need to use the sudo
command, as you already have administrative permissions.
SolarNodeOS comes with an SSH service active, which allows you to remotely connect and access the command line, using any SSH client.
"},{"location":"users/sysadmin/date-time/","title":"Date and Time","text":"SolarNodeOS includes date and time management functions through the timedatectl command. Run timedatectl status
to view information about the current date and time settings.
$ timedatectl status\n Local time: Fri 2023-05-26 03:41:42 BST\n Universal time: Fri 2023-05-26 02:41:42 UTC\n RTC time: n/a\n Time zone: Europe/London (BST, +0100)\nSystem clock synchronized: yes\n NTP service: active\n RTC in local TZ: no\n
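When scripting around timedatectl, the status output can be parsed with standard text tools. A minimal sketch — the sample line below is hard-coded from the output above for illustration; on a live node you would pipe `timedatectl status` instead:

```shell
# Extract the time zone name from a line of `timedatectl status` output.
# On a node you would use: timedatectl status | grep 'Time zone'
line='           Time zone: Europe/London (BST, +0100)'
echo "$line" | sed -E 's/.*Time zone: ([^ ]+).*/\1/'
# prints: Europe/London
```

Recent systemd versions also provide a machine-readable form, `timedatectl show --property=Timezone --value`, which avoids text parsing entirely.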
"},{"location":"users/sysadmin/date-time/#changing-the-local-time-zone","title":"Changing the local time zone","text":"SolarNodeOS uses the UTC
time zone by default. If you would like to change this, use the timedatectl set-timezone command:
$ sudo timedatectl set-timezone Pacific/Auckland\n
You can list the available time zone names by running timedatectl list-timezones
.
SolarNodeOS uses the systemd-timesyncd service to synchronize the node's clock with internet time servers. Normally no configuration is necessary. You can check the status of the network time synchronization with timedatectl like:
$ timedatectl status\n Local time: Fri 2023-05-26 03:41:42 BST\n Universal time: Fri 2023-05-26 02:41:42 UTC\n RTC time: n/a\n Time zone: Europe/London (BST, +0100)\nSystem clock synchronized: yes\n NTP service: active\n RTC in local TZ: no\n
Warning
For internet time synchronization to work, SolarNode needs to access Network Time Protocol (NTP) servers, using UDP over port 123.
"},{"location":"users/sysadmin/date-time/#network-time-server-configuration","title":"Network time server configuration","text":"The NTP servers that SolarNodeOS uses are configured in the /etc/systemd/timesyncd.conf file. The default configuration uses a pool of Debian servers, which should be suitable for most nodes. If you would like to change the configuration, edit the timesyncd.conf
file and change the NTP=
line, for example
[Time]\nNTP=my.ntp.example.com\n
"},{"location":"users/sysadmin/date-time/#setting-the-date-and-time","title":"Setting the date and time","text":"In order to manually set the date and time, NTP time synchronization must be disabled with timedatectl set-ntp false
. Then you can run timedatectl set-time
to set the date:
$ sudo timedatectl set-ntp false\n$ sudo timedatectl set-time \"2023-05-26 17:30:00\"\n
If you then look at the timedatectl status
you will see that NTP has been disabled:
$ timedatectl\n Local time: Fri 2023-05-26 17:30:30 NZST\n Universal time: Fri 2023-05-26 05:30:30 UTC\n RTC time: n/a\n Time zone: Pacific/Auckland (NZST, +1200)\nSystem clock synchronized: no # (1)!\nNTP service: inactive # (2)!\nRTC in local TZ: no\n
You can re-enable NTP time synchronization like this:
Enabling NTP time synchronization$ sudo timedatectl set-ntp true\n
"},{"location":"users/sysadmin/networking/","title":"Networking","text":"SolarNodeOS uses the systemd-networkd service to manage network devices and their settings. A network device relates to a physical network hardware device or a software networking component, as recognized and named by the operating system. For example, the first available ethernet device is typically named eth0
and the first available WiFi device wlan0
.
Network configuration is stored in .network
files in the /etc/systemd/network
directory. SolarNodeOS comes with default support for ethernet and WiFi network devices.
The default 10-eth.network
file configures the default ethernet network eth0
to use DHCP to automatically obtain a network address, routing information, and DNS servers to use.
SolarNodeOS networks are configured to use DHCP by default. If you need to re-configure a network to use DHCP, change the configuration to look like this:
Ethernet network with DHCP configuration[Match]\nName=eth0\n\n[Network]\nDHCP=yes\n
Use a Name value specific to your network.
"},{"location":"users/sysadmin/networking/#static-network-configuration","title":"Static network configuration","text":"If you need to use a static network address, instead of DHCP, edit the network configuration file (for example, the 10-eth.network
file for the ethernet network), and change it to look like this:
[Match]\nName=eth0\n\n[Network]\nDNS=1.1.1.1\n\n[Address]\nAddress=192.168.3.10/24\n\n[Route]\nGateway=192.168.3.1\n
Use Name, DNS, Address, and Gateway values specific to your network. The same static configuration for a single address can also be specified in a slightly more condensed form, moving everything into the [Network]
section:
[Match]\nName=eth0\n\n[Network]\nAddress=192.168.3.10/24\nGateway=192.168.3.1\nDNS=1.1.1.1\n
"},{"location":"users/sysadmin/networking/#wifi-network-configuration","title":"WiFi network configuration","text":"The default 20-wlan.network
file configures the default WiFi network wlan0
to use DHCP to automatically obtain a network address, routing information, and DNS servers to use. To configure the WiFi network SolarNode should connect to, run this command:
sudo dpkg-reconfigure sn-wifi\n
You will then be prompted to supply the following WiFi settings:
NZ
Note about WiFi support
WiFi support is provided by the sn-wifi
package, which may not be installed. See the Package Maintenance section for information about installing packages.
For initial setup of the WiFi settings on a SolarNode, it can be helpful for SolarNode to create its own WiFi network, as an access point. The sn-wifi-autoap@wlan0
service can be used for this. When enabled, it will monitor the WiFi network status, and when the WiFi connection fails for any reason it will enable a SolarNode
WiFi network using a gateway IP address of 192.168.16.1
. Thus when the SolarNode access point is enabled, you can connect to that network from your own device and reach the Setup App at http://192.168.16.1/
or the command line via ssh solar@192.168.16.1
.
The default 21-wlan-ap.network
file configures the default WiFi network wlan0
to act as an Access Point.
This service is not enabled by default. To enable it, run the following:
sudo systemctl enable --now sn-wifi-autoap@wlan0\n
Once enabled, if SolarNode cannot connect to the configured WiFi network, it will create its own SolarNode
network. By default the password for this network is solarnode
. The Access Point network configuration is defined in the /etc/network/wpa_supplicant-wlan0.conf
file, in a section like this:
### access-point mode\nnetwork={\n ssid=\"SolarNode\"\n mode=2\n key_mgmt=WPA-PSK\n psk=\"solarnode\"\n frequency=2462\n}\n
"},{"location":"users/sysadmin/networking/#firewall","title":"Firewall","text":"SolarNodeOS uses the nftables system to provide an IP firewall to SolarNode. By default only the following incoming TCP ports are allowed:
Port Description 22 SSH access 80 HTTP SolarNode UI 8080 HTTP SolarNode UI alternate port"},{"location":"users/sysadmin/networking/#open-additional-ip-ports","title":"Open additional IP ports","text":"You can edit the /etc/nftables.conf
file to add additional open IP ports as needed. A good place to insert new rules is after the lines that open ports 80 and 8080:
# Allows HTTP\nadd rule ip filter INPUT tcp dport 80 counter accept\nadd rule ip filter INPUT tcp dport 8080 counter accept\n
For example, if you would like to open UDP port 50222 to support the Weatherflow Tempest weather station, add the following after the above lines:
# Allow WeatherFlow Tempest messages\nadd rule ip filter INPUT udp dport 50222 counter accept\n
"},{"location":"users/sysadmin/networking/#reload-firewall-rules","title":"Reload firewall rules","text":"If you make changes to the firewall rules in /etc/nftables.conf
, run the following command to reload the firewall configuration:
sudo systemctl reload nftables\n
"},{"location":"users/sysadmin/packages/","title":"Package Maintenance","text":"SolarNodeOS supports a wide variety of software packages. You can install new packages as well as apply package updates as they become available. The apt command performs these tasks.
"},{"location":"users/sysadmin/packages/#update-package-metadata","title":"Update package metadata","text":"For SolarNodeOS to know what packages, or package updates, are available, you need to periodically update the available package information. This is done with the apt update
command:
sudo apt update # (1)!\n
sudo
command runs other commands with administrative privileges. It will prompt you for your user account password (typically the solar
user).Use the apt list
command to list the installed packages:
apt list --installed\n
"},{"location":"users/sysadmin/packages/#update-packages","title":"Update packages","text":"To see if there are any package updates available, run apt list
like this:
apt list --upgradable\n
If there are updates available, that will show them. You can apply all package updates with the apt upgrade
command, like this:
sudo apt upgrade\n
If you want to install an update for a specific package, use the apt install
command instead.
Tip
The apt upgrade
command will update existing packages and install packages that are required by those packages, but it will never remove an existing package. Sometimes you will want to allow packages to be removed during the upgrade process; to do that use the apt full-upgrade
command.
Use the apt search
command to search for packages. By default this will match package names and their descriptions. You can search just for package names by including a --names-only
argument.
# search for \"name\" across package names and descriptions\napt search name\n\n# search for \"name\" across package names only\napt search --names-only name\n\n# multiple search terms are logically \"and\"-ed together\napt search name1 name2\n
"},{"location":"users/sysadmin/packages/#install-packages","title":"Install packages","text":"The apt install
command will install an available package, or an individual package update.
sudo apt install mypackage\n
"},{"location":"users/sysadmin/packages/#remove-packages","title":"Remove packages","text":"You can remove packages with the apt remove
command. That command will preserve any system configuration associated with the package(s); if you would like to also remove that you can use the apt purge
command.
sudo apt remove mypackage\n\n# use `purge` to also remove configuration\nsudo apt purge mypackage\n
"},{"location":"users/sysadmin/solarnode-service/","title":"SolarNode Service","text":"SolarNode is managed as a systemd service. There are some shortcut commands to more easily manage the service.
Command Descriptionsn-start
Start the SolarNode service. sn-restart
Restart the SolarNode service. sn-status
View status information about the SolarNode service (see if it is running or not). sn-stop
Stop the SolarNode service. The sn-stop
command requires administrative permissions, so you may be prompted for your system account password (usually the solar
user's password).
You can modify the environment variables passed to the SolarNode service, as well as modify the Java runtime options used. You may want to do this, for example, to turn on Java remote debugging support or to give the SolarNode process more memory.
The systemd solarnode.service
unit will load the /etc/solarnode/env.conf
environment configuration file if it is present. You can define arbitrary environment variables using a simple key=value
syntax.
SolarNodeOS ships with a /etc/solarnode/env.conf.example
file you can use for reference.
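As a sketch, an env.conf enabling Java remote debugging and a larger heap might look like the following. The variable name and option values here are illustrative assumptions — consult the shipped env.conf.example for the names SolarNode actually honors:

```
# /etc/solarnode/env.conf — illustrative values only
# JAVA_OPTS is an assumed variable name; verify against env.conf.example
JAVA_OPTS="-Xmx512m -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:9142"
```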
The sn-solarssh
package in SolarNodeOS provides a solarssh
command-line tool for managing SolarSSH connections.
To view the node's public SSH key, you can execute solarssh showkey
.
$ solarssh showkey\nssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG7DWIuC2MVHy/gfD32sCayoVFpGVbZ8VXuQubmKjwyx SolarNode\n
"},{"location":"users/sysadmin/solarssh/#list-solarssh-sessions","title":"List SolarSSH sessions","text":"Run solarssh list
to view all available SolarSSH sessions.
$ solarssh list\nb0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340\n
"},{"location":"users/sysadmin/solarssh/#view-solarssh-session-status","title":"View SolarSSH session status","text":"Using the output of solarssh list
you can view the SSH connection status of a specific SSH session with solarssh status
, like this:
$ solarssh -c b0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340 status\nactive\n
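The comma-delimited session value can be split with standard shell tools if a script needs the individual fields. A minimal sketch — the field meanings (session ID, SSH host, SSH port, reverse-SSH port) are inferred from the example above:

```shell
# Split a SolarSSH session string into its fields.
# Format (inferred): sessionId,host,port,reversePort
session='b0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340'
echo "$session" | awk -F, '{print "host: " $2 ", reverse port: " $4}'
# prints: host: ssh.solarnetwork.net, reverse port: 43340
```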
"},{"location":"users/sysadmin/solarssh/#stop-solarssh-session","title":"Stop SolarSSH session","text":"You can force a SolarSSH session to end using solarssh stop
, like this:
$ solarssh -c b0ae36e0-06ae-4d3d-b34e-9bf2ca8049f1,ssh.solarnetwork.net,8022,43340 stop\n
"}]}
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 9e611e1..5804e87 100644
Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ
diff --git a/users/setup-app/home/index.html b/users/setup-app/home/index.html
index b304352..c849c57 100644
--- a/users/setup-app/home/index.html
+++ b/users/setup-app/home/index.html
@@ -631,8 +631,17 @@
TODO
- +The Home page provides you with some links to resources and shows live datum-collecting +activity.
+ +As datum are collected on the node, they will appear in the Datum Properties +section:
+Here is an example screen shot of the SolarNode Setup App:
- + diff --git a/users/setup-app/login/index.html b/users/setup-app/login/index.html index ac55d11..f81454a 100644 --- a/users/setup-app/login/index.html +++ b/users/setup-app/login/index.html @@ -634,14 +634,16 @@You must log in to SolarNode to access its functions. The login credentials will have been created when you first set up SolarNode and associated it with your SolarNetwork account. The default Username -will be your SolarNetwork account email address, and the password will have been randomly generated +will be your SolarNetwork account email address, and the password will have been randomly generated and shown to you.
Tip
You can change your SolarNode username and password after logging in. Note these credentials are not related, or tied to, your SolarNetwork login credentials.