-
Hi all. In OTAA mode, only the DevNonce and the JoinNonce need to be stored in EEPROM after a successful Join. The LoRaWAN stack (4.5.1) reads and writes "all data" very frequently (about once per uplink), and all data amounts to about 1452 bytes. Is there a possibility to improve this, saving only the needed data and reducing the frequency? And regarding ABP mode, is there a possibility to reduce the frequency of storing all data, for example every 100 or 1000 uplinks, as in the sketch below? What could happen if we lose the context in ABP (reset, etc.)? Thank you for your reply.
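For example, a rough sketch of the kind of gating I have in mind (NvmDataMgmtStore is the helper from the provided examples; the counter logic and function names here are hypothetical):

    #include <stdint.h>
    #include "NvmDataMgmt.h"   // NvmDataMgmtStore( ), from the common application code

    #define NVM_STORE_PERIOD 100   // store every 100 uplinks (illustrative)

    static uint32_t UplinksSinceStore = 0;

    // Hypothetical hook, called by the application after each uplink.
    void OnUplinkDone( void )
    {
        if( ++UplinksSinceStore >= NVM_STORE_PERIOD )
        {
            // Returns the number of bytes written to NVM.
            NvmDataMgmtStore( );
            UplinksSinceStore = 0;
        }
    }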
-
This subject has already been discussed on several Issues/Discussions.
In conclusion, we have already analyzed and tried to reduce the amount of data to be stored in the NVM; the current amount is what we believe to be the minimum. Nowadays, the MAC layer notifies the application layer so that it only stores what has changed since the last notification. The full amount is only written once (on first boot, or when the NVM data is reset); after that, around 80 bytes (EU868) are stored per uplink, although the amount can be bigger depending on the MAC commands sent by the Network Server.
In the provided example we decided to store the data each time the MAC layer notifies the application layer; however, nothing prevents you from not doing so (not recommended). One of the reasons we decided to update the NVM after each uplink is to be able to power off the end-device and simply continue normal operation once powered up (without joining the network again). In the provided example we use the MCU internal FLASH memory because it is the only non-volatile memory available on the platforms supported by this project's examples. There are a few possibilities to increase the NVM lifetime.
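As an illustration, a minimal sketch of how an application could consume these notifications and persist only the changed groups. It assumes the LoRaMacNvmData_t groups and LORAMAC_NVM_NOTIFY_FLAG_* masks from LoRaMac.h; the handler and helper names are hypothetical:

    #include <stddef.h>         // offsetof
    #include <stdint.h>
    #include "LoRaMac.h"        // LoRaMacNvmData_t, LORAMAC_NVM_NOTIFY_FLAG_* masks
    #include "eeprom-board.h"   // EepromMcuWriteBuffer

    static uint16_t DirtyFlags = 0;

    // Hypothetical callback, wired into the stack's NVM data-change
    // notification: just accumulate the flags, defer the actual write.
    void OnNvmDataChange( uint16_t notifyFlags )
    {
        DirtyFlags |= notifyFlags;
    }

    // Hypothetical helper, called from the main loop when it is safe to
    // block: writes only the context groups the MAC flagged as changed.
    void NvmStoreIfDirty( LoRaMacNvmData_t* nvm )
    {
        if( ( DirtyFlags & LORAMAC_NVM_NOTIFY_FLAG_CRYPTO ) != 0 )
        {
            // DevNonce and JoinNonce live in the crypto group.
            EepromMcuWriteBuffer( offsetof( LoRaMacNvmData_t, Crypto ),
                                  ( uint8_t* )&nvm->Crypto, sizeof( nvm->Crypto ) );
        }
        if( ( DirtyFlags & LORAMAC_NVM_NOTIFY_FLAG_MAC_GROUP2 ) != 0 )
        {
            EepromMcuWriteBuffer( offsetof( LoRaMacNvmData_t, MacGroup2 ),
                                  ( uint8_t* )&nvm->MacGroup2, sizeof( nvm->MacGroup2 ) );
        }
        // ...the remaining groups (MacGroup1, SecureElement, RegionGroup1/2,
        // ClassB) follow the same pattern...
        DirtyFlags = 0;
    }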
In the future, it would be nice if you could post this kind of question on the project's Discussions tab. It is a better place to engage in discussion, and then we can agree whether it is an issue or not.
-
Working on an STM32L082CZ-based device, which I implemented from the B-L072Z-LRWAN-1 example, my findings were as follows.
I found it was fairly trivial to optimise the implementation of EepromMcuWriteBuffer. Additionally, whilst the STM32L0 optimises the write process to avoid erasing when unnecessary, I found that a simple conditional that checks whether each word value differs before writing saved about 30% of the time. Finally, there appeared to be a glitch whereby the STM32L0 would report a Write Protection Error (FLASH_FLAG_WRPERR) prior to writing word values. I found reports of this on the ST Micro forums, but without a clear explanation of the cause. It didn't seem to occur when all writes were single bytes. To resolve it, I added a call to clear the WRPERR flag before each write process. I recorded timings confirming these savings; the revised implementation follows:
#include <string.h>   // memcpy, for the word-sized copy below

LmnStatus_t EepromMcuWriteBuffer( uint16_t addr, uint8_t *buffer, uint16_t size )
{
    LmnStatus_t status = LMN_STATUS_ERROR;

    assert_param( buffer != NULL );
    assert_param( size < ( DATA_EEPROM_BANK2_END - DATA_EEPROM_BASE ) );

    // Clear the spurious write-protection error some STM32L0 parts raise
    // before word programming (see note above).
    __HAL_FLASH_CLEAR_FLAG( FLASH_FLAG_WRPERR );

    if( HAL_FLASHEx_DATAEEPROM_Unlock( ) == HAL_OK )
    {
        CRITICAL_SECTION_BEGIN( );
        size_t offset = 0;
        bool ok = true;

        // Program byte-by-byte until the destination address is word-aligned.
        while( ok && ( offset < size ) &&
               ( ( ( addr + offset ) % sizeof( uint32_t ) ) != 0 ) )
        {
            ok = HAL_FLASHEx_DATAEEPROM_Program( FLASH_TYPEPROGRAMDATA_BYTE,
                                                 ( DATA_EEPROM_BASE + addr + offset ),
                                                 buffer[offset] ) == HAL_OK;
            offset += ok ? 1 : 0;
        }
        // Program whole words while at least one word remains, skipping any
        // word that already holds the target value (saves time and wear).
        while( ok && ( ( offset + sizeof( uint32_t ) ) <= size ) )
        {
            uint32_t word;
            // buffer may not be word-aligned; memcpy avoids the unaligned
            // read that would hard-fault on the Cortex-M0+.
            memcpy( &word, &buffer[offset], sizeof( word ) );
            if( word != *( uint32_t* )( DATA_EEPROM_BASE + addr + offset ) )
            {
                ok = HAL_FLASHEx_DATAEEPROM_Program( FLASH_TYPEPROGRAMDATA_WORD,
                                                     ( DATA_EEPROM_BASE + addr + offset ),
                                                     word ) == HAL_OK;
            }
            offset += ok ? sizeof( uint32_t ) : 0;
        }
        // Program any trailing bytes.
        while( ok && ( offset < size ) )
        {
            ok = HAL_FLASHEx_DATAEEPROM_Program( FLASH_TYPEPROGRAMDATA_BYTE,
                                                 ( DATA_EEPROM_BASE + addr + offset ),
                                                 buffer[offset] ) == HAL_OK;
            offset += ok ? 1 : 0;
        }
        CRITICAL_SECTION_END( );
        status = ok ? LMN_STATUS_OK : LMN_STATUS_ERROR;
    }
    HAL_FLASHEx_DATAEEPROM_Lock( );
    return status;
}

Whether this actually saves any write wear is not clear, but it certainly saves some processor time, which equates to a power saving, important on my battery-powered devices. If your device is battery powered, preserves RAM, and doesn't reset, then you may not in fact need NVM persistence at all. I'm putting this here as a potential improvement, and would be happy to make a PR if appropriate, although the linked issues/discussions suggest that this type of optimisation might be beyond the scope of the project.
-
I'd appreciate some clarification on when persistence is actually required. The application targets LoRaWAN 1.0.4 (monotonic DevNonce) with OTAA, and I'm currently testing the head of master in anticipation of a 4.7.0 release.
My application runs on a battery-powered device. Power consumption is such that the battery is likely to last 5+ years and defines the lifetime of the device. Under normal circumstances the device will not be power cycled or reset. We do have activated/deactivated states controlled by connection of the sensor, and the device only joins the network once activated. Normally it would never be deactivated, being left installed in the same place for its lifetime. If it is moved, sensor removal deactivates it and sensor insertion activates it and instigates a new join (as the device might have been moved or other network factors changed). The device remains in a low-power state when deactivated, and the DevNonce is therefore retained in RAM.
As such, my understanding is that there is no need for the entire session to be preserved in NVM, as it's preserved in RAM. Given that the device could be reset by someone interfering with the battery, or by the application doing a watchdog reset, it is necessary to preserve the DevNonce in NVM; without that, such a rare occurrence would risk excessive join attempts. It seems, therefore, that solely the DevNonce needs to be written (with appropriate CRC validation) to NVM, given that the device will join anew in the event of a reset/power cycle. This also drastically reduces the number of potential NVM writes, as well as the amount of data written.
The application checks network connectivity periodically (via LinkCheckReq) and, in the event of prolonged loss of connectivity, goes into a periodic join state that respects the principles of retransmission backoff. This is the only case where one can expect sustained DevNonce updates: once per join attempt, approximately 1-2 times per hour until connectivity is restored (power loss at the gateway, or network backhaul failure, being the most likely causes).
I appreciate that the provided application examples and the NvmDataMgmt module cover the lowest common denominator of devices that don't preserve RAM during low-power modes. It seems that this should be documented clearly, or alternatively that a build option be provided to select between full-context persistence and DevNonce-only persistence.
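For concreteness, a minimal sketch of the DevNonce-only persistence I have in mind (Crc32( ) is assumed to be the same utility NvmDataMgmt uses, and any 32-bit CRC would do; the record layout and EEPROM offset are my own):

    #include <stdbool.h>
    #include <stdint.h>
    #include "eeprom-board.h"   // EepromMcuReadBuffer/EepromMcuWriteBuffer
    #include "utilities.h"      // Crc32( )

    // Hypothetical record: just the DevNonce, guarded by a CRC so that a
    // blank or corrupt EEPROM is detectable on restore.
    typedef struct
    {
        uint16_t DevNonce;
        uint32_t Crc;
    } DevNonceNvm_t;

    #define DEVNONCE_NVM_ADDR 0   // illustrative offset

    bool DevNonceSave( uint16_t devNonce )
    {
        DevNonceNvm_t record = { .DevNonce = devNonce, .Crc = 0 };
        record.Crc = Crc32( ( uint8_t* )&record.DevNonce, sizeof( record.DevNonce ) );
        return EepromMcuWriteBuffer( DEVNONCE_NVM_ADDR, ( uint8_t* )&record,
                                     sizeof( record ) ) == LMN_STATUS_OK;
    }

    bool DevNonceRestore( uint16_t* devNonce )
    {
        DevNonceNvm_t record;
        if( EepromMcuReadBuffer( DEVNONCE_NVM_ADDR, ( uint8_t* )&record,
                                 sizeof( record ) ) != LMN_STATUS_OK )
        {
            return false;
        }
        if( record.Crc != Crc32( ( uint8_t* )&record.DevNonce, sizeof( record.DevNonce ) ) )
        {
            return false;   // unset or corrupt: caller starts from DevNonce = 0
        }
        *devNonce = record.DevNonce;
        return true;
    }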
Regardless of a change to the API, confirmation or critique of my approach would be appreciated.