Option -S for enabling write_mostly does not work #340

Closed
ThomasGoering opened this issue Aug 14, 2024 · 9 comments

@ThomasGoering

My DS920+ has two internal 14TB HDDs, two internal 2TB SSDs and two 1TB M.2 drives; the DX517 expansion unit has three 12TB HDDs.

I'm running your script with option -S to enable write_mostly on the slow internal drives (so that DSM runs from the internal SSDs), but I noticed that there was no output confirming that write_mostly was set. This is the output of the script (including some TEST_DEBUG output that I inserted):

Synology_HDD_db v3.5.97
DS920+ DSM 7.2.1-69057-5 
StorageManager 1.0.0-0017

ds920+_host_v7 version 8053

ds920+_host version 4021

Using options: --noupdate --ram -S --email
Running from: /volume3/scripts/syno_test.sh

HDD/SSD models found: 4
Red SA500 2.5 2TB,540400WD,2000 GB
WD120EFAX-68UNTN0,81.00A81,11999 GB
WD120EFBX-68B0EN0,85.00A85,11999 GB
WD140EFFX-68VBXN0,81.00A81,14000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

Expansion Unit models found: 1
DX517

Red SA500 2.5 2TB already exists in ds920+_host_v7.db
Red SA500 2.5 2TB already exists in ds920+_host.db
Red SA500 2.5 2TB already exists in ds920+_host.db.new
Red SA500 2.5 2TB already exists in dx517_v7.db
Red SA500 2.5 2TB already exists in dx517.db
Red SA500 2.5 2TB already exists in dx517.db.new
WD120EFAX-68UNTN0 already exists in ds920+_host_v7.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db.new
WD120EFAX-68UNTN0 already exists in dx517_v7.db
WD120EFAX-68UNTN0 already exists in dx517.db
WD120EFAX-68UNTN0 already exists in dx517.db.new
WD120EFBX-68B0EN0 already exists in ds920+_host_v7.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db.new
WD120EFBX-68B0EN0 already exists in dx517_v7.db
WD120EFBX-68B0EN0 already exists in dx517.db
WD120EFBX-68B0EN0 already exists in dx517.db.new
WD140EFFX-68VBXN0 already exists in ds920+_host_v7.db
WD140EFFX-68VBXN0 already exists in ds920+_host.db
WD140EFFX-68VBXN0 already exists in ds920+_host.db.new
WD140EFFX-68VBXN0 already exists in dx517_v7.db
WD140EFFX-68VBXN0 already exists in dx517.db
WD140EFFX-68VBXN0 already exists in dx517.db.new
Updated WD Red SN700 1000GB in ds920+_host_v7.db
WD Red SN700 1000GB already exists in ds920+_host.db
WD Red SN700 1000GB already exists in ds920+_host.db.new
Updated WD Red SN700 1000GB in ds920+_host_v7.db
WD Red SN700 1000GB already exists in ds920+_host.db
WD Red SN700 1000GB already exists in ds920+_host.db.new
TEST_DEBUG: idrive=sata1, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: idrive=sata2, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: idrive=sata3, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: idrive=sata4, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: internal_ssd_qty=4
TEST_DEBUG: internal_hdds=

Support disk compatibility already enabled.

Support memory compatibility already disabled.

Max memory already set to 20 GB.

NVMe support already enabled.

M.2 volume support already enabled.

Drive db auto updates already disabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.

The debug output was inserted in the else branch after line 1755. The line TEST_DEBUG: idrive=sata1, internal_drive= is printed right after internal_drive="$(echo "$idrive" | awk '{printf $4}')" by this statement: echo "TEST_DEBUG: idrive=$idrive, internal_drive=$internal_drive".

It looks like internal_drive is not expected to be empty. Am I missing something or do you need more details?
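
For reference, here is a minimal reproduction of why internal_drive ends up empty, assuming idrive holds only the device name (which is what the TEST_DEBUG output shows):

idrive=sata1
internal_drive="$(echo "$idrive" | awk '{printf $4}')"
echo "TEST_DEBUG: idrive=$idrive, internal_drive=$internal_drive"
# prints: TEST_DEBUG: idrive=sata1, internal_drive=
# because "sata1" has only one field, so awk's $4 is empty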

@007revad
Owner

007revad commented Aug 15, 2024

I believe I've found the issue. Can you test the fix?

Change lines 1759 and 1760 from this:

            internal_drive="$(echo "$idrive" | awk '{printf $4}')"
            if synodisk --isssd "$internal_drive" >/dev/null; then

to this:

            #internal_drive="$(echo "$idrive" | awk '{printf $4}')"
            if synodisk --isssd /dev/"${idrive:?}" >/dev/null; then
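(The ${idrive:?} expansion is a bash safeguard: if idrive is ever unset or empty the script stops with an error instead of passing a bare /dev/ path to synodisk.)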

and change line 1811 from this:

                internal_hdds+=("$internal_drive")

to this:

                internal_hdds+=("$idrive")

007revad added a commit that referenced this issue Aug 15, 2024
- Bug fix for "Enable write_mostly on slow internal drives so DSM runs from the fast internal drive(s)." Issue #340
@007revad
Owner

I've released v3.5.98, which fixes this issue.

https://github.com/007revad/Synology_HDD_db/releases

@ThomasGoering
Author

Great, this is the new output:

Synology_HDD_db v3.5.98
DS920+ DSM 7.2.1-69057-5 
StorageManager 1.0.0-0017

ds920+_host_v7 version 8053

ds920+_host version 4021

Using options: --noupdate --ram -S --email
Running from: /volume3/scripts/syno_hdd_db.sh

HDD/SSD models found: 4
Red SA500 2.5 2TB,540400WD,2000 GB
WD120EFAX-68UNTN0,81.00A81,11999 GB
WD120EFBX-68B0EN0,85.00A85,11999 GB
WD140EFFX-68VBXN0,81.00A81,14000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

Expansion Unit models found: 1
DX517

Added Red SA500 2.5 2TB to ds920+_host_v7.db
Edited unverified drives in ds920+_host_v7.db
Added Red SA500 2.5 2TB to ds920+_host.db
Added Red SA500 2.5 2TB to ds920+_host.db.new
Added Red SA500 2.5 2TB to dx517_v7.db
Edited unverified drives in dx517_v7.db
Added Red SA500 2.5 2TB to dx517.db
Added Red SA500 2.5 2TB to dx517.db.new
WD120EFAX-68UNTN0 already exists in ds920+_host_v7.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db.new
WD120EFAX-68UNTN0 already exists in dx517_v7.db
WD120EFAX-68UNTN0 already exists in dx517.db
WD120EFAX-68UNTN0 already exists in dx517.db.new
WD120EFBX-68B0EN0 already exists in ds920+_host_v7.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db.new
WD120EFBX-68B0EN0 already exists in dx517_v7.db
Added WD120EFBX-68B0EN0 to dx517.db
WD120EFBX-68B0EN0 already exists in dx517.db.new
Added WD140EFFX-68VBXN0 to ds920+_host_v7.db
Added WD140EFFX-68VBXN0 to ds920+_host.db
Added WD140EFFX-68VBXN0 to ds920+_host.db.new
WD140EFFX-68VBXN0 already exists in dx517_v7.db
WD140EFFX-68VBXN0 already exists in dx517.db
WD140EFFX-68VBXN0 already exists in dx517.db.new
Added WD Red SN700 1000GB to ds920+_host_v7.db
Added WD Red SN700 1000GB to ds920+_host.db
Added WD Red SN700 1000GB to ds920+_host.db.new
Updated WD Red SN700 1000GB in ds920+_host_v7.db
WD Red SN700 1000GB already exists in ds920+_host.db
WD Red SN700 1000GB already exists in ds920+_host.db.new

Setting internal HDDs state to write_mostly
WD140EFFX-68VBXN0
  sata3 DSM partition:  in_sync,write_mostly
  sata3 Swap partition: in_sync,write_mostly
WD140EFFX-68VBXN0
  sata4 DSM partition:  in_sync,write_mostly
  sata4 Swap partition: in_sync,write_mostly

Support disk compatibility already enabled.

Disabled support memory compatibility.

Set max memory to 20 GB.

NVMe support already enabled.

Enabled M.2 volume support.

Disabled drive db auto updates.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.

Looks good!

I have one more question about write_mostly. In #318 you wrote that it looks like the setting must be applied again after each boot. Is this the case?

If yes, then what is the recommended way to schedule the syno_hdd_db.sh script? I ask because I have found different instructions for scheduling it (at boot vs. at shutdown).

Does this difference only apply when syno_hdd_db.sh is used together with syno_enable_dedupe.sh?

@007revad
Owner

Is this the case?

No.

I just disabled my boot schedule for syno_hdd_db.sh and rebooted my DS720+. After the reboot I checked both drives to see if the HDD still had write_mostly set, and it did. So the write_mostly setting survives a reboot.

sata1 is the HDD and sata2 is the SSD

~# cat /sys/block/md0/md/dev-sata1p1/state
in_sync,write_mostly
~# cat /sys/block/md1/md/dev-sata1p2/state
in_sync,write_mostly

~# cat /sys/block/md0/md/dev-sata2p1/state
in_sync
~# cat /sys/block/md1/md/dev-sata2p2/state
in_sync

I'd still schedule syno_hdd_db.sh to run at boot so you don't have to remember to run it after a DSM update or Storage Manager package update.
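
For reference, the flag can also be toggled by hand through the same sysfs files shown above (this is standard Linux md behaviour, using sata1p1 only as an example):

~# echo writemostly > /sys/block/md0/md/dev-sata1p1/state
~# cat /sys/block/md0/md/dev-sata1p1/state
in_sync,write_mostly
~# echo -writemostly > /sys/block/md0/md/dev-sata1p1/state
~# cat /sys/block/md0/md/dev-sata1p1/state
in_sync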

@ThomasGoering
Author

Ok, I will schedule syno_hdd_db.sh at boot. Is scheduling syno_enable_dedupe.sh at shutdown still recommended?

I found another issue (maybe it's intended behavior, I don't know): I tried to restore all the changes that syno_hdd_db.sh made with this call:

/volume3/scripts/syno_hdd_db.sh --restore --ssd=restore

It looks like --ssd=restore is ignored when option --restore is set at the same time, but --ssd=restore works when --restore is not set.
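
Just to illustrate what I mean, here is a guess at the kind of argument handling that would cause this (a hypothetical sketch, not the actual syno_hdd_db.sh code): if the restore action runs as soon as --restore is parsed, any options that come after it are never seen.

#!/usr/bin/env bash
# Hypothetical sketch only - not syno_hdd_db.sh itself.
do_restore() { echo "restoring... (ssd_restore=$ssd_restore)"; }

ssd_restore=no
while [ $# -gt 0 ]; do
    case "$1" in
        --ssd=restore) ssd_restore=yes ;;
        --restore) do_restore; exit 0 ;;   # options after --restore are never parsed
    esac
    shift
done

# ./sketch.sh --restore --ssd=restore   prints ssd_restore=no  (the option is ignored)
# ./sketch.sh --ssd=restore --restore   prints ssd_restore=yes (the option is applied)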

007revad added a commit that referenced this issue Aug 15, 2024
v3.5.99-RC
- Changed to support "--restore --ssd=restore" to restore write_mostly when restoring all other changes. Issue #340
@007revad
Owner

I've changed it so "--ssd=restore --restore" can be used together. Note: at the moment --ssd=restore must come before --restore.

https://github.com/007revad/Synology_HDD_db/releases/tag/v3.5.99-RC

@ThomasGoering
Author

Thanks. It works, with two issues:

  1. The CHANGES file and the release description state:

Changed to support "--restore --ssd=restore" to restore write_mostly when restoring all other changes. Issue https://github.com/007revad/Synology_HDD_db/issues/340

This is the wrong order of the options. In your comment above you wrote that the script can be used with the options in this order: --ssd=restore --restore

  2. This is an excerpt from the output of the script using these options: --ssd=restore --restore --email

Restoring internal drive's state
\e[0;33mRed SA500 2.5 2TB\e[0m
  sata1 DSM partition:  in_sync
  sata1 Swap partition: in_sync
\e[0;33mRed SA500 2.5 2TB\e[0m
  sata1 DSM partition:  in_sync
  sata1 Swap partition: in_sync
\e[0;33mRed SA500 2.5 2TB\e[0m
  sata1 DSM partition:  in_sync
  sata1 Swap partition: in_sync
\e[0;33mRed SA500 2.5 2TB\e[0m
  sata2 DSM partition:  in_sync
  sata2 Swap partition: in_sync
\e[0;33mRed SA500 2.5 2TB\e[0m
  sata2 DSM partition:  in_sync
  sata2 Swap partition: in_sync
\e[0;33mRed SA500 2.5 2TB\e[0m
  sata2 DSM partition:  in_sync
  sata2 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync

The output still contains color codes even though the --email option is used, and it seems to reset write_mostly three times for each drive.
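
For my own use I can strip the codes before mailing the output with something like this (assuming GNU sed, which accepts the \x1b escape):

~# /volume3/scripts/syno_hdd_db.sh --ssd=restore --restore --email 2>&1 | sed 's/\x1b\[[0-9;]*m//g'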

007revad added a commit that referenced this issue Aug 19, 2024
v3.5.100-RC
- Changed to support "--restore --ssd=restore" to restore write_mostly when restoring all other changes. Issue #340
  - When using --restore you can also use --ssd=restore, -e or --email
@007revad
Owner

007revad commented Aug 19, 2024

I've changed it in v3.5.100-RC so that when --restore is used it also supports --ssd=restore, -e or --email, in any order. I also fixed it so it only resets write_mostly once for each drive.

https://github.com/007revad/Synology_HDD_db/releases

EDIT: Forgot to mention that it now only resets write_mostly on drives that have write_mostly set, to reduce the spammy output for people with lots of drives and to make it clear which drives were processed.

@ThomasGoering
Author

Thanks a lot, it is now resetting write_mostly as you described!

The output still uses color codes with the --email option, but I don't really mind. This issue can be closed, as resetting write_mostly now works.

Synology_HDD_db v3.5.100
DS920+ DSM 7.2.1-69057-5 
StorageManager 1.0.0-0017

ds920+_host_v7 version 8053

ds920+_host version 4021

Using options: --restore --ssd=restore --email
Running from: /volume3/scripts/syno_hdd_db.sh

Restored support_memory_compatibility = yes
Restored mem_max_mb = 8192
Restored support_m2_pool = no
Restored storage_panel.js

Restored ds920+_host.db
Restored ds920+_host.db.new
Restored ds920+_host_v7.db
Restored dx1211_v7.db
Restored dx1215_v7.db
Restored dx1215ii_v7.db
Restored dx1222_v7.db
Restored dx213_v7.db
Restored dx510_v7.db
Restored dx513_v7.db
Restored dx517.db
Restored dx517.db.new
Restored dx517_v7.db
Restored dx5_v7.db
Restored eunit_rule.db
Restored fax224_v7.db
Restored fx2421_v7.db
Restored host_rule.db
Restored rx1211_v7.db
Restored rx1211rp_v7.db
Restored rx1213sas_v7.db
Restored rx1214_v7.db
Restored rx1214rp_v7.db
Restored rx1216sas_v7.db
Restored rx1217_v7.db
Restored rx1217rp_v7.db
Restored rx1217sas_v7.db
Restored rx1222sas_v7.db
Restored rx1223rp_v7.db
Restored rx1224rp_v7.db
Restored rx2417sas_v7.db
Restored rx410_v7.db
Restored rx415_v7.db
Restored rx418_v7.db
Restored rx4_v7.db
Restored rx6022sas_v7.db
Restored rxd1215sas_v7.db
Restored rxd1219sas_v7.db

Restore successful.

Restoring internal drive's state
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
\e[0;33mWD140EFFX-68VBXN0\e[0m
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
