<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>tactical-documentation</title>
<link>https://tactical-documentation.github.io/</link>
<description>Recent content on tactical-documentation</description>
<generator>Hugo 0.58.3 -- gohugo.io</generator>
<language>en-us</language>
<lastBuildDate>Sun, 06 Oct 2019 14:09:45 +0000</lastBuildDate>
<atom:link href="https://tactical-documentation.github.io/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Encrypting Proxmox VE 6: ZFS, LUKS, systemd-boot and Dropbear</title>
<link>https://tactical-documentation.github.io/post/proxmoxve6-zfs-luks-systemdboot-dropbear/</link>
<pubDate>Fri, 23 Aug 2019 00:00:00 +0000</pubDate>
<guid>https://tactical-documentation.github.io/post/proxmoxve6-zfs-luks-systemdboot-dropbear/</guid>
<description>
<p>This post describes how to set up a fully encrypted Proxmox VE 6 host
with ZFS root and how to unlock it remotely using the dropbear ssh
server. It also describes how to do that while keeping
systemd-boot and thus the pve tooling intact<label class="margin-toggle sidenote-number"></label><span class="sidenote"> I&rsquo;m not sure whether the pve tooling still works if you replace systemd-boot with grub, which seems to be the common solution for this kind of setup; maybe it does </span>.</p>
<p>Update: This post has been translated into Czech and was
published on <a href="https://www.abclinuxu.cz/clanky/sifrovany-proxmox-ve-6-zfs-luks-systemd-boot-a-dropbear">abclinuxu.cz</a>.</p>
<h2 id="overview">Overview</h2>
<p>We are going to do the following:</p>
<ol>
<li>Install Proxmox VE 6 on our machine</li>
<li>Minimally configure the Installation</li>
<li>Encrypt the Installation:
<ol>
<li>Remove a Disk from the ZFS-Pool</li>
<li>Encrypt the Disk with LUKS</li>
<li>Add it back to the ZFS Pool</li>
<li>Repeat until all disks are encrypted</li>
</ol></li>
<li>Set up Dropbear and Systemd-boot to enable remote unlocking</li>
</ol>
<h2 id="prerequisites">Prerequisites</h2>
<p>There really only is one prerequisite apart from having a machine
you want to install Proxmox onto: You need a second harddrive,
which we will set up in a ZFS RAID1 configuration. If you don&rsquo;t
want your root devices mirrored, you will still need a
second drive that you can use as a temporary mirrored root device;
otherwise you&rsquo;d have to install and set up an encrypted Debian and
then install Proxmox on top of that.</p>
<p>Apart from that I&rsquo;ll assume that you are fairly familiar
with how full disk encryption works on linux systems; if not, you
might want to read up on that before you start messing around with
any hardware. Please don&rsquo;t try this out on a production system
if you don&rsquo;t know exactly what you&rsquo;re doing.</p>
<h2 id="installing-proxmox-ve-6">Installing Proxmox VE 6</h2>
<p>The only thing you have to make sure of is that you set up the ZFS RAID 1
during the installation. The rest should be pretty much
straightforward.</p>
<h2 id="minimal-post-installation">Minimal post-installation</h2>
<p>For some odd reason <code>PATH</code> in a regular shell is different from <code>PATH</code>
in the JavaScript terminal of the web interface. You might want
to take care of that:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#366">echo</span> <span style="color:#c30">&#34;export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&#34;</span> &gt;&gt; ~/.bashrc</code></pre></div>
<p>Remove the subscription popup notice (<a href="https://johnscs.com/remove-proxmox51-subscription-notice/">source</a>):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">sed -i.bak <span style="color:#c30">&#34;s/data.status !== &#39;Active&#39;/false/g&#34;</span> /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js <span style="color:#555">&amp;&amp;</span> systemctl restart pveproxy.service</code></pre></div>
<p>Set up the community repositories:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">rm /etc/apt/sources.list.d/pve-enterprise.list
<span style="color:#366">echo</span> <span style="color:#c30">&#39;deb http://download.proxmox.com/debian/pve buster pve-no-subscription&#39;</span> &gt; pve-community.list</code></pre></div>
<p>Update the host:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">apt update
apt upgrade</code></pre></div>
<h2 id="encrypt-your-installation">Encrypt your installation</h2>
<p>This is partly taken over from <a href="https://forums.servethehome.com/index.php?threads/proxmox-zfs-encryption-guide-work-in-progress.23004/#post-215138">this wonderful post</a><label class="margin-toggle sidenote-number"></label><span class="sidenote"> The GRUB_ENABLE_CRYPTODISK option that is mentioned in the <a href="https://forums.servethehome.com/index.php?threads/proxmox-zfs-encryption-guide-work-in-progress.23004/#post-215138">forum post</a> does not apply here, since the boot partition is not encrypted. If you want this level of security, then this is probably not the right guide for you. Also from my understanding encrypting the boot partition means that you can&rsquo;t use dropbear to unlock the system remotely since nothing has booted so far. It is a pretty nice way to set up fully encrypted laptops though, so you should definitely look into this if you haven&rsquo;t already! </span>.</p>
<p>Right after the installation the host should look similar to this
(<code>lsblk</code>):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 465.3G 0 part
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 931.5G 0 disk
sdd 8:48 0 465.8G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 465.3G 0 part</code></pre></div>
<p>The third partition of both harddrives contains our installation;
the first and second are the boot and EFI partitions.</p>
<p><code>zpool status</code> should return something like this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3 ONLINE 0 0 0
ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3 ONLINE 0 0 0</code></pre></div>
<p>You might want to install <code>cryptsetup</code> at this point:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">apt install cryptsetup</code></pre></div>
<p>Detach the first drive&rsquo;s partition from <code>rpool</code>, then encrypt it, open it
as <code>/dev/mapper/cryptrpool1</code> and reattach it to <code>rpool</code>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">zpool detach rpool ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3
cryptsetup luksFormat /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3
cryptsetup luksOpen /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3 cryptrpool1
zpool attach rpool ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3 cryptrpool1</code></pre></div>
<p>Wait until the <code>scan</code> line of <code>zpool status</code> displays that the drive
has been resilvered successfully. You should see something
similar to this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">scan: resilvered 1022M in 0 days 00:00:04 with 0 errors on Wed Aug 21 17:27:55 2019</code></pre></div>
<p>Now repeat this step with the other drive:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">zpool detach rpool ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3
cryptsetup luksFormat /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3
cryptsetup luksOpen /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3 cryptrpool2
zpool attach rpool cryptrpool1 cryptrpool2</code></pre></div>
<p>At this point <code>lsblk</code> should output something like this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 465.3G 0 part
└─cryptrpool1 253:0 0 465.3G 0 crypt
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 931.5G 0 disk
sdd 8:48 0 465.8G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 465.3G 0 part
└─cryptrpool2 253:1 0 465.3G 0 crypt</code></pre></div>
<p>And <code>zpool status</code> should return something like this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
cryptrpool1 ONLINE 0 0 0
cryptrpool2 ONLINE 0 0 0</code></pre></div>
<p>Next we want to set up <code>/etc/crypttab</code>. Use <code>blkid</code> to get the
<code>PARTUUID</code> of both harddrives:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">blkid -s PARTUUID -o value /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3
blkid -s PARTUUID -o value /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3</code></pre></div>
<p>Then add them to <code>/etc/crypttab</code> <label class="margin-toggle sidenote-number"></label><span class="sidenote"> <code>caliban</code> is the name of my proxmox host. </span>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# cat /etc/crypttab
# &lt;target name&gt; &lt;source device&gt; &lt;key file&gt; &lt;options&gt;
cryptrpool1 PARTUUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none luks,discard,initramfs
cryptrpool2 PARTUUID=YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY none luks,discard,initramfs</code></pre></div>
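<p>If you prefer not to copy the <code>PARTUUID</code> values around by hand, you can also append the two entries directly. This is just a sketch reusing the placeholder device names from above:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># append one crypttab entry per encrypted partition (device names are placeholders)
echo &#34;cryptrpool1 PARTUUID=$(blkid -s PARTUUID -o value /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_XXXXXXXXXXXXXXX-part3) none luks,discard,initramfs&#34; &gt;&gt; /etc/crypttab
echo &#34;cryptrpool2 PARTUUID=$(blkid -s PARTUUID -o value /dev/disk/by-id/ata-WDC_WDS500G2B0A-XXXXXX_XXXXXXXXXXXX-part3) none luks,discard,initramfs&#34; &gt;&gt; /etc/crypttab</code></pre></div>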
<p>Then update the initramfs and make sure it is put on the boot
partition (this is where we deviate from the forum post I&rsquo;ve linked
above):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">update-initramfs -u -k all
pve-efiboot-tool refresh</code></pre></div>
<p>In case you&rsquo;re wondering at this point: yes, I&rsquo;m also getting the
<code>cryptsetup</code> error message when running <code>update-initramfs</code>; it still works
though:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">cryptsetup: ERROR: Couldn&#39;t resolve device rpool/ROOT/pve-1
cryptsetup: WARNING: Couldn&#39;t determine root device</code></pre></div>
<p>Now you should be able to reboot and unlock the ZFS partitions by
entering the passphrase.</p>
<h2 id="setting-up-dropbear-to-remotely-unlock-the-partition">Setting up Dropbear to remotely unlock the partition</h2>
<p>Now to the fun part! Since we aren&rsquo;t using <code>grub</code> here, we have to take
a few steps that differ from what we usually do in this kind of
setup.</p>
<p>Here are a few interesting links you might want to look into as well:</p>
<ul>
<li><a href="https://www.pbworks.net/ubuntu-guide-dropbear-ssh-server-to-unlock-luks-encrypted-pc/">This</a> nicely explains how to use the keys Dropbear already generates on
install instead of recreating them.</li>
<li>The freedesktop page on <a href="https://www.freedesktop.org/wiki/Software/systemd/systemd-boot/">systemd-boot</a></li>
<li><a href="https://adfinis-sygroup.ch/en/blog/decrypt-luks-devices-remotely-via-dropbear-ssh/">This little article</a> on setting up <code>archlinux</code> with <code>dropbear</code> does
not fully apply to our Proxmox case, but it gives enough
information on how we can tell <code>systemd-boot</code> to tell the kernel
to start with the options we want<label class="margin-toggle sidenote-number"></label><span class="sidenote"> Unlike what the article states, we need to use the udev interface name for assigning the IP, and I was getting error messages when supplying nameserver IPs </span>.</li>
</ul>
<p>First install <code>dropbear</code> and <code>busybox</code>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">apt install dropbear busybox</code></pre></div>
<p>In <code>/etc/initramfs-tools/initramfs.conf</code> enable busybox:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# cat /etc/initramfs-tools/initramfs.conf | grep ^BUSYBOX
BUSYBOX=y</code></pre></div>
<p>Then convert the dropbear keys:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#366">cd</span> /etc/dropbear-initramfs/
/usr/lib/dropbear/dropbearconvert dropbear openssh dropbear_rsa_host_key id_rsa
dropbearkey -y -f dropbear_rsa_host_key | grep <span style="color:#c30">&#34;^ssh-rsa &#34;</span> &gt; id_rsa.pub</code></pre></div>
<p>And add your public key to the authorized keys:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">vi /etc/dropbear-initramfs/authorized_keys</code></pre></div>
<p>Make sure <code>dropbear</code> starts by toggling the <code>NO_START</code> value in
<code>/etc/default/dropbear</code>.</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# cat /etc/default/dropbear | grep ^NO_START
NO_START=0</code></pre></div>
<p>Finally, configure <code>dropbear</code> to use a port other than 22 in order to
avoid getting the MITM warning, by changing the <code>DROPBEAR_OPTIONS</code> value
in <code>/etc/dropbear-initramfs/config</code>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# cat /etc/dropbear-initramfs/config | grep ^DROPBEAR_OPTIONS
DROPBEAR_OPTIONS=&#34;-p 12345&#34;</code></pre></div>
<p>You can then set up two entries in your <code>~/.ssh/config</code>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">$ cat ~/.ssh/config
Host *
ServerAliveInterval 120
Host unlock_caliban
Hostname 1.2.3.4
User root
Port 12345
Host caliban
Hostname 1.2.3.4
Port 22</code></pre></div>
<p>At this point I noticed that only the third partitions of the two
harddrives holding the rpool were mounted. When mounting a boot
partition, I found that there were systemd-boot configuration files,
but they seemed to be autogenerated by Proxmox whenever
<code>pve-efiboot-tool refresh</code> was run. So I looked into
<code>/usr/sbin/pve-efiboot-tool</code> and followed the code until I ended up in
<code>/etc/kernel/postinst.d/zz-pve-efiboot</code>, which contains the code that
generates the systemd-boot configuration files:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#09f;font-style:italic"># [...]</span>
<span style="color:#069;font-weight:bold">for</span> kver in <span style="color:#a00">${</span><span style="color:#033">BOOT_KVERS</span><span style="color:#a00">}</span>; <span style="color:#069;font-weight:bold">do</span>
<span style="color:#033">linux_image</span><span style="color:#555">=</span><span style="color:#c30">&#34;/boot/vmlinuz-</span><span style="color:#a00">${</span><span style="color:#033">kver</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span>
<span style="color:#033">initrd</span><span style="color:#555">=</span><span style="color:#c30">&#34;/boot/initrd.img-</span><span style="color:#a00">${</span><span style="color:#033">kver</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span>
<span style="color:#069;font-weight:bold">if</span> <span style="color:#555">[</span> ! -f <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">linux_image</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span> <span style="color:#555">]</span>; <span style="color:#069;font-weight:bold">then</span>
warn <span style="color:#c30">&#34;No linux-image </span><span style="color:#a00">${</span><span style="color:#033">linux_image</span><span style="color:#a00">}</span><span style="color:#c30"> found - skipping&#34;</span>
<span style="color:#069;font-weight:bold">continue</span>
<span style="color:#069;font-weight:bold">fi</span>
<span style="color:#069;font-weight:bold">if</span> <span style="color:#555">[</span> ! -f <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">initrd</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span> <span style="color:#555">]</span>; <span style="color:#069;font-weight:bold">then</span>
warn <span style="color:#c30">&#34;No initrd-image </span><span style="color:#a00">${</span><span style="color:#033">initrd</span><span style="color:#a00">}</span><span style="color:#c30"> found - skipping&#34;</span>
<span style="color:#069;font-weight:bold">continue</span>
<span style="color:#069;font-weight:bold">fi</span>
warn <span style="color:#c30">&#34; Copying kernel and creating boot-entry for </span><span style="color:#a00">${</span><span style="color:#033">kver</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span>
<span style="color:#033">KERNEL_ESP_DIR</span><span style="color:#555">=</span><span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">PMX_ESP_DIR</span><span style="color:#a00">}</span><span style="color:#c30">/</span><span style="color:#a00">${</span><span style="color:#033">kver</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span>
<span style="color:#033">KERNEL_LIVE_DIR</span><span style="color:#555">=</span><span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">esp</span><span style="color:#a00">}</span><span style="color:#c30">/</span><span style="color:#a00">${</span><span style="color:#033">KERNEL_ESP_DIR</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span>
mkdir -p <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">KERNEL_LIVE_DIR</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span>
cp -u --preserve<span style="color:#555">=</span>timestamps <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">linux_image</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span> <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">KERNEL_LIVE_DIR</span><span style="color:#a00">}</span><span style="color:#c30">/&#34;</span>
cp -u --preserve<span style="color:#555">=</span>timestamps <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">initrd</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span> <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">KERNEL_LIVE_DIR</span><span style="color:#a00">}</span><span style="color:#c30">/&#34;</span>
<span style="color:#09f;font-style:italic"># create loader entry</span>
cat &gt; <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">esp</span><span style="color:#a00">}</span><span style="color:#c30">/loader/entries/proxmox-</span><span style="color:#a00">${</span><span style="color:#033">kver</span><span style="color:#a00">}</span><span style="color:#c30">.conf&#34;</span> <span style="color:#c30">&lt;&lt;- EOF
</span><span style="color:#c30"> title ${LOADER_TITLE}
</span><span style="color:#c30"> version ${kver}
</span><span style="color:#c30"> options ${CMDLINE}
</span><span style="color:#c30"> linux /${KERNEL_ESP_DIR}/vmlinuz-${kver}
</span><span style="color:#c30"> initrd /${KERNEL_ESP_DIR}/initrd.img-${kver}
</span><span style="color:#c30"> EOF</span>
<span style="color:#069;font-weight:bold">done</span>
<span style="color:#09f;font-style:italic"># [...]</span></code></pre></div>
<p>For us, the <code>cat</code> part is especially interesting: the <code>CMDLINE</code> variable
in the line beginning with &ldquo;<code>options</code>&rdquo; contains the boot options for the
Linux kernel. This variable is assigned in the same file:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#09f;font-style:italic"># [...]</span>
<span style="color:#069;font-weight:bold">if</span> <span style="color:#555">[</span> -f /etc/kernel/cmdline <span style="color:#555">]</span>; <span style="color:#069;font-weight:bold">then</span>
<span style="color:#033">CMDLINE</span><span style="color:#555">=</span><span style="color:#c30">&#34;</span><span style="color:#069;font-weight:bold">$(</span>cat /etc/kernel/cmdline<span style="color:#069;font-weight:bold">)</span><span style="color:#c30">&#34;</span>
<span style="color:#069;font-weight:bold">else</span>
warn <span style="color:#c30">&#34;No /etc/kernel/cmdline found - falling back to /proc/cmdline&#34;</span>
<span style="color:#033">CMDLINE</span><span style="color:#555">=</span><span style="color:#c30">&#34;</span><span style="color:#069;font-weight:bold">$(</span>cat /proc/cmdline<span style="color:#069;font-weight:bold">)</span><span style="color:#c30">&#34;</span>
<span style="color:#069;font-weight:bold">fi</span>
<span style="color:#09f;font-style:italic"># [...]</span></code></pre></div>
<p>Apparently <code>/etc/kernel/cmdline</code> is the place where Proxmox stores its
boot options. The file contains a single line:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root=ZFS=rpool/ROOT/pve-1 boot=zfs</code></pre></div>
<p>After finding the <code>/etc/kernel/cmdline</code> file, I did a bit of searching
and according to the Proxmox <a href="https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot%5Fedit%5Fkernel%5Fcmdline">documentation</a>, it is actually the
appropriate file to change in this case.</p>
<p>Now that we have identified the file we can use to configure our
kernel options, there are two things we want to add:</p>
<ol>
<li><p>We want to make sure the network interface comes up so that we can
ssh into the initramfs; we will use the <code>ip</code> option for that. It uses
the following format (look <a href="https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt">here</a> for further reading):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text"> ip=&lt;client-ip&gt;:&lt;server-ip&gt;:&lt;gw-ip&gt;:&lt;netmask&gt;:&lt;hostname&gt;:&lt;device&gt;:&lt;autoconf&gt;:
&lt;dns0-ip&gt;:&lt;dns1-ip&gt;:&lt;ntp0-ip&gt;:</code></pre></div>
<p>I omitted everything after autoconf; something like this works for
me:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">ip=1.2.3.4::1.2.3.1:255.255.255.0:caliban:enpXsY:none:</code></pre></div></li>
<li><p>We also have to tell the kernel which crypto devices we want to
unlock, which is done using the <code>cryptdevice</code> option
(here we have to supply the PARTUUIDs of both of our harddrives):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">cryptdevice=UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX cryptdevice=UUID=YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY</code></pre></div></li>
</ol>
<p>The whole content of <code>/etc/kernel/cmdline</code> looks like this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">ip=1.2.3.4::1.2.3.1:255.255.255.0:caliban:enpXsY:none: cryptdevice=UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX cryptdevice=UUID=YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY root=ZFS=rpool/ROOT/pve-1 boot=zfs</code></pre></div>
<p>The last thing to do is to run:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">update-initramfs -u -k all
pve-efiboot-tool refresh</code></pre></div>
<p>Now you should be able to reboot your machine and ssh into the
busybox on the port you just configured for <code>dropbear</code>. From there
you can unlock the drives by running something like
this<label class="margin-toggle sidenote-number"></label><span class="sidenote"> You&rsquo;ll have to input it twice since you have two encrypted drives </span>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">echo -n &#34;password&#34; &gt; /lib/cryptsetup/passfifo</code></pre></div>
<p>Or:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">/lib/cryptsetup/askpass &#34;password: &#34; &gt; /lib/cryptsetup/passfifo</code></pre></div>
<p>Or you can also use the <code>cryptroot-unlock</code> script that is preinstalled
already, which also prompts you to enter the password twice.</p>
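<p>With the <code>~/.ssh/config</code> entries from above, the whole remote unlock can then be done in one go from your workstation. A small sketch, assuming the <code>unlock_caliban</code> host entry and the preinstalled <code>cryptroot-unlock</code> script:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># ssh into the initramfs and run the unlock helper; -t gives it a proper prompt
ssh -t unlock_caliban cryptroot-unlock</code></pre></div>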
<p>If you&rsquo;re lazy, you can also put the following script into
<code>/etc/initramfs-tools/hooks</code> and make it executable. I basically
merged the above example of using <code>/lib/cryptsetup/askpass</code> with a
version of an unlock script I had lying around; it looks like it
might have been from this <a href="https://gist.github.com/gusennan/712d6e81f5cf9489bd9f">gist</a>. It asks you for a passphrase and
then uses echo to write it into <code>/lib/cryptsetup/passfifo</code> twice
(since I use 2 harddrives) with a one second delay in between, then
kills the session so the system can come up<label class="margin-toggle sidenote-number"></label><span class="sidenote"> I noticed that /etc/motd, which contains instructions on how to unlock your drive, is not displayed in the busybox session. </span>. You probably shouldn&rsquo;t use it, but it seems to work
for me:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"><span style="color:#099">#!/bin/sh
</span><span style="color:#099"></span>
<span style="color:#033">PREREQ</span><span style="color:#555">=</span><span style="color:#c30">&#34;dropbear&#34;</span>
prereqs<span style="color:#555">()</span> <span style="color:#555">{</span>
<span style="color:#366">echo</span> <span style="color:#c30">&#34;</span><span style="color:#033">$PREREQ</span><span style="color:#c30">&#34;</span>
<span style="color:#555">}</span>
<span style="color:#069;font-weight:bold">case</span> <span style="color:#c30">&#34;</span><span style="color:#033">$1</span><span style="color:#c30">&#34;</span> in
prereqs<span style="color:#555">)</span>
prereqs
<span style="color:#366">exit</span> <span style="color:#f60">0</span>
;;
<span style="color:#069;font-weight:bold">esac</span>
. <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">CONFDIR</span><span style="color:#a00">}</span><span style="color:#c30">/initramfs.conf&#34;</span>
. /usr/share/initramfs-tools/hook-functions
<span style="color:#069;font-weight:bold">if</span> <span style="color:#555">[</span> <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">DROPBEAR</span><span style="color:#a00">}</span><span style="color:#c30">&#34;</span> !<span style="color:#555">=</span> <span style="color:#c30">&#34;n&#34;</span> <span style="color:#555">]</span> <span style="color:#555">&amp;&amp;</span> <span style="color:#555">[</span> -r <span style="color:#c30">&#34;/etc/crypttab&#34;</span> <span style="color:#555">]</span> ; <span style="color:#069;font-weight:bold">then</span>
cat &gt; <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">DESTDIR</span><span style="color:#a00">}</span><span style="color:#c30">/bin/unlock&#34;</span> <span style="color:#c30">&lt;&lt; EOF
</span><span style="color:#c30">#!/bin/sh
</span><span style="color:#c30">unlock_devices() {
</span><span style="color:#c30"> pw=&#34;\$(/lib/cryptsetup/askpass &#34;password: &#34;)&#34;
</span><span style="color:#c30"> echo -n \$pw &gt; /lib/cryptsetup/passfifo
</span><span style="color:#c30"> sleep 1
</span><span style="color:#c30"> echo -n \$pw &gt; /lib/cryptsetup/passfifo
</span><span style="color:#c30">}
</span><span style="color:#c30">if unlock_devices; then
</span><span style="color:#c30"># kill \`ps | grep cryptroot | grep -v &#34;grep&#34; | awk &#39;{print \$1}&#39;\`
</span><span style="color:#c30"># following line kill the remote shell right after the passphrase has
</span><span style="color:#c30"># been entered.
</span><span style="color:#c30">kill -9 \`ps | grep &#34;\-sh&#34; | grep -v &#34;grep&#34; | awk &#39;{print \$1}&#39;\`
</span><span style="color:#c30">exit 0
</span><span style="color:#c30">fi
</span><span style="color:#c30">exit 1
</span><span style="color:#c30">EOF</span>
chmod <span style="color:#f60">755</span> <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">DESTDIR</span><span style="color:#a00">}</span><span style="color:#c30">/bin/unlock&#34;</span>
mkdir -p <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">DESTDIR</span><span style="color:#a00">}</span><span style="color:#c30">/lib/unlock&#34;</span>
cat &gt; <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">DESTDIR</span><span style="color:#a00">}</span><span style="color:#c30">/lib/unlock/plymouth&#34;</span> <span style="color:#c30">&lt;&lt; EOF
</span><span style="color:#c30">#!/bin/sh
</span><span style="color:#c30">[ &#34;\$1&#34; == &#34;--ping&#34; ] &amp;&amp; exit 1
</span><span style="color:#c30">/bin/plymouth &#34;\$@&#34;
</span><span style="color:#c30">EOF</span>
chmod <span style="color:#f60">755</span> <span style="color:#c30">&#34;</span><span style="color:#a00">${</span><span style="color:#033">DESTDIR</span><span style="color:#a00">}</span><span style="color:#c30">/lib/unlock/plymouth&#34;</span>
<span style="color:#366">echo</span> To unlock root-partition run <span style="color:#c30">&#34;unlock&#34;</span> &gt;&gt; <span style="color:#a00">${</span><span style="color:#033">DESTDIR</span><span style="color:#a00">}</span>/etc/motd
<span style="color:#069;font-weight:bold">fi</span></code></pre></div>
<p>That&rsquo;s pretty much all of it; you can now start enjoying remote
reboots on your freshly encrypted Proxmox host.</p>
</description>
</item>
<item>
<title>Proof of Concept: Adding Boot Environments to Proxmox VE 6</title>
<link>https://tactical-documentation.github.io/post/poc-proxmox-and-boot-environments/</link>
<pubDate>Wed, 28 Aug 2019 00:00:00 +0000</pubDate>
<guid>https://tactical-documentation.github.io/post/poc-proxmox-and-boot-environments/</guid>
<description>
<p>Dear Reader, this time I would like to invite you onto a small
journey: To boldly go where no man has gone
before<label class="margin-toggle sidenote-number"></label><span class="sidenote"> Alright, that&rsquo;s not true, but I think it&rsquo;s the first time someone documents this kind of thing in the context of Proxmox </span>. We&rsquo;re about to embark on a journey to make your Proxmox
host quite literally immortal. Since what we are essentially
doing here is only a proof of concept, you probably shouldn&rsquo;t use it
in production, but as it&rsquo;s really amazing, you might want to try
it out in a test environment.</p>
<p>In this article we are going to take a closer look at how Proxmox
sets up the <code>ESP</code> <label class="margin-toggle sidenote-number"></label><span class="sidenote"> EFI System Partition </span> for
<code>systemd-boot</code> and how we can adapt this process to support <code>boot
environments</code>. Also this is going to be a long one, so you might want
to grab a cup of coffee and some snacks </label><span class="marginnote"> And maybe start installing a Proxmox VE 6 VM with ZFS, because if boot environments are still new to you, by the time you&rsquo;ve read about halfway through this post you will be eager to get your hands dirty and try this out for yourself </span>.</p>
<h2 id="overview">Overview</h2>
<ul>
<li>What are Boot Environments?
<ul>
<li>Boot Environments on Linux</li>
</ul></li>
<li>Poking around in Proxmox
<ul>
<li>The Proxmox ZFS Layout</li>
<li>The Boot Preparation</li>
<li>A Simple Proof of Concept</li>
</ul></li>
<li>From one Proof of Concept to Another
<ul>
<li>Sidenote: The Proxmox ESP Size</li>
<li><code>zedenv</code>: A Boot Environment Manager</li>
<li><code>systemd-boot</code> and the EFI System Partitions</li>
<li>Making <code>zedenv</code> and Proxmox play well together</li>
</ul></li>
<li>Conclusion and Future Work</li>
</ul>
<h2 id="what-are-boot-environments">What are Boot Environments?</h2>
<p>Boot environments are a truly amazing feature, which originated
somewhere in the Solaris/Illumos ecosystem<label class="margin-toggle sidenote-number"></label><span class="sidenote"> They have literally been around for ages. I&rsquo;m not quite sure at which point in time they were introduced, but you can find evidence <a href="https://books.google.com/books?id=8vrwjLsPkgwC&amp;pg=PA109">at archaeological digsites</a> dating them back to at least 2003. </span> and
has since been adopted by other operating systems, such as FreeBSD,
DragonflyBSD and others. The concept is actually quite simple:</p>
<blockquote>
<p>A boot environment is a bootable Oracle Solaris environment
consisting of a root dataset and, optionally, other datasets
mounted underneath it. Exactly one boot environment can be active
at a time.</p>
<ul>
<li><a href="https://docs.oracle.com/cd/E23824%5F01/html/E21801/index.html">Oracle Solaris 11 Information Library</a></li>
</ul>
</blockquote>
<p>In my own words, I would describe boot environments as snapshots of
a [partial] system, which can be booted from (that is when a boot
environment is active) or be mounted at runtime (by the same
system).</p>
<p>This enables a bunch of very interesting use-cases:</p>
<ul>
<li>Rollbacks: This might not seem like a big deal at first,
until you realize that even after a major OS version upgrade,
when something is suddenly broken, the previous version is just a
reboot away.</li>
<li>You can create bootable system snapshots on your bare metal
machines, not only on your virtual machines.</li>
<li>You can choose between creating a new boot environment to save
the current system&rsquo;s state before updating, or creating a new boot
environment, chrooting into it, upgrading and rebooting into a freshly
upgraded system.</li>
<li>You can quite literally take your work home if you like, by
creating a boot environment and *<strong><em>drum-roll</em></strong>* taking it home. You
can of course also use this in order to create a virtual
machine, container, jail or zone of your system in order to test
something new or for forensic purposes.</li>
</ul>
<p>Are you hooked yet? Good, you really should be. If you&rsquo;re not
hooked, read till the end of the next section, and you will
be. </label><span class="marginnote"> If you&rsquo;re interested in boot environments, I would suggest you take a look at vermaden&rsquo;s <a href="https://vermaden.files.wordpress.com/2018/07/pbug-zfs-boot-environments-2018-07-30.pdf">presentation on ZFS boot environments</a>, or generally search a bit on the web for articles about boot environments on other unix systems; there is quite a bit to be read on FreeBSD in particular, which recently adopted them, and it is far more in-depth and better explained than what I&rsquo;ll probably write down here. </span></p>
<h3 id="boot-environments-on-linux">Boot Environments on Linux</h3>
<p>While other operating systems have happily adopted boot
environments, there is surprisingly<label class="margin-toggle sidenote-number"></label><span class="sidenote"> Or maybe not so surprisingly, if you remember how long zones and jails have been a thing, while linux just recently started doing containers. At least there&rsquo;s still Windows to compare with. </span> apparently not
too much going on in the linux world. The focus here seems to be
more on containerizing applications in order to isolate them from
the rest of the host system rather than on making the host system
itself more solid (which is also great, but not the same).</p>
<p>On linux there are presently - at least to my knowledge - only the
following projects that aim in a similar direction:</p>
<ul>
<li>There is <a href="https://en.opensuse.org/openSUSE:Snapper%5FTutorial">snapper</a> for <code>btrfs</code>, which seems to be a quite Suse
specific solution. However according to it&rsquo;s documentation:
<a href="https://www.suse.com/documentation/sles-15/book%5Fsle%5Fadmin/data/sec%5Fsnapper%5Fsnapshot-boot.html#sec%5Fsnapper%5Fsnapshot-boot%5Flimits">&ldquo;A
complete system rollback, restoring the complete system to the
identical state as it was in when a snapshot was taken, is not
possible.&rdquo;</a> This, at least without more explanation or context,
sounds quite a bit spooky.</li>
<li>There is a <a href="https://github.com/b333z/beadm">Linux port</a> of the FreeBSD beadm tool, which hasn&rsquo;t
been updated in ~3 years, while beadm has. It does not seem to
be maintained any more and appears to be tailored to a single Gentoo
installation.</li>
<li>There are a few <code>btrfs</code> specific scripts by a company called
<a href="https://github.com/PluribusNetworks/pluribus%5Flinux%5Fuserland/tree/master/components/bootenv-tools/bootenv-tools-src">Pluribus Networks</a>, which seem to have implemented their own
version of <code>beadm</code> on top of <code>btrfs</code>. This apparently runs on some
network devices.</li>
<li><a href="https://nixos.org/">NixOS</a> does something similiar to boot environments with their
atomic update and rollback feature, but as far as I&rsquo;ve
understood this is still different from boot environments. Being
functional, they don&rsquo;t exactly roll back to a old version of the
system based on a filesystem snapshot, but rather recreate an
identical environment to a previous one.</li>
<li>And finally there is <a href="https://github.com/johnramsden/zedenv">zedenv</a>, a boot environment manager that is
written in python, supports both Linux and FreeBSD and works
really nicely. It&rsquo;s also the one that I&rsquo;ve used before, and it is
what we are going to use here, since there really isn&rsquo;t an
alternative when it comes to linux and ZFS (see the short sketch
after this list for what working with it looks like).</li>
</ul>
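<p>Just to give you an idea of the workflow we are aiming for, here is roughly what day-to-day usage of <code>zedenv</code> looks like. This is only a sketch from memory and not yet specific to Proxmox, so take the exact invocations with a grain of salt:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># create a boot environment before an upgrade, list the existing ones,
# and activate the old one again if the upgrade goes wrong
zedenv create before-upgrade
zedenv list
zedenv activate before-upgrade
reboot</code></pre></div>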
<h2 id="poking-around-in-proxmox">Poking around in Proxmox</h2>
<p>But before we start grabbing a copy of <code>zedenv</code>, we have to take a
closer look at Proxmox itself in order to figure out what we may
have to adapt.</p>
<p>Basically we already know that it is generally possible to use boot
environments with ZFS and linux, so what we want is hopefully not
exactly rocket science.</p>
<p>What we are going to check is:</p>
<ol>
<li>How do we have to adapt the Proxmox VE 6 rpool?</li>
<li>How does Proxmox prepare the boot process and what do we have to
tweak to make it boot into a boot environment?</li>
</ol>
<h3 id="the-proxmox-zfs-layout">The Proxmox ZFS layout</h3>
<p>In this part we are going to take a look at how the ZFS layout is
set up by the Proxmox installer. This is because there are a few
things we have to consider when we use boot environments with
Proxmox:</p>
<ol>
<li>We do not ever want to interfere with the operation of our guest
machines: Since we have the ability to snapshot and restore
virtual machines and containers, there is really no benefit to
including them in the snapshots of our boot environments; on
the contrary, we really don&rsquo;t want to end up with our tenants&rsquo;
guests missing files just because we&rsquo;ve made a rollback.</li>
<li>Is the ZFS layout compatible with running boot environments? Not
all systems with ZFS are automatically compatible with using boot
environments; basically, if you just mount your ZFS pool as <code>/</code>, it
won&rsquo;t work.</li>
<li>Are there any directories we have to exclude from the root
dataset?</li>
</ol>
<p>So let&rsquo;s look at Proxmox:
By default after installing with ZFS root you get a pool called
<code>rpool</code> which is split up into <code>rpool/ROOT</code> as well as <code>rpool/data</code> and
looks similar to this (<code>zfs list</code>):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">rpool 4.28G 445G 104K /rpool
rpool/ROOT 2.43G 445G 96K /rpool/ROOT
rpool/ROOT/pve-1 2.43G 445G 2.43G /
rpool/data 1.84G 445G 104K /rpool/data
rpool/data/subvol-101-disk-0 831M 7.19G 831M /rpool/data/subvol-101-disk-0
rpool/data/vm-100-disk-0 1.03G 445G 1.03G -</code></pre></div>
<p><code>rpool/data</code> contains the virtual machines as well as the containers
as you can see in the output of <code>zfs list</code> above. That&rsquo;s great, we
don&rsquo;t have to manually move them. This takes care of the first
point of our checklist from above.</p>
<p>Also <code>rpool/ROOT/pve-1</code> is mounted as <code>/</code>, so we have <code>rpool/ROOT</code>, which
can potentially hold more than one snapshot of <code>/</code>. That is actually
exactly what we need in order to use boot environments; the
Proxmox team just saved us a bunch of time!</p>
<p>This only leaves the third part of our little checklist open. Which
directories are left that we don&rsquo;t want to snapshot as part of our
boot environments? We can find a pretty important one in this
context by checking <code>/etc/pve/storage.cfg</code>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">dir: local
path /var/lib/vz
content iso,vztmpl,backup
zfspool: local-zfs
pool rpool/data
sparse
content images,rootdir</code></pre></div>
<p>So while the virtual machines and the containers are part of
<code>rpool/data</code>, iso files, templates and backups are still located in
<code>rpool/ROOT/pve-1</code>. That&rsquo;s not really what we want: imagine rolling
back to a boot environment from a week ago and suddenly missing a
week&rsquo;s worth of backups; that would be pretty annoying. ISO files
as well as container templates are probably not worth keeping in
our boot environments either.</p>
<p>So let&rsquo;s take <code>/var/lib/vz</code> out of <code>rpool/ROOT/pve-1</code>. First create a
new dataset:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:/var/lib# zfs create -o mountpoint=/var/lib/vz rpool/vz
cannot mount &#39;/var/lib/vz&#39;: directory is not empty</code></pre></div>
<p>Then move over the content of <code>/var/lib/vz</code> into the newly created
and not yet mounted dataset:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">mv /var/lib/vz/ vz.old/ &amp;&amp; zfs mount rpool/vz &amp;&amp; mv vz.old/* /var/lib/vz/ &amp;&amp; rmdir vz.old</code></pre></div>
<p>If you don&rsquo;t have any images, templates or backups yet, or you just
don&rsquo;t particularly care about them, you can of course also just
remove <code>/var/lib/vz/*</code> entirely, mount <code>rpool/vz</code> and recreate the
folder structure (a small sketch of the commands follows below the listing):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# tree -a /var/lib/vz/
/var/lib/vz/
├── dump
└── template
├── cache
├── iso
└── qemu</code></pre></div>
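<p>A short sketch of what that could look like; double-check the paths against the listing above before deleting anything:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># drop the old contents, mount the new dataset and recreate the directory layout shown above
rm -rf /var/lib/vz/*
zfs mount rpool/vz
mkdir -p /var/lib/vz/dump /var/lib/vz/template/cache /var/lib/vz/template/iso /var/lib/vz/template/qemu</code></pre></div>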
<p>Ok, now that that&rsquo;s out of the way, we should in general be able to make
snapshots and roll them back without disturbing the operation of the
Proxmox server too much.</p>
<p><strong>BUT</strong>: this might not apply to your server, since there is still a
lot of other stuff in <code>/var/lib/</code> that you may want to include or
exclude from snapshots! Better be sure to check what&rsquo;s in there.</p>
<p>Also there are some other directories we might want to
exclude. There are for example <code>/tmp</code> as well as <code>/var/tmp/</code>, which
shouldn&rsquo;t contain anything that is worth keeping, but which of
course would be snapshotted as well. We can create datasets for them
too, and they should be automounted on reboot:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">zfs create -o mountpoint=/tmp rpool/tmp
zfs create -o mountpoint=/var/tmp rpool/var_tmp</code></pre></div>
<p>If you have users that can connect directly to your Proxmox host,
you might want to exclude <code>/home/</code> as well. <code>/root/</code> might be another
good candidate: you may want to keep all of your shell history
available at all times, regardless of which snapshot you&rsquo;re
currently in (see the sketch below). You can also think about whether or not you want to
have your logs, mail and probably a bunch of other things
included or excluded; I guess both variants have their use cases.</p>
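<p>A quick sketch of how that could look. The <code>rpool/home_root</code> dataset matches the one you&rsquo;ll see in the <code>zfs list</code> output below; the <code>rpool/home</code> dataset for <code>/home/</code> is only an example in case you need it:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash"># keep the root home directory (shell history etc.) out of the boot environments;
# as with /var/lib/vz above, move any existing contents over before mounting
zfs create -o mountpoint=/root rpool/home_root
# optionally do the same for regular user homes (example dataset name)
zfs create -o mountpoint=/home rpool/home</code></pre></div>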
<p>On my system <code>zfs list</code> returns something like this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">NAME USED AVAIL REFER MOUNTPOINT
rpool 5.25G 444G 104K /rpool
rpool/ROOT 1.34G 444G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.30G 444G 1.16G /
rpool/data 2.59G 444G 104K /rpool/data
rpool/home_root 7.94M 444G 7.94M /root
rpool/tmp 128K 444G 128K /tmp
rpool/var_tmp 136K 444G 136K /var/tmp
rpool/vz 1.30G 444G 1.30G /var/lib/vz</code></pre></div>
<p>At this point we&rsquo;ve made sure that:</p>
<ol>
<li>the Proxmox ZFS layout is indeed compatible with Boot
Environments pretty much out of the box</li>
<li>we moved the directories that might impact day to day operations
out of what we want to snapshot</li>
<li>we also excluded a few more directories, which is optional</li>
</ol>
<h3 id="the-boot-preparation">The Boot Preparation</h3>
<p>So after we&rsquo;ve made sure that our ZFS layout works, in this step we
have to take a closer look at how the boot process is prepared in
Proxmox. That is because, as you might have noticed, Proxmox does
this a bit differently from what you might be used to from other
linux systems.</p>
<p>As an example this is what <code>lsblk</code> looks like on my local machine:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">nvme0n1 259:0 0 477G 0 disk
├─nvme0n1p1 259:1 0 2G 0 part /boot/efi
└─nvme0n1p2 259:2 0 475G 0 part
└─crypt 253:0 0 475G 0 crypt
├─system-swap 253:1 0 16G 0 lvm [SWAP]
└─system-root 253:2 0 100G 0 lvm /</code></pre></div>
<p>And this is <code>lsblk</code> on Proxmox:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 465.3G 0 part
└─cryptrpool1 253:0 0 465.3G 0 crypt
sdd 8:48 0 465.8G 0 disk
├─sdd1 8:49 0 1007K 0 part
├─sdd2 8:50 0 512M 0 part
└─sdd3 8:51 0 465.3G 0 part
└─cryptrpool2 253:1 0 465.3G 0 crypt</code></pre></div>
<p>Notice how there is no mounted EFI System Partition? That&rsquo;s
because both<label class="margin-toggle sidenote-number"></label><span class="sidenote"> Actually the UUIDs of all used ESP Partitions are stored in /etc/kernel/pve-efiboot-uuids </span> of the
/dev/sdX2 devices, which belong to the drives holding my mirrored <code>rpool</code>,
contain a valid ESP. Also, Proxmox does not mount these
partitions by default but rather encourages the use of its
<code>pve-efiboot-tool</code>, which then takes care of putting a valid boot
configuration on all involved drives, so you can boot off any of
them.</p>
<p>This is not at all bad design, on the contrary. It is however
noteworthy, because it is a bit different from what other systems
with boot environments are using.</p>
<p>Here is a quick recap of how the boot process is prepared in
Proxmox:</p>
<ol>
<li>Initially something happens that requires an update of the
bootloader configuration (e.g. a new kernel is installed, or
you&rsquo;ve just set up full disk encryption or changed something
in the initramfs)</li>
<li>This leads to <code>/usr/sbin/pve-efiboot-tool refresh</code> being run
(either automated or manually), which at some point executes
<code>/etc/kernel/postinst.d/zz-pve-efiboot</code>, which is the script that
loops over the ESPs (which are defined by their UUID in
<code>/etc/kernel/pve-efiboot-uuids</code>), mounts them and generates the
boot loader configuration on them according to what Proxmox (or
you as the user) has <a href="https://pve.proxmox.com/wiki/Host%5FBootloader">defined as kernel versions to keep</a>. The
bootloader configuration is created for every kernel and
configured with the kernel commandline options from
<code>/etc/kernel/cmdline</code>.</li>
<li>On reboot you can use any harddrive that holds an EFI System
Partition to boot from.</li>
</ol>
<p>Incidentally the <code>/etc/kernel/cmdline</code> file is also the one we
configured in the previous post in order to enable remote
decryption on a fully encrypted Proxmox host. Apart from the
options we added to it last time, it also contains another very
interesting one:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root=ZFS=rpool/ROOT/pve-1</code></pre></div>
<h3 id="a-simple-proof-of-concept">A Simple Proof of Concept</h3>
<p>At this point we already have everything we need:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">zfs snapshot rpool/ROOT/pve-1@test
zfs clone rpool/ROOT/pve-1@test rpool/ROOT/pve-2
zfs <span style="color:#366">set</span> <span style="color:#033">mountpoint</span><span style="color:#555">=</span>/ rpool/ROOT/pve-2
sed -i <span style="color:#c30">&#39;s/pve-1/pve-2/&#39;</span> /etc/kernel/cmdline
pve-efiboot-tool refresh
reboot</code></pre></div>
<p>Tadaa!</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# mount | grep rpool
rpool/ROOT/pve-2 on / type zfs (rw,relatime,xattr,noacl)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/var_tmp on /var/tmp type zfs (rw,noatime,xattr,noacl)
rpool/home_root on /root type zfs (rw,noatime,xattr,noacl)
rpool/tmp on /tmp type zfs (rw,noatime,xattr,noacl)
rpool/vz on /var/lib/vz type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
rpool/data/subvol-101-disk-0 on /rpool/data/subvol-101-disk-0 type zfs (rw,noatime,xattr,posixacl)</code></pre></div>
<p>Congratulations, you&rsquo;ve just created your first boot environment!
If you&rsquo;re not convinced yet, just install something such as <code>htop</code>,
enjoy the colors for a bit and run:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">sed -i &#39;s/pve-2/pve-1/&#39; /etc/kernel/cmdline
pve-efiboot-tool refresh
reboot</code></pre></div>
<p>And finally try to run <code>htop</code> again. Notice how it&rsquo;s not only gone,
in fact it was never even there in the first place, at least
from the system&rsquo;s point of view! Let that sink in for a moment. You
want this. <span class="marginnote"> At this point you might want to take a small break, grab another cup of coffee, lean back and remember this one time, back in the day when you were just getting started with all this operations stuff: it was almost beer o&rsquo;clock and before going home you just wanted to apply this one tiny little update, which of course led to the whole server breaking. Remember how, when you were refilling your coffee cup for the third time, this old Solaris guru walked by on his way home with a mysterious smile on his face. Yeah, he knew you were about to spend half of the night there fixing the issue and reinstalling everything; in fact he probably had a similar issue that same day, but then decided to just roll back, go home a bit early and take care of it the next day. </span></p>
<h2 id="from-one-proof-of-concept-to-another">From one Proof of Concept to Another</h2>
<p>So at this point we know how to set up a boot environment by hand.
That&rsquo;s nice, but currently we can only hop back and forth between
two boot environments, which is not cool enough yet.</p>
<p>We basically need some tooling which we can use to make everything
work together nicely.</p>
<p>So in this section we are going to look into tooling as well as
into how we may be able to make Proxmox play well together with the
boot environment manager of our (only) choice, <code>zedenv</code>.</p>
<p>Our new objective is to look at what we need to do in order to
enable us to select and start any number of boot environments from
the boot manager.</p>
<h3 id="sidenote-the-proxmox-esp-size">Sidenote: The Proxmox ESP Size</h3>
<p>But first a tiny bit of math: since Proxmox uses systemd-boot, the
kernel and initrd are stored in the EFI System Partition, which
in a normal installation is 512MB in size. That should be enough
for the default case, where Proxmox stores only a handful of
kernels to boot from.</p>
<p>In our case however we might want to be able to access a higher
number of kernels, so we can travel back in time in order to also
start old boot environments.</p>
<p>A typical pair of kernel and initrd seems to be about 50MB in
size, so we can currently store about 10 different kernels at a
time.</p>
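<p>If you want to verify these numbers on your own host, you can
briefly mount one of the ESPs read-only and look at the per-kernel
directories. The device name <code>/dev/sda2</code> below is just an
assumption, use whatever <code>lsblk</code> shows for your ESPs:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">mkdir -p /tmp/esp-check
mount -o ro /dev/sda2 /tmp/esp-check
# size of each kernel + initrd pair
du -sh /tmp/esp-check/EFI/proxmox/*
# free space left on the ESP
df -h /tmp/esp-check
umount /tmp/esp-check</code></pre></div>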
<p>If we want to increase the size of the ESP on an existing
installation however, we might be out of luck, since ZFS does not
like to shrink. So if you&rsquo;re in the situation of setting up a fresh
Proxmox host, you can simply limit the size of the ZFS partition in
the advanced options of the installer <del>you might want</del> <del>to plug in a USB Stick (with less
storage than your drives) or</del> <del>something similar and create a
mirrored ZFS RAID1 with this</del> <del>device and the other two drives
which you really want to use for</del> <del>storage</del>. This way the
resulting ZFS partition will be smaller than the drives you
actually use, and after the initial installation you can just:</p>
<ol>
<li>Remove the first drive from <code>rpool</code>, delete the ZFS partition,
increase the size of the ESP to whatever you want, recreate a
ZFS partition and readd it to <code>rpool</code></li>
<li>wait until <code>rpool</code> has resilvered the drive</li>
<li>repeat this with your second drive (a rough sketch of this cycle follows below).</li>
</ol>
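<p>For illustration, here is a rough sketch of what one round of that
drive dance could look like. This is not an official Proxmox
procedure; it assumes a plain (unencrypted) mirror and uses the
placeholder device names <code>/dev/sda</code> and <code>/dev/sdb</code>. On an
encrypted setup like the one from the previous post you would
operate on the crypt mappings instead:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text"># remove the ZFS partition of the first drive from the mirror
zpool detach rpool /dev/sda3
# repartition the drive: grow the ESP (partition 2) and recreate the
# ZFS partition (partition 3), e.g. with gdisk/sgdisk, then re-add it
zpool attach rpool /dev/sdb3 /dev/sda3
# wait until the resilver has finished before touching the next drive
zpool status rpool</code></pre></div>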
<h3 id="zedenv-a-boot-environment-manager"><code>zedenv</code>: A Boot Environment Manager</h3>
<p>Now let&rsquo;s install <code>zedenv</code> <label class="margin-toggle sidenote-number"></label><span class="sidenote"> Be sure to read the <a href="https://zedenv.readthedocs.io">documentation</a> at some point in time. Also check out <a href="https://ramsdenj.com">John Ramsden&rsquo;s blog</a>, which contains a bit more info about zedenv, working Linux ZFS configurations and a bunch of other awesome stuff </span>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">root@caliban:~# apt install git python3-venv
root@caliban:~# mkdir opt
root@caliban:~# cd opt
root@caliban:~# git clone https://github.com/johnramsden/pyzfscmds
root@caliban:~# git clone https://github.com/johnramsden/zedenv
root@caliban:~/opt# python3.7 -m venv zedenv-venv
root@caliban:~/opt# . zedenv-venv/bin/activate
(zedenv-venv) root@caliban:~/opt# cd pyzfscmds/
(zedenv-venv) root@caliban:~/opt/pyzfscmds# python setup.py install
(zedenv-venv) root@caliban:~/opt/pyzfscmds# cd ../zedenv
(zedenv-venv) root@caliban:~/opt/zedenv# python setup.py install</code></pre></div>
<p>Now <code>zedenv</code> should be installed into our new <code>zedenv-venv</code>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# zedenv --help
Usage: zedenv [OPTIONS] COMMAND [ARGS]...
ZFS boot environment manager cli
Options:
  --version
  --plugins  List available plugins.
  --help     Show this message and exit.
Commands:
  activate  Activate a boot environment.
  create    Create a boot environment.
  destroy   Destroy a boot environment or snapshot.
  get       Print boot environment properties.
  list      List all boot environments.
  mount     Mount a boot environment temporarily.
  rename    Rename a boot environment.
  set       Set boot environment properties.
  umount    Unmount a boot environment.</code></pre></div>
<p>As you can see, <code>systemd-boot</code> seems to be supported out of the box:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# zedenv --plugins
Available plugins:
systemdboot</code></pre></div>
<p>But since we are using Proxmox, the combination of <code>systemd-boot</code>
and <code>zedenv</code> is actually not really supported. Remember that
Proxmox doesn&rsquo;t actually mount the EFI System Partitions? Well,
<code>zedenv</code> makes the assumption that there is only one ESP, and
that it is mounted somewhere at all times.</p>
<p>Nonetheless, let&rsquo;s explore <code>zedenv</code> a bit so you can see what using a
boot environment manager looks like. Let&rsquo;s <code>list</code> the available boot
environments:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# zedenv list
Name Active Mountpoint Creation
pve-1 NR / Mon-Aug-19-1:27-2019
(zedenv-venv) root@caliban:~# zfs list -r rpool/ROOT
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT 1.17G 444G 96K /rpool/ROOT
rpool/ROOT/pve-1 1.17G 444G 1.17G /</code></pre></div>
<p>Before we can create new boot environments, we have to outwit
<code>zedenv</code> on our Proxmox host: we have to set the bootloader to
systemd-boot, and because <code>zedenv</code> assumes that the ESP is
mounted, we also have to make it believe that this is the case
(<code>/tmp/efi</code> is a reasonably sane path for this since we
won&rsquo;t really be using <code>zedenv</code> to configure <code>systemd-boot</code> here):</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">zedenv set org.zedenv:bootloader=systemdboot
mkdir /tmp/efi
zedenv set org.zedenv.systemdboot:esp=/tmp/efi</code></pre></div>
<p>We can now <code>create</code> new boot environments:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# zedenv create default-000
(zedenv-venv) root@caliban:~# zedenv list
Name Active Mountpoint Creation
pve-1 NR / Mon-Aug-19-1:27-2019
default-000 - Sun-Aug-25-19:44-2019
(zedenv-venv) root@caliban:~# zfs list -r rpool/ROOT
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT 1.17G 444G 96K /rpool/ROOT
rpool/ROOT/default-000 8K 444G 1.17G /
rpool/ROOT/pve-1 1.17G 444G 1.17G /</code></pre></div>
<p>Notice the <code>NR</code>? This shows us that the <code>pve-1</code> boot environment is
active now (<code>N</code>) and that it will still be the active one after the
next reboot (<code>R</code>).</p>
<p>We also get the mountpoint of each boot environment as well as the
date it was created, so we get a bit more information than just the
name of the boot environment.</p>
<p>On a fully supported system we could now also <code>activate</code> the
<code>default-000</code> boot environment that we&rsquo;ve just created, and we
would then get output similar to this, showing us that
<code>default-000</code> would be active on the next
reboot<span class="marginnote"> <code>zedenv</code> can also <code>destroy</code>, <code>mount</code> and <code>umount</code> boot environments as well as <code>get</code> and <code>set</code> some ZFS-specific options, but right now what we want to focus on is how to get activation working with Proxmox. </span>:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# zedenv activate default-000
(zedenv-venv) root@caliban:~# zedenv list
Name Active Mountpoint Creation
pve-1 N / Mon-Aug-19-1:27-2019
default-000 R - Sun-Aug-25-19:44-2019</code></pre></div>
<p>Since we are on Proxmox however, instead we&rsquo;ll get the following
error message:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# zedenv activate default-000
WARNING: Running activate without a bootloader. Re-run with a default bootloader, or with the &#39;--bootloader/-b&#39; flag. If you plan to manually edit your bootloader config this message can safely be ignored.</code></pre></div>
<p>At this point you have seen what a typical boot environment manager
looks like and you now know what <code>create</code> and <code>activate</code> will usually
do.</p>
<h3 id="systemd-boot-and-the-efi-system-partitions"><code>systemd-boot</code> and the EFI System Partitions</h3>
<p>Next we&rsquo;ll take a closer look at the content of these EFI System
Partitions and the files systemd-boot uses to start our system.
So let&rsquo;s see what is stored on an ESP in Proxmox:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:~# mount /dev/sda2 /boot/efi/
(zedenv-venv) root@caliban:~# tree /boot/efi
.
├── EFI
│   ├── BOOT
│   │   └── BOOTX64.EFI
│   ├── proxmox
│   │   ├── 5.0.15-1-pve
│   │   │   ├── initrd.img-5.0.15-1-pve
│   │   │   └── vmlinuz-5.0.15-1-pve
│   │   └── 5.0.18-1-pve
│   │       ├── initrd.img-5.0.18-1-pve
│   │       └── vmlinuz-5.0.18-1-pve
│   └── systemd
│       └── systemd-bootx64.efi
└── loader
    ├── entries
    │   ├── proxmox-5.0.15-1-pve.conf
    │   └── proxmox-5.0.18-1-pve.conf
    └── loader.conf</code></pre></div>
<p>So we have the kernels and initrd in <code>EFI/proxmox</code> and some
configuration files in <code>loader/</code>.</p>
<p>The <code>loader.conf</code> file looks like this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:/boot/efi# cat loader/loader.conf
timeout 3
default proxmox-*</code></pre></div>
<p>We have a 3-second timeout in <code>systemd-boot</code> and the default boot
entry has to begin with the string <code>proxmox</code>. Nothing too
complicated here.</p>
<p>Apart from that, we have the <code>proxmox-5.X.X-pve.conf</code> files which we
already know from last time (they are what is generated by the
<code>/etc/kernel/postinst.d/zz-pve-efiboot</code> script). They look like
this:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">(zedenv-venv) root@caliban:/boot/efi# cat loader/entries/proxmox-5.0.18-1-pve.conf
title Proxmox Virtual Environment
version 5.0.18-1-pve
options ip=[...] cryptdevice=UUID=[...] cryptdevice=UUID=[...] root=ZFS=rpool/ROOT/pve-1 boot=zfs
linux /EFI/proxmox/5.0.18-1-pve/vmlinuz-5.0.18-1-pve
initrd /EFI/proxmox/5.0.18-1-pve/initrd.img-5.0.18-1-pve</code></pre></div>
<p>So basically they just point to the kernel and initrd in the
<code>EFI/proxmox</code> directory and start the kernel with the right <code>root</code>
option so that the correct boot environment is mounted.</p>
<p>At this point it makes sense to reiterate what a boot environment
is. Up until now we have defined a boot environment loosely as a
file system snapshot we can boot into. Now we have to
refine the &ldquo;we can boot into&rdquo; part of this definition: a boot
environment is a filesystem snapshot together with the bootloader
configuration as well as the kernel and initrd files from the
moment the snapshot was taken.</p>
<p>The boot environment of <code>pve-1</code> consists specifically of the
following files from the ESP partition:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">.
├── EFI
│   ├── proxmox
│   │   ├── 5.0.15-1-pve
│   │   │   ├── initrd.img-5.0.15-1-pve
│   │   │   └── vmlinuz-5.0.15-1-pve
│   │   └── 5.0.18-1-pve
│   │       ├── initrd.img-5.0.18-1-pve
│   │       └── vmlinuz-5.0.18-1-pve
└── loader
    └── entries
        ├── proxmox-5.0.15-1-pve.conf
        └── proxmox-5.0.18-1-pve.conf</code></pre></div>
<p>If you head over to the part of the <a href="https://zedenv.readthedocs.io/en/latest/plugins.html#systemdboot">zedenv documentation on
systemd-boot</a>, you&rsquo;ll see that it proposes creating an <code>/env</code> directory
on the ESP that holds all of the boot environment specific files.
Coupled with a bit of bind-mount magic this tricks the underlying
system into always finding the right files inside of <code>/boot</code>, when
actually only the files that belong to the currently active boot
environment are mounted.</p>
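<p>To make that idea a bit more concrete, here is a rough sketch of
what such a setup could look like in <code>/etc/fstab</code>. This is not the
literal configuration from the zedenv documentation; the UUID, the
mountpoint <code>/mnt/efi</code> and the boot environment directory name are
placeholders:</p>
<div class="highlight"><pre style="background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text"># mount the ESP at a permanent location
UUID=XXXX-XXXX                   /mnt/efi  vfat  defaults          0 0
# bind-mount the directory of the active boot environment over /boot
/mnt/efi/env/zedenv-default-000  /boot     none  rw,defaults,bind  0 0</code></pre></div>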
<p>This does not apply to our Proxmox situation; for example, there is
no permanently mounted ESP. Also, the <code>pve-efiboot-tool</code> manages the kernel
versions that are available in the <code>EFI/proxmox/</code> directory, so
unless they are marked as manually installed (which you can do in
Proxmox) some of the kernel versions will disappear at some point
in time, rendering the boot environment incomplete.</p>
<h3 id="making-zedenv-and-proxmox-play-well-together">Making <code>zedenv</code> and Proxmox play well together</h3>
<p>I should probably point out here that this part is more of a
proposition of how this could work than necessarily a good
solution (it does work though). I&rsquo;m pretty new to Proxmox and not
at all an expert when it comes to boot environments, so better
take everything here with a few grains of salt.</p>
<p>As we&rsquo;ve learned in the previous part, <code>zedenv</code> is pretty awesome,
but by design not exactly aimed at working with Proxmox. That
being said, <code>zedenv</code> is actually written with plugins in mind; I&rsquo;ve
skimmed the code and there is a bunch of pre- and post-hooking
going on, so I think it could be possible to just set up some sort
of Proxmox plugin for <code>zedenv</code>. Since I&rsquo;m not a <code>python</code> guy and
there&rsquo;s of course also the option to add support to this from the