  
Always use LVM, as it has many benefits. On OpenSuSE, btrfs is the best filesystem if you disable quotas on data partitions.

==== Raided EFI-BOOT ====
Nowadays, UEFI is always the best choice for booting. UEFI boot is quite straightforward: take some device, give it a GPT partition table, create a partition (I would make it at least 500 MB today, better 1 GB), format that partition with FAT32 and mark it as EFI boot via the partition flag. That's all. After an OS has been installed to that partition in the UEFI way, the firmware can load those files and start the OS.
  
Unfortunately, the designers of UEFI forgot that if you are not using hardware RAID (which I don't recommend, as you lose the ability to move hard disks between machines), there is no standard way to put that partition into a RAID: FAT32 is not suitable for it, because it would overwrite the parts of the partition that MD RAID1 needs to store its metadata.
  
Fortunately, the designers of the OSS software RAID were smarter and found a way to work around that: they made a special version of the MD metadata, called 1.0, which stores its metadata at the end of the partition, so it does not interfere with FAT32. FAT32 can work as usual, and the MD tools can still detect the devices as RAID1.
  
So I would suggest using two disks, both partitioned with GPT and with same-sized EFI partitions (make them about 500 MB in size, so there is room for BIOS or UCODE updates placed there by a firmware updater), and setting up software RAID on them before creating the FAT32 filesystem. E.g.:
  
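A sketch of how such an array can be created (the device names ''/dev/sdX1'' and ''/dev/sdY1'' are placeholders for your two EFI partitions):

<code>
# store the MD metadata at the END of the members (format 1.0),
# so the firmware still sees a plain FAT32 partition:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 /dev/sdY1

# only then put the FAT32 filesystem on the array:
mkfs.vfat -F 32 /dev/md0
</code>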
The important part is ''metadata=1.0'': this format was designed especially to fit the needs of RAID1 on FAT/EFI systems.
  
You then install your Linux boot manager / EFI boot files to that md device. If it is not found at the beginning of the installation, scan for RAID devices, or just create the array during installation with the command above.
  
  
  
LVM is a powerful partition-management layer and should always be used whenever the hardware is not low-end. The **KDE Partitioning-Tool** (which requires a Plasma/KDE-Desktop-compatible setup) is very intuitive and offers a lot of flexibility when handling partitions, like adding more disk space or moving partitions; the console tools offer good functionality as well. The OpenSuSE installer optionally offers an LVM-styled system setup (not by default). If you can: use it.

=== Mirror-Raided LVM Volumes (RAID1) ===

Nowadays, MD RAID1 or RAID5 for a system without LVM is outdated. Those features are integrated into LVM, so use it!
  
For our setup we want the Linux base system on RAID1, because RAID1 offers the flexibility that one physical device alone will still work without any reconfiguration. If you ever want to run the system from a single disk, this is really nice.
  
So first, create a partition of maximum size on both disks. Then create a volume group containing those partitions, and finally create a RAID1 logical volume in it (for example):
  
<code>
vgcreate vgsystem /dev/sdX1 /dev/sdY1
lvcreate --type raid1 -m 1 -L 100G -n lvroot vgsystem
</code>
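Right after creation the mirror starts its initial synchronisation; you can watch the progress (assuming the ''vgsystem'' name from the example above):

<code>
# the Cpy%Sync column shows the mirror's synchronisation progress
lvs -a vgsystem
</code>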
  
where i equals the number of devices holding data (not including parity storage). For example, a RAID5 of three disks stores data on the equivalent of two of them, so the usable capacity is twice the single-disk size.
  
=== Useful Commands ===
  
<code>
# lvs -P -a -o +devices,segtype
  LV                  VG       Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                        Type
  lvbackup            vgdata   rwi-aor---    4.40t                                    100.00           lvbackup_rimage_0(0),lvbackup_rimage_1(0),lvbackup_rimage_2(0) raid5
  [lvbackup_rimage_0] vgdata   iwi-aor---    2.20t                                                     /dev/sde1(377061)                                              linear
  [lvbackup_rimage_1] vgdata   iwi-aor---    2.20t                                                     /dev/sda1(377061)                                              linear
  [lvbackup_rimage_2] vgdata   iwi-aor---    2.20t                                                     /dev/sdd1(377061)                                              linear
  [lvbackup_rmeta_0]  vgdata   ewi-aor---    4.00m                                                     /dev/sde1(377060)                                              linear
  [lvbackup_rmeta_1]  vgdata   ewi-aor---    4.00m                                                     /dev/sda1(377060)                                              linear
  [lvbackup_rmeta_2]  vgdata   ewi-aor---    4.00m                                                     /dev/sdd1(377060)                                              linear
  lvdata              vgdata   rwi-aor--- 1007.30g                                    100.00           lvdata_rimage_0(0),lvdata_rimage_1(0),lvdata_rimage_2(0)       raid5
  [lvdata_rimage_0]   vgdata   iwi-aor---  503.65g                                                     /dev/sde1(1)                                                   linear
  [lvdata_rimage_1]   vgdata   iwi-aor---  503.65g                                                     /dev/sda1(1)                                                   linear
  [lvdata_rimage_2]   vgdata   iwi-aor---  503.65g                                                     /dev/sdd1(1)                                                   linear
  [lvdata_rmeta_0]    vgdata   ewi-aor---    4.00m                                                     /dev/sde1(0)                                                   linear
  [lvdata_rmeta_1]    vgdata   ewi-aor---    4.00m                                                     /dev/sda1(0)                                                   linear
  [lvdata_rmeta_2]    vgdata   ewi-aor---    4.00m                                                     /dev/sdd1(0)                                                   linear
  lvdocker            vgdata   rwi-aor---    1.89t                                    100.00           lvdocker_rimage_0(0),lvdocker_rimage_1(0),lvdocker_rimage_2(0) raid5
  [lvdocker_rimage_0] vgdata   iwi-aor---  969.23g                                                     /dev/sde1(128936)                                              linear
  [lvdocker_rimage_1] vgdata   iwi-aor---  969.23g                                                     /dev/sda1(128936)                                              linear
  [lvdocker_rimage_2] vgdata   iwi-aor---  969.23g                                                     /dev/sdd1(128936)                                              linear
  [lvdocker_rmeta_0]  vgdata   ewi-aor---    4.00m                                                     /dev/sde1(128935)                                              linear
  [lvdocker_rmeta_1]  vgdata   ewi-aor---    4.00m                                                     /dev/sda1(128935)                                              linear
  [lvdocker_rmeta_2]  vgdata   ewi-aor---    4.00m                                                     /dev/sdd1(128935)                                              linear
  lvhome              vgsystem rwi-aor---   94.93g                                    100.00           lvhome_rimage_0(0),lvhome_rimage_1(0)                          raid1
  [lvhome_rimage_0]   vgsystem iwi-aor---   94.93g                                                     /dev/sdc2(166910)                                              linear
  [lvhome_rimage_1]   vgsystem iwi-aor---   94.93g                                                     /dev/sdb2(166910)                                              linear
  [lvhome_rmeta_0]    vgsystem ewi-aor---    4.00m                                                     /dev/sdc2(166909)                                              linear
  [lvhome_rmeta_1]    vgsystem ewi-aor---    4.00m                                                     /dev/sdb2(166909)                                              linear
  lvroot              vgsystem rwi-aor---   97.52g                                    100.00           lvroot_rimage_0(0),lvroot_rimage_1(0)                          raid1
  [lvroot_rimage_0]   vgsystem iwi-aor---   97.52g                                                     /dev/sdc2(1)                                                   linear
  [lvroot_rimage_1]   vgsystem iwi-aor---   97.52g                                                     /dev/sdb2(1)                                                   linear
  [lvroot_rmeta_0]    vgsystem ewi-aor---    4.00m                                                     /dev/sdc2(0)                                                   linear
  [lvroot_rmeta_1]    vgsystem ewi-aor---    4.00m                                                     /dev/sdb2(0)                                                   linear
</code>
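If one leg of such a raid LV fails, LVM can rebuild it onto another PV. A sketch, using the VG/LV names from the listing above (''/dev/sdZ2'' is a placeholder for the replacement disk's partition):

<code>
# add the replacement disk to the volume group
vgextend vgsystem /dev/sdZ2

# rebuild the failed image onto free extents (prompts before resyncing)
lvconvert --repair vgsystem/lvroot
</code>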
  
</code>
  
== Resizing logical Volumes with a mounted Filesystem ==
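A minimal sketch of the common case, assuming the ''vgsystem''/''lvroot'' names from the examples above; with ext4, xfs or btrfs this works online:

<code>
# grow the LV by 10 GiB and resize the mounted filesystem in one step
# (-r is short for --resizefs)
lvextend -r -L +10G /dev/vgsystem/lvroot
</code>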
  
That means you can only set the defaults per user or per group, and only for all files or all directories at once.
  
=== FACL: use in batch and recursively ===
FACLs also have good ways to be applied to whole directories; check out:
  
''setfacl'' has a //recursive// option (''-R''), just like ''chmod'':
  
-R, --recursive    Apply operations to all files and directories recursively. This option cannot be mixed with ''--restore''.
=== FACL: handle execute-bit with files and directories ===
  
…it also allows the use of the capital-X ''X'' **permission**, which means: execute only if the file is a directory or already has execute permission for some user.
  
so doing the following should work:
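A sketch of such a command (''colleague'' is a placeholder for the user being granted access):

<code>
# grant colleague read/write everywhere, execute only on directories
# and on files that are already executable for someone:
setfacl -R -m u:colleague:rwX .
</code>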
  
''setfacl -R -m **d**:u:colleague:rwX .''
  
==== Last words ====
 • content/serverbasics.txt
 • Last modified: 2024/04/20 13:02
 • by Daniel