These settings are advice to think about when setting up a new Linux machine (here on an openSUSE distribution, which I really like).

===== Subpages =====

<catlist content:serverbasics -nohead -noNSInBold -sortAscending -sortByTitle -noAddPageButton -maxDepth:1>

===== Mountpoints =====
By default openSUSE will set some conservative mount options that are safe, but not the best choice for home office use - and the alternatives might also improve company servers. Here are some proposals to think about.
  
Basically I would recommend using UEFI-only mode in the BIOS and a GPT partition table on at least two hard drives. The Linux root system AND the EFI partitions should be mirrored (RAID1) for failsafety, which makes it possible to boot the system from ONE disk (not possible with RAID5).
  
The data (like home and program data) can use RAID5 with 3 or more disks.

Always use LVM, as it has many benefits. On openSUSE, btrfs is the best filesystem if you disable quotas on data partitions.
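
Disabling quotas on a mounted btrfs data partition is a one-liner (the mount point is just an example):

<code>
btrfs quota disable /srv/data
</code>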

==== Example Setup ====

My small home office server described here will have 5 disks:

  * 2x SSD with 2 TB each
  * 3x HDD with 4 TB each

My setup will look like this:

Both SSDs will have the same layout:

  * 1x 1GB RAID1 FAT32 EFIBOOT
  * 1x 100%FREE LVM2 PV in volume group vgssd
      * 100GB RAID1 lvroot btrfs,compress=zstd:3 root
      * 50GB RAID1 lvmariadb xfs for the docker service mariadb
      * space left free for other high-performance services or growth

The HDDs will have:

  * 1x 100%FREE LVM2 PV in volume group vgdata
      * 1x 100GB RAID5 xfs, home and docker services
      * 1x 4.4TB RAID5 lvbackup btrfs,compress=zstd:7 for internal daily backups

==== Raided EFI-BOOT ====

Nowadays, UEFI is always the best choice for booting. UEFI boot is quite straightforward: you take some device, give it a GPT partition table, create a partition (I would take at least 500 MB today, better 1 GB), format that partition with FAT32 and mark it as EFI boot via the partition flag. That is usually all for a small office system. Once an OS has been installed to that partition the UEFI way, the BIOS can load its files and start the OS.
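
A rough sketch of that single-disk variant (''/dev/sdX'' is a placeholder; ''sgdisk'' and ''mkfs.vfat'' are assumed to be available):

<code>
# create a 1 GB partition of type EF00 ("EFI System") on a GPT disk
sgdisk -n 1:0:+1G -t 1:EF00 /dev/sdX

# format it with FAT32
mkfs.vfat -F32 /dev/sdX1
</code>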

But: unfortunately, the designers of UEFI forgot that if you are not using hardware RAID (which I don't recommend, as you lose the ability to move hard disks between machines), there is no standard way to mirror that partition: FAT32 is not suitable for it, because it would overwrite the parts of the partition that MD RAID1 needs to store its metadata.

Fortunately, the designers of the OSS software RAID were smarter and found a way to work around that: they made a special version of the MD metadata, called v1.0, which stores its metadata at the end of the partition - so it does not interfere with FAT32. FAT32 can work as usual, and the MD tools are still able to detect the devices as RAID1.

But still - LVM will not work in this case. The MD partitions and the RAID1 need to stay outside of the LVM partition.

So I would suggest using two disks, both partitioned with GPT and with same-sized EFI partitions (as said, at least 500 megabytes in size, to also store BIOS or ucode updates for the firmware updater), and setting up the software RAID on them before creating the FAT32 filesystem. E.g.:
  
<code>
# create the mirror with metadata format 1.0 (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 /dev/sdY1

# then create the FAT32 filesystem on the raid device
mkfs.vfat -F32 /dev/md0
</code>

The important part is ''metadata=1.0'' - this format was especially designed to fit the needs of RAID1 on FAT/EFI systems.

You then install your Linux boot manager / EFIBOOT to that MD device. If it is not found at the beginning of the installation, scan for RAID devices or just create it during the installation with the line above.
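
If you need to trigger that scan manually, assembling all arrays found on the disks typically means:

<code>
mdadm --assemble --scan
</code>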
  
  
==== LVM ====
  
LVM is a powerful partition-management layer and should always be used when the hardware is not low-end. If you can use the **KDE Partitioning Tool** (which means having a Plasma/KDE-compatible desktop), it is very intuitive and opens up a lot of flexibility when handling partitions, like adding more disk space or moving partitions; but the console tools offer good functionality as well. openSUSE optionally offers an LVM-styled system setup during installation (not by default). If you can: use it.

=== Mirror-Raided LVM Volumes (RAID1) ===

Nowadays, MD RAID1 or RAID5 for a system without LVM is outdated. Those features are integrated into LVM - so use it!

For our setup we want the Linux base system on RAID1, because RAID1 offers the flexibility that one physical device will keep working on its own, without reconfiguration. If you want to be able to run the system from one disk, this is really nice.

So first, create a partition of maximum size on both disks. Then create a volume group containing those partitions and finally create a RAID1 on it (for example):
  
<code>
vgcreate vgsystem /dev/sdX1 /dev/sdY1
lvcreate -m1 --type raid1 -L 100G -n lvroot vgsystem
</code>
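
For RAID5 on three or more disks the call is similar - a sketch, with volume group, size and name as examples:

<code>
lvcreate --type raid5 -i 2 -L 100G -n lvdata vgdata
</code>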

where ''i'' equals the number of devices holding data (not including the parity storage)

=== Useful Commands ===
  
<code>
# lvs -P -a -o +devices,segtype
  LV                  VG       Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                         Type
  lvbackup            vgdata   rwi-aor---    4.40t                                    100.00           lvbackup_rimage_0(0),lvbackup_rimage_1(0),lvbackup_rimage_2(0) raid5
  [lvbackup_rimage_0] vgdata   iwi-aor---    2.20t                                                     /dev/sde1(377061)                                               linear
  [lvbackup_rimage_1] vgdata   iwi-aor---    2.20t                                                     /dev/sda1(377061)                                               linear
  [lvbackup_rimage_2] vgdata   iwi-aor---    2.20t                                                     /dev/sdd1(377061)                                               linear
  [lvbackup_rmeta_0]  vgdata   ewi-aor---    4.00m                                                     /dev/sde1(377060)                                               linear
  [lvbackup_rmeta_1]  vgdata   ewi-aor---    4.00m                                                     /dev/sda1(377060)                                               linear
  [lvbackup_rmeta_2]  vgdata   ewi-aor---    4.00m                                                     /dev/sdd1(377060)                                               linear
  lvdata              vgdata   rwi-aor--- 1007.30g                                    100.00           lvdata_rimage_0(0),lvdata_rimage_1(0),lvdata_rimage_2(0)        raid5
  [lvdata_rimage_0]   vgdata   iwi-aor---  503.65g                                                     /dev/sde1(1)                                                    linear
  [lvdata_rimage_1]   vgdata   iwi-aor---  503.65g                                                     /dev/sda1(1)                                                    linear
  [lvdata_rimage_2]   vgdata   iwi-aor---  503.65g                                                     /dev/sdd1(1)                                                    linear
  [lvdata_rmeta_0]    vgdata   ewi-aor---    4.00m                                                     /dev/sde1(0)                                                    linear
  [lvdata_rmeta_1]    vgdata   ewi-aor---    4.00m                                                     /dev/sda1(0)                                                    linear
  [lvdata_rmeta_2]    vgdata   ewi-aor---    4.00m                                                     /dev/sdd1(0)                                                    linear
  lvdocker            vgdata   rwi-aor---    1.89t                                    100.00           lvdocker_rimage_0(0),lvdocker_rimage_1(0),lvdocker_rimage_2(0) raid5
  [lvdocker_rimage_0] vgdata   iwi-aor---  969.23g                                                     /dev/sde1(128936)                                               linear
  [lvdocker_rimage_1] vgdata   iwi-aor---  969.23g                                                     /dev/sda1(128936)                                               linear
  [lvdocker_rimage_2] vgdata   iwi-aor---  969.23g                                                     /dev/sdd1(128936)                                               linear
  [lvdocker_rmeta_0]  vgdata   ewi-aor---    4.00m                                                     /dev/sde1(128935)                                               linear
  [lvdocker_rmeta_1]  vgdata   ewi-aor---    4.00m                                                     /dev/sda1(128935)                                               linear
  [lvdocker_rmeta_2]  vgdata   ewi-aor---    4.00m                                                     /dev/sdd1(128935)                                               linear
  lvhome              vgsystem rwi-aor---   94.93g                                    100.00           lvhome_rimage_0(0),lvhome_rimage_1(0)                           raid1
  [lvhome_rimage_0]   vgsystem iwi-aor---   94.93g                                                     /dev/sdc2(166910)                                               linear
  [lvhome_rimage_1]   vgsystem iwi-aor---   94.93g                                                     /dev/sdb2(166910)                                               linear
  [lvhome_rmeta_0]    vgsystem ewi-aor---    4.00m                                                     /dev/sdc2(166909)                                               linear
  [lvhome_rmeta_1]    vgsystem ewi-aor---    4.00m                                                     /dev/sdb2(166909)                                               linear
  lvroot              vgsystem rwi-aor---   97.52g                                    100.00           lvroot_rimage_0(0),lvroot_rimage_1(0)                           raid1
  [lvroot_rimage_0]   vgsystem iwi-aor---   97.52g                                                     /dev/sdc2(1)                                                    linear
  [lvroot_rimage_1]   vgsystem iwi-aor---   97.52g                                                     /dev/sdb2(1)                                                    linear
  [lvroot_rmeta_0]    vgsystem ewi-aor---    4.00m                                                     /dev/sdc2(0)                                                    linear
  [lvroot_rmeta_1]    vgsystem ewi-aor---    4.00m                                                     /dev/sdb2(0)                                                    linear
</code>
  
== Resizing Logical Volumes with a Mounted Filesystem ==

This can be done e.g. by:

<code>
lvresize --size 20G --resizefs /dev/vgfast/lvfast
</code>
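
Before growing a volume, it can be useful to check the free space in the volume group first, e.g.:

<code>
vgs vgfast
</code>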
  
==== Filesystem ====
  
Btrfs is the way to go everywhere you need big data and flexibility. It has some disadvantages while it is still in development, and sometimes it is a bit oversized for a home office, but no other filesystem is that good in general usage. Only use other filesystems if there are reasons to - e.g. when exchanging files with a Windows installation on the same PC.

And there is one such reason: Docker - at the time of writing (20.04.2024) you should NOT USE BTRFS with Docker. More on that is explained later.
  
==== Mountoptions ====

Autodefrag should not be necessary on SSDs.
  
For **databases** or files that need speed and __**are well backed up otherwise**__: nodatacow,nodatasum,noatime,nodiratime
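
As an illustration, such an entry in ''/etc/fstab'' could look like this (the volume name and mount point are hypothetical):

<code>
# btrfs database volume: no copy-on-write, no checksums, no atime updates
/dev/vgssd/lvdb  /srv/db  btrfs  nodatacow,nodatasum,noatime,nodiratime  0  0
</code>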
  
=== Sources: ===

==== UMask- Approach ====

By default the umask is 0002 or 0022. Those values are masked out of 0777, which would otherwise mean full access for everyone. You can check out the docs on the net for how they work in detail; I won't explain it here, because there is a big problem with umask: the value can only be changed per process, per user, or system-wide. This means you cannot set it per directory - which is what the user would actually want.
  
So you should maybe think about setting a better umask than 022 - which makes all users in your group have read access to your files - to, let's say, 077. Or, even better: don't use the group "users", but create a dedicated group with the same name as the user for each user. Then you can use umask 007.

On my system the umask can be defined in the file ''/etc/login.defs''.
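
A quick shell sketch of the effect (filenames, user and timestamps are just examples):

<code>
$ umask 077
$ touch private.txt
$ ls -l private.txt
-rw------- 1 daniel users 0 Apr 20 13:02 private.txt

$ umask 022
$ touch shared.txt
$ ls -l shared.txt
-rw-r--r-- 1 daniel users 0 Apr 20 13:02 shared.txt
</code>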

But to get to directory permissions: forget about umask.
  
==== FACLs ====

F… what??? Yes: FACLs are the tool to do just that. With them you can expand the rights per directory and on every file in detail. It is also possible to have multiple group-access definitions, which is not possible otherwise.
  
So let's do some FACL work.

=== FACL: get infos about settings ===
  
For example, on a freshly created directory (a hypothetical ''testdir'' owned by user ''daniel''):

<code>
$ getfacl testdir
# file: testdir
# owner: daniel
# group: users
user::rwx
group::r-x
other::---
</code>

As you can see, that directory has been created quite insecurely. There is only the above ''other::---'' permission preventing everyone from reading the information in it. Creating a new file in it would make that file just as insecure.

=== FACL: set default permissions ===

But now let's set the mode to better fit our needs:
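
For example, with the hypothetical ''testdir'' from above, more restrictive default entries could be set like this:

<code>
# d: = default entries, inherited by newly created files and directories
setfacl -m d:u::rwx,d:g::---,d:o::--- testdir
</code>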

Note that we only changed the DEFAULT permissions (''d:'') to be more secure.

=== FACL: check new settings ===

Now let's again create a file, as we did before, just in that now-safe directory. We can also use getfacl on that file to check that it works:
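
A sketch of how that could look, continuing the hypothetical ''testdir'':

<code>
$ touch testdir/newfile
$ getfacl testdir/newfile
# file: testdir/newfile
# owner: daniel
# group: users
user::rw-
group::---
other::---
</code>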

It's up to you to decide whether this is more usable or not.

=== FACL: full ACL Syntax ===

The full syntax is:

<code>
      [d[efault]:] [u[ser]:]uid [:perms]
             Permissions of a named user. Permissions of the file owner if uid is empty.

      [d[efault]:] g[roup]:gid [:perms]
             Permissions of a named group. Permissions of the owning group if gid is empty.

      [d[efault]:] m[ask][:] [:perms]
             Effective rights mask

      [d[efault]:] o[ther][:] [:perms]
             Permissions of others.
</code>

That means you can only set the defaults per user or per group, and only for files or directories at once.

=== FACL: use in batch and recursively ===

FACLs also have good ways to be applied to whole directory trees - check it out:

''setfacl'' has a //recursive// option (''-R'') just like ''chmod'':

-R, --recursive: apply operations to all files and directories recursively. This option cannot be mixed with ''--restore''.

=== FACL: handle the execute bit on files and directories ===

…it also allows the use of the capital ''X'' **permission**, which means: execute only if the file is a directory or already has execute permission for some user.

So doing the following should work:

Set all files AND directories recursively to be read/writeable by user colleague, and give X only to directories and to those files that already have x set:

''setfacl -R -m u:colleague:rwX .''

For setting the default permissions to be like that:

''setfacl -R -m **d**:u:colleague:rwX .''
  
==== Last words ====