content:serverbasics — revision of 2024/04/20 by Daniel
Always use LVM, as it has many benefits. On OpenSuSE, btrfs is the best filesystem if you disable quotas on data partitions.

==== Example-Setup ====

My small home-office server described here will have 5 disks:

  * 2x SSD with 2 TB each
  * 3x HDD with 4 TB each

My setup will look like this:

The SSDs will both have the same layout:

  * 1x 1 GB Raid1 FAT32 EFIBOOT
  * 1x 100%FREE LVM2 PV in volume group vgssd
    * 100 GB Raid1 lvroot btrfs
    * 50 GB Raid1 lvmariadb xfs for the docker service mariadb
    * space left blank for other high-performance services or growth

The HDDs will have:

  * 1x 100%FREE LVM2 PV in volume group vgdata
    * 1x 100 GB Raid5 xfs, home and docker-service
    * 1x 4.4 TB Raid5 lvbackup btrfs
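As a quick sanity check on the layout above, the usable capacities can be estimated: Raid1 keeps the size of a single member, Raid5 over n disks keeps n−1 members. A minimal sketch (the GB figures are the nominal disk sizes from the list above, not binary TiB):

```shell
# Rough usable-capacity estimate for the disk layout above.
# Raid1: capacity of one member; Raid5 over n disks: (n - 1) members.
ssd_gb=2000   # each of the 2 SSDs
hdd_gb=4000   # each of the 3 HDDs

raid1_ssd=$ssd_gb
raid5_hdd=$(( (3 - 1) * hdd_gb ))

echo "SSD Raid1 usable: ${raid1_ssd} GB"   # prints: SSD Raid1 usable: 2000 GB
echo "HDD Raid5 usable: ${raid5_hdd} GB"   # prints: HDD Raid5 usable: 8000 GB
```

This is why the 100 GB xfs volume plus the 4.4 TB backup volume fit comfortably into the HDD group, with space to spare.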
==== Raided EFI-BOOT ====
Nowadays, UEFI is always the best choice to boot. UEFI boot is quite straightforward: you first take some device, make it GPT-partitioned, and create an EFI system partition with a FAT32 filesystem on it.

But unfortunately, FAT32 was never designed with RAID in mind: the default MD metadata sits at the start of the partition, where it would break the FAT32 filesystem for the firmware.

Fortunately, the designers of the OSS software RAID were smarter: they found a way to work around that. They created a special version of the MD metadata, called 1.0, which stores its metadata at the end of the partition, so it does not interfere with FAT32. FAT32 can work as usual, and the MD tools are still able to detect the devices as a Raid1.

But still: LVM will not work in this case. The MD partitions and the Raid1 need to live outside of LVM.

So I would suggest using two disks, both partitioned with GPT and with same-sized EFI partitions, and creating the software RAID on them before creating the FAT32 filesystem. E.g.:
<code>
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 /dev/sdY1
mkfs.fat -F32 /dev/md0
</code>
The important part is metadata=1.0 - this format was especially designed to fit the needs of a Raid1 on FAT/EFI systems.

You then install your Linux bootmanager / EFIBOOT to that md device.
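To make sure the array is assembled early at boot, it usually also gets an entry in ''mdadm.conf'' (typically generated with ''mdadm --detail --scan''). A hypothetical fragment - the UUID placeholder must be replaced with your array's real UUID, and the file path may differ per distribution:

```
# /etc/mdadm.conf (hypothetical example)
ARRAY /dev/md0 metadata=1.0 UUID=<your-array-uuid>
```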
==== LVM ====
LVM is a powerful partition-management layer and should always be used when there is some non-low-end hardware present. If you can use the **KDE Partitioning- Tool** (which means having Plasma/KDE Desktop support), the handling is very intuitive and opens a lot of flexibility when working with partitions, like adding more disk space or moving partitions; but the console tools also offer good functionality. The OpenSuSE installer can optionally create an LVM-styled system setup (not by default). If you can: use it.
=== Mirrored LVM Volumes ===

Nowadays, using plain MD raid1 or raid5 for a system without LVM is outdated. These features are integrated into LVM - so use it!

For our setup we want to have the Linux base system on Raid1, because Raid1 offers the flexibility that a single physical device still works on its own, without the second one.
So first, create a volume group and a mirrored logical volume:
<code>
vgcreate vgsystem /dev/sdX1 /dev/sdY1
lvcreate -m1 --type raid1 -L 100G -n lvroot vgsystem
</code>
<code>
# lvs -P -a -o +devices,segtype
  LV                   VG      ...
  lvbackup             vgdata  ...
  [lvbackup_rimage_0]  vgdata  ...
  [lvbackup_rimage_1]  vgdata  ...
  [lvbackup_rimage_2]  vgdata  ...
  [lvbackup_rmeta_0]   vgdata  ...
  [lvbackup_rmeta_1]   vgdata  ...
  [lvbackup_rmeta_2]   vgdata  ...
  lvdata               vgdata  ...
  [lvdata_rimage_0]    vgdata  ...
  [lvdata_rimage_1]    vgdata  ...
  [lvdata_rimage_2]    vgdata  ...
  [lvdata_rmeta_0]     vgdata  ...
  [lvdata_rmeta_1]     vgdata  ...
  [lvdata_rmeta_2]     vgdata  ...
  lvdocker             vgdata  ...
  [lvdocker_rimage_0]  vgdata  ...
  [lvdocker_rimage_1]  vgdata  ...
  [lvdocker_rimage_2]  vgdata  ...
  [lvdocker_rmeta_0]   vgdata  ...
  [lvdocker_rmeta_1]   vgdata  ...
  [lvdocker_rmeta_2]   vgdata  ...
  lvhome               ...
  [lvhome_rimage_0]    ...
  [lvhome_rimage_1]    ...
  [lvhome_rmeta_0]     ...
  [lvhome_rmeta_1]     ...
  lvroot               ...
  [lvroot_rimage_0]    ...
  [lvroot_rimage_1]    ...
  [lvroot_rmeta_0]     ...
  [lvroot_rmeta_1]     ...
</code>
==== Filesystem ====
Btrfs is the way to go almost everywhere. There are some disadvantages while it is still in development.

And there is one reason for the "almost": Docker. At the current time of writing (20.04.2024) you should NOT use btrfs with Docker. More on that is explained later.
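Note that the setup above therefore keeps the Docker data on an xfs volume. If your distribution placed ''/var/lib/docker'' somewhere else, the Docker data root can be moved via ''/etc/docker/daemon.json'' - a hypothetical fragment (the mount point ''/srv/docker'' is made up for illustration and would be the mount point of the xfs volume):

```json
{
  "data-root": "/srv/docker"
}
```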
==== Mountoptions ====
Autodefrag should not be necessary on SSDs, though.
For **databases** or files that need speed and __**are well backed up otherwise**__, you can disable copy-on-write (nodatacow).
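As an illustration, a hypothetical ''/etc/fstab'' line for such a database volume - the device and mount point are made up. Keep in mind that ''nodatacow'' also disables btrfs checksums and compression for that mount, which is why it should only be used for data that is backed up elsewhere:

```
# hypothetical /etc/fstab entry - adjust device and mount point
/dev/vgssd/lvdb  /srv/db  btrfs  noatime,nodatacow  0  0
```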
=== Sources: ===
By default the umask is 0002 or 0022. Those bits are masked out of the maximum permissions (0777 for directories, 0666 for files) - 0777 would mean full access for everyone. You can check the docs on the net for how this works in detail; I won't explain it here, because there is a big problem with umask: the value can only be changed per process, per user, or system-wide. This means you cannot set it per directory - which is what a user would usually want.
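A quick way to see the masking in action - a self-contained sketch in a temporary directory (the file names are made up for the demo):

```shell
# Show how the umask strips bits from newly created files/directories.
# Files start from 0666, directories from 0777; the umask bits are removed.
tmp=$(mktemp -d)
cd "$tmp"

umask 022            # common default: group/others lose write
touch f022; mkdir d022

umask 077            # private: group/others get nothing
touch f077; mkdir d077

stat -c '%a %n' f022 d022 f077 d077
# prints:
# 644 f022
# 755 d022
# 600 f077
# 700 d077
```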
So you should maybe think of setting a better umask than 022 - which gives all users of your group read access to your files - to, let's say, 077. Or, even better: don't put every account into one shared group like "users".

On my system the umask can be defined in the file ''/…''.

But back to per-directory permissions:
==== FACLs ====