====== Linux: Server Basics ======
+ | |||
+ | Welcome to my **Advanced Server Setup- Documentation**. | ||
+ | |||
+ | In these chapters, i will explain how to setup and configure a full featured Active Domain- Network with Kerberos Single-Sign-On and Domain Integration of Linux Clients on a rootless containerized Docker- Installation including Nextcloud as personal Cloud to store all your Data and PIM locally and safe. That way you get a fully managed, Cloud enabled Homeoffice Network at low costs and much space for your personal data on your own pc. | ||
+ | |||
+ | |||
+ | ===== Current State ===== | ||
+ | |||
+ | This Document is currently under developement and chapters are not final right now. This will change in the Future. | ||
+ | |||
+ | ===== Usecase ===== | ||
+ | |||
+ | This is not a slim Setup - so if you only have old hardware or you are trying to figure out on yoru small office-pc, this may not work as well as you need it. | ||
+ | |||
+ | You should have at least | ||
+ | |||
+ | * Large Harddrives: If you have maybe 1.5 TB of Data all togehter, you will need: | ||
+ | * 3 TB of space on your working directory / raid5 = 3 Harddrives, each 1 TB at least | ||
+ | * 6 TB of space on your backup / raid5 = 3 Harddrives, each 2 TB at least | ||
+ | * about maybe 100GB for the system / raid1 = 2 Harddrives | ||
+ | * about maybe 100GB for the databases / raid1 = 2 Harddrives | ||
+ | * maybe two extra drives for external backups, each 6 TB (you can also store that in the internet, but you will need a large space there too) | ||
+ | * A Server, that has relyable, quite fast internet in Download and Upload rates - while Upload may be more Importen | ||
+ | * The Server should be reachable all the time | ||
+ | |||
+ | |||
+ | ===== How to Start ===== | ||
+ | |||
+ | First, read this Page, get the Hardware and install the system. You should understand the Hardwaresetup and the installation of Linux and Raid- Systems first (as decribend beneath). | ||
+ | |||
+ | Then, go on whith [[.: | ||
+ | |||
+ | Next, setup docker as decribed in the Chapter. When you have portainer running, you can go like this: | ||
+ | |||
+ | - Nextcloud-AIO | ||
+ | - FreeIPA | ||
+ | - Authentik | ||
+ | |||
+ | Then glue them together with SSO, SPNEGO and Nextcloud-SSO. Then you should have understood everything, you can now play around on your own. | ||
===== Subpages =====
<catlist content:
+ | |||
+ | ===== Basic System ===== | ||
+ | |||
+ | As Hardware, you should have at least: | ||
+ | |||
+ | * a single standard Desktop- PC with 4 or more Cores | ||
+ | * equipped with at least 16 GB of RAM and | ||
+ | * for failure of Discs a swappable mounting Rack to contain at least 5 Discs (should not have Raid as Hardware, as Software Raid in Linux is much more efficient!) | ||
+ | * Additional at least one external Disk, you may use to copy your Backups to and store them on a different physikal location | ||
===== Mountpoints =====
Always use LVM, as it has many benefits. On openSUSE, btrfs is the best filesystem if you disable quotas on data partitions.

==== Example-Setup ====

My small home office server described here will have 5 disks:

  * 2x SSD with 2 TB each
  * 3x HDD with 4 TB each

My setup will look like this:

Both SSDs will have the same layout:

  * 1x 1 GB RAID1 FAT32 EFIBOOT
  * 1x 100%FREE LVM2 PV in volume group vgssd
    * 100 GB RAID1 lvroot btrfs for the system
    * 50 GB RAID1 lvmariadb xfs for the Docker service mariadb
    * space left blank for other high-performance services or growth

The HDDs will have:

  * 1x 100%FREE LVM2 PV in volume group vgdata
    * 1x 100 GB RAID5 xfs for home and Docker services
    * 1x 4.4 TB RAID5 lvbackup btrfs for backups
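As a sketch, the SSD layout could be created like this with ''sgdisk'' (assuming the SSD shows up as ''/dev/sda''; repeat for the second one - device name and tool choice are just examples):

<code>
sgdisk -n 1:0:+1G -t 1:EF00 /dev/sda   # 1 GB EFI system partition for the raided EFIBOOT
sgdisk -n 2:0:0   -t 2:8E00 /dev/sda   # rest of the disk as LVM2 PV for vgssd
</code>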
+ | |||
+ | ==== Raided EFI-BOOT ==== | ||
+ | |||
+ | Nowadays, UEFI is always the best choice to boot. UEFI- Boot is quite straight forward: You first take some device, make it gpt- partitioned, | ||
+ | |||
+ | But: Unfortunatelly, | ||
+ | |||
+ | Fortunatelly the designers of OSS software- raid were smarter: They found a way to work around that: They made a special Version of MD Metadata called V1.0 which will store its Metadata at the end of the partition - so it will not interfere with FAT32. For FAT32 it can work as usual and for MD-Tools it will be able to detect the devices as Raid1. | ||
+ | |||
+ | But still - LVM will not work in this case. MD Partitions and Raid1 need to be outside of the LVM-Partition. | ||
+ | |||
+ | So I would suggest to use two disks both partioned with GPT and same sized efi-partitions (as said, at least 500 Megabytes in Size to store Bios or UCODE updates for Firmware Updater) and before creating the FAT32 filesystem do software raid on it. E.g.: | ||
<code>
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdX1 /dev/sdY1
</code>
The important part is ''--metadata=1.0'' - this format has been especially designed to fit the needs of RAID1 on FAT/EFI systems.
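Once the array exists, the FAT32 filesystem goes on top of it; a minimal sketch, assuming the array was created as ''/dev/md0'' as above:

<code>
mkfs.vfat -F32 /dev/md0    # FAT32 on top of the raid1 device
mount /dev/md0 /boot/efi   # mount it where the ESP is expected
</code>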
+ | |||
+ | You than install your Linux Bootmanager / EFIBOOT to that md- Device. If its not found in the beginning of the installation, | ||
+ | |||
+ | === Recover faulty Disc === | ||
+ | |||
+ | If some Raid- Disc becomes faulty, it will show up like this (its for raid5, but raid1 will look alkie): | ||
+ | |||
+ | < | ||
+ | obel1x:~ # mdadm -D /dev/md126 | ||
+ | /dev/md126: | ||
+ | Version : 1.0 | ||
+ | Creation Time : Fri Apr 10 11:44:19 2020 | ||
+ | Raid Level : raid5 | ||
+ | Array Size : 1460286976 (1392.64 GiB 1495.33 GB) | ||
+ | Used Dev Size : 730143488 (696.32 GiB 747.67 GB) | ||
+ | Raid Devices : 3 | ||
+ | Total Devices : 2 | ||
+ | Persistence : Superblock is persistent | ||
+ | |||
+ | Intent Bitmap : Internal | ||
+ | |||
+ | Update Time : Sat Oct 26 14:26:37 2024 | ||
+ | State : clean, degraded | ||
+ | | ||
+ | Working Devices : 2 | ||
+ | | ||
+ | Spare Devices : 0 | ||
+ | |||
+ | | ||
+ | Chunk Size : 128K | ||
+ | |||
+ | Consistency Policy : bitmap | ||
+ | |||
+ | Name : any: | ||
+ | UUID : 6542dc7c: | ||
+ | | ||
+ | |||
+ | | ||
+ | 0 | ||
+ | 1 | ||
+ | - | ||
+ | |||
+ | </ | ||
+ | |||
+ | Maybe instead of removed you can see some entry like faulty instead of removed - this is, when the array had just failed. | ||
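In that case, take the broken partition out of the array before replacing the disk; a sketch, assuming the failed member was ''/dev/sdc1'':

<code>
mdadm /dev/md126 --fail /dev/sdc1     # mark it as failed, if the kernel has not done so already
mdadm /dev/md126 --remove /dev/sdc1   # then remove it from the array
</code>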
+ | |||
+ | To add a new device, you need an empty partiotion with at least the expected size (here 696 GB would be enough): | ||
+ | |||
+ | < | ||
+ | obel1x:~ # fdisk -l /dev/sdc | ||
+ | Disk /dev/sdc: 698.64 GiB, 750156374016 bytes, 1465149168 sectors | ||
+ | Disk model: WDC WD7500AAVS-0 | ||
+ | Units: sectors of 1 * 512 = 512 bytes | ||
+ | Sector size (logical/ | ||
+ | I/O size (minimum/ | ||
+ | Disklabel type: gpt | ||
+ | Disk identifier: 699DC7F4-D344-4447-8C5B-1F98E017A12B | ||
+ | |||
+ | Device | ||
+ | / | ||
+ | |||
+ | </ | ||
+ | |||
+ | That Partition should have the Type Linx Raid. If you don't have that, create it with partition- tool of kde or what you want. | ||
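On the console, a sketch with ''sgdisk'' would do the same, assuming the whole disk should become one RAID member:

<code>
sgdisk -n 1:0:0 -t 1:FD00 /dev/sdc   # one partition over the full disk, type Linux RAID
</code>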
+ | |||
+ | Now you can simply add the device to the raid and it will begin to work: | ||
+ | |||
+ | < | ||
+ | obel1x:~ # mdadm /dev/md126 --add /dev/sdc1 | ||
+ | mdadm: re-added /dev/sdc1 | ||
+ | |||
+ | obel1x:~ # mdadm -D /dev/md126 | ||
+ | /dev/md126: | ||
+ | Version : 1.0 | ||
+ | Creation Time : Fri Apr 10 11:44:19 2020 | ||
+ | Raid Level : raid5 | ||
+ | Array Size : 1460286976 (1392.64 GiB 1495.33 GB) | ||
+ | Used Dev Size : 730143488 (696.32 GiB 747.67 GB) | ||
+ | Raid Devices : 3 | ||
+ | Total Devices : 3 | ||
+ | Persistence : Superblock is persistent | ||
+ | |||
+ | Intent Bitmap : Internal | ||
+ | |||
+ | Update Time : Sat Oct 26 14:34:57 2024 | ||
+ | State : clean, degraded, recovering | ||
+ | | ||
+ | Working Devices : 3 | ||
+ | | ||
+ | Spare Devices : 1 | ||
+ | |||
+ | | ||
+ | Chunk Size : 128K | ||
+ | |||
+ | Consistency Policy : bitmap | ||
+ | |||
+ | | ||
+ | |||
+ | Name : any: | ||
+ | UUID : 6542dc7c: | ||
+ | | ||
+ | |||
+ | | ||
+ | 0 | ||
+ | 1 | ||
+ | 3 | ||
+ | |||
+ | </ | ||
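You can follow the rebuild while it runs, e.g. with:

<code>
watch cat /proc/mdstat   # shows progress and estimated finish time per array
</code>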
==== LVM ====
LVM is a powerful partition management layer and should always be used when the hardware is not low-end. If you can use the **KDE Partitioning Tool** (which means having a Plasma/KDE desktop), it is very intuitive and gives a lot of flexibility when handling partitions, like adding more disk space or moving partitions; the console tools offer good functionality as well. openSUSE optionally offers an LVM-style system setup during installation (not by default). If you can: use it.

=== Mirrored LVM Volumes ===

Nowadays, MD RAID1 or RAID5 for the system without LVM is outdated. Those features are integrated in LVM - so use it!

For our setup we want to have the Linux base system on RAID1, because RAID1 offers the flexibility that a single physical device will still work on its own without the rest of the array.

So first, create a volume group on both disks and a mirrored logical volume for the root filesystem:
<code>
vgcreate vgsystem /dev/sdX1 /dev/sdY1
lvcreate -m1 --type raid1 -L 100GB -n lvroot vgsystem
</code>
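The RAID5 data volumes on the HDDs work the same way; a sketch following the example setup above, with placeholder device names:

<code>
vgcreate vgdata /dev/sdX1 /dev/sdY1 /dev/sdZ1
lvcreate --type raid5 -i 2 -L 100GB -n lvdata vgdata   # 3 PVs = 2 data stripes + 1 parity
</code>

Afterwards you can check the resulting layout: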
<code>
# lvs -P -a -o +devices,segtype
  LV                   VG       Attr       LSize Devices                                                          Type
  lvbackup             vgdata   rwi-aor--- …     lvbackup_rimage_0(0),lvbackup_rimage_1(0),lvbackup_rimage_2(0)  raid5
  [lvbackup_rimage_0]  vgdata   iwi-aor--- …     …                                                               linear
  [lvbackup_rimage_1]  vgdata   iwi-aor--- …     …                                                               linear
  [lvbackup_rimage_2]  vgdata   iwi-aor--- …     …                                                               linear
  [lvbackup_rmeta_0]   vgdata   ewi-aor--- …     …                                                               linear
  [lvbackup_rmeta_1]   vgdata   ewi-aor--- …     …                                                               linear
  [lvbackup_rmeta_2]   vgdata   ewi-aor--- …     …                                                               linear
  lvdata               vgdata   rwi-aor--- …     lvdata_rimage_0(0),lvdata_rimage_1(0),lvdata_rimage_2(0)        raid5
  [lvdata_rimage_0]    vgdata   iwi-aor--- …     …                                                               linear
  [lvdata_rimage_1]    vgdata   iwi-aor--- …     …                                                               linear
  [lvdata_rimage_2]    vgdata   iwi-aor--- …     …                                                               linear
  [lvdata_rmeta_0]     vgdata   ewi-aor--- …     …                                                               linear
  [lvdata_rmeta_1]     vgdata   ewi-aor--- …     …                                                               linear
  [lvdata_rmeta_2]     vgdata   ewi-aor--- …     …                                                               linear
  lvdocker             vgdata   rwi-aor--- …     lvdocker_rimage_0(0),lvdocker_rimage_1(0),lvdocker_rimage_2(0)  raid5
  [lvdocker_rimage_0]  vgdata   iwi-aor--- …     …                                                               linear
  [lvdocker_rimage_1]  vgdata   iwi-aor--- …     …                                                               linear
  [lvdocker_rimage_2]  vgdata   iwi-aor--- …     …                                                               linear
  [lvdocker_rmeta_0]   vgdata   ewi-aor--- …     …                                                               linear
  [lvdocker_rmeta_1]   vgdata   ewi-aor--- …     …                                                               linear
  [lvdocker_rmeta_2]   vgdata   ewi-aor--- …     …                                                               linear
  lvhome               vgdata   rwi-aor--- …     lvhome_rimage_0(0),lvhome_rimage_1(0)                           raid1
  [lvhome_rimage_0]    vgdata   iwi-aor--- …     …                                                               linear
  [lvhome_rimage_1]    vgdata   iwi-aor--- …     …                                                               linear
  [lvhome_rmeta_0]     vgdata   ewi-aor--- …     …                                                               linear
  [lvhome_rmeta_1]     vgdata   ewi-aor--- …     …                                                               linear
  lvroot               vgsystem rwi-aor--- …     lvroot_rimage_0(0),lvroot_rimage_1(0)                           raid1
  [lvroot_rimage_0]    vgsystem iwi-aor--- …     …                                                               linear
  [lvroot_rimage_1]    vgsystem iwi-aor--- …     …                                                               linear
  [lvroot_rmeta_0]     vgsystem ewi-aor--- …     …                                                               linear
  [lvroot_rmeta_1]     vgsystem ewi-aor--- …     …                                                               linear
</code>
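If a disk behind one of these raid LVs dies, recovery is similar to the mdadm case; a sketch, with ''/dev/sdN1'' as a placeholder for the replacement partition:

<code>
pvcreate /dev/sdN1                 # initialize the replacement partition as a PV
vgextend vgdata /dev/sdN1          # add it to the volume group
lvconvert --repair vgdata/lvdata   # rebuild the failed raid leg onto the new PV
</code>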
==== Filesystem ====
Btrfs is the way to go almost everywhere.

And there is one reason for the "almost": Docker - at the current time of writing this (20.04.2024) you should NOT USE BTRFS with Docker. More on that is explained later.
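To make sure Docker does not pick the btrfs storage driver, you can pin it to overlay2 in ''/etc/docker/daemon.json''; a sketch:

<code>
{
  "storage-driver": "overlay2"
}
</code>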
==== Mountoptions ====
Autodefrag should not be necessary on SSDs, though.
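As an illustration, a hedged ''/etc/fstab'' line for an HDD-backed btrfs data volume (device and mountpoint are placeholders):

<code>
/dev/vgdata/lvdata  /data  btrfs  defaults,autodefrag  0 0
</code>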
For **Databases** or files that need speed and __**are well backed up otherwise**__, you can disable copy-on-write for their directories (e.g. with ''chattr +C'').
=== Sources: ===
By default the umask is 0002 or 0022. Those values are masked out of 0777, which would otherwise mean full access for everyone. You can check out the docs on the net for how they work in detail. I won't explain them here, because there is a big problem with umask: the value can only be changed per process, per user, or system-wide. This means you cannot set it per directory - which is what a user would actually want.
So you should maybe think of setting a better umask than 022 - which gives all users of your group read access to your files - to, let's say, 077. Or - even better - don't use the group "users" as the common default group for everybody.

On my system the umask can be defined in the file ''/…''.
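A quick demonstration of the effect; a sketch with the strict value from above (user, group and timestamp are placeholders):

<code>
$ umask 077
$ touch private.txt
$ ls -l private.txt
-rw------- 1 user users 0 Feb 11 07:43 private.txt   # 0666 masked with 077 leaves 0600
</code>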
+ | |||
+ | But to go on directory- permissions: | ||
==== FACLs ====
That means you can only set the defaults per user or per group, and only for all files or all directories at once.
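For example, to give one (hypothetical) user full default access to everything created below a directory:

<code>
setfacl -m d:u:someuser:rwx /path/to/directory   # default ACL, inherited by newly created entries
getfacl /path/to/directory                       # shows the normal and the default ACLs
</code>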
=== FACL: use in batch and recursively ===
FACLs also have good ways to be used for whole directories, as ''setfacl'' can work recursively:

-R, --recursive Apply operations to all files and directories recursively. This option cannot be mixed with `--restore'.
=== FACL: handle execute-bit with files and directories ===
…it also allows for the use of the capital ''X'' permission, which sets the execute bit only on directories (and on files that already have it set for someone),

so doing the following should work:

''setfacl -R -m g:users:rwX /path/to/directory''
==== Last words ====