From a37e10fa15a65f9d27ef3f924bf9f1c34d031cb6 Mon Sep 17 00:00:00 2001 From: Packit Date: Sep 04 2020 09:57:38 +0000 Subject: Apply patch bz1757115-gfs2_5_General_updates_and_layout_improvements.patch patch_name: bz1757115-gfs2_5_General_updates_and_layout_improvements.patch present_in_specfile: true --- diff --git a/gfs2/man/gfs2.5 b/gfs2/man/gfs2.5 index 56d1a00..436abc0 100644 --- a/gfs2/man/gfs2.5 +++ b/gfs2/man/gfs2.5 @@ -21,6 +21,20 @@ mounts which are equivalent to mounting a read-only block device and as such can neither recover a journal or write to the filesystem, so do not require a journal assigned to them. +The GFS2 documentation has been split into a number of sections: + +\fBmkfs.gfs2\fP(8) Create a GFS2 filesystem +.br +\fBfsck.gfs2\fP(8) The GFS2 filesystem checker +.br +\fBgfs2_grow\fP(8) Growing a GFS2 filesystem +.br +\fBgfs2_jadd\fP(8) Adding a journal to a GFS2 filesystem +.br +\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks +.br +\fBgfs2_edit\fP(8) A GFS2 debug tool (use with caution) + .SH MOUNT OPTIONS .TP @@ -200,220 +214,55 @@ versa. Finally, when first enabling this option on a filesystem that had been previously mounted without it, you must make sure that there are no outstanding cookies being cached by other software, such as NFS. -.SH BUGS - -GFS2 doesn't support \fBerrors=\fP\fIremount-ro\fR or \fBdata=\fP\fIjournal\fR. -It is not possible to switch support for user and group quotas on and -off independently of each other. Some of the error messages are rather -cryptic, if you encounter one of these messages check firstly that gfs_controld -is running and secondly that you have enough journals on the filesystem -for the number of nodes in use. - -.SH SEE ALSO - -\fBmount\fP(8) for general mount options, -\fBchmod\fP(1) and \fBchmod\fP(2) for access permission flags, -\fBacl\fP(5) for access control lists, -\fBlvm\fP(8) for volume management, -\fBccs\fP(7) for cluster management, -\fBumount\fP(8), -\fBinitrd\fP(4). - -The GFS2 documentation has been split into a number of sections: - -\fBgfs2_edit\fP(8) A GFS2 debug tool (use with caution) -\fBfsck.gfs2\fP(8) The GFS2 file system checker -\fBgfs2_grow\fP(8) Growing a GFS2 file system -\fBgfs2_jadd\fP(8) Adding a journal to a GFS2 file system -\fBmkfs.gfs2\fP(8) Make a GFS2 file system -\fBgfs2_quota\fP(8) Manipulate GFS2 disk quotas -\fBgfs2_tool\fP(8) Tool to manipulate a GFS2 file system (obsolete) -\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks - .SH SETUP -GFS2 clustering is driven by the dlm, which depends on dlm_controld to -provide clustering from userspace. dlm_controld clustering is built on -corosync cluster/group membership and messaging. - -Follow these steps to manually configure and run gfs2/dlm/corosync. - -.B 1. create /etc/corosync/corosync.conf and copy to all nodes - -In this sample, replace cluster_name and IP addresses, and add nodes as -needed. If using only two nodes, uncomment the two_node line. -See corosync.conf(5) for more information. - -.nf -totem { - version: 2 - secauth: off - cluster_name: abc -} - -nodelist { - node { - ring0_addr: 10.10.10.1 - nodeid: 1 - } - node { - ring0_addr: 10.10.10.2 - nodeid: 2 - } - node { - ring0_addr: 10.10.10.3 - nodeid: 3 - } -} - -quorum { - provider: corosync_votequorum -# two_node: 1 -} - -logging { - to_syslog: yes -} -.fi - -.PP - -.B 2. start corosync on all nodes - -.nf -systemctl start corosync -.fi - -Run corosync-quorumtool to verify that all nodes are listed. - -.PP - -.B 3. 
create /etc/dlm/dlm.conf and copy to all nodes
-
-.B *
-To use no fencing, use this line:
+GFS2 clustering is driven by the dlm, which depends on dlm_controld to provide
+clustering from userspace. dlm_controld clustering is built on corosync
+cluster/group membership and messaging. GFS2 also requires clustered LVM,
+which is provided by lvmlockd (or, previously, by clvmd). Refer to the
+documentation for each of these components and ensure that they are configured
+before setting up a GFS2 filesystem. Also refer to your distribution's
+documentation for any specific support requirements.
-.nf
-enable_fencing=0
-.fi
+Ensure that gfs2-utils is installed on all nodes which mount the filesystem,
+as it provides the scripts required to respond correctly to withdraw events.
-.B *
-To use no fencing, but exercise fencing functions, use this line:
-
-.nf
-fence_all /bin/true
-.fi
-
-The "true" binary will be executed for all nodes and will succeed (exit 0)
-immediately.
-
-.B *
-To use manual fencing, use this line:
-
-.nf
-fence_all /bin/false
-.fi
-
-The "false" binary will be executed for all nodes and will fail (exit 1)
-immediately.
-
-When a node fails, manually run: dlm_tool fence_ack
-
-.B *
-To use stonith/pacemaker for fencing, use this line:
-
-.nf
-fence_all /usr/sbin/dlm_stonith
-.fi
-
-The "dlm_stonith" binary will be executed for all nodes. If
-stonith/pacemaker systems are not available, dlm_stonith will fail and
-this config becomes the equivalent of the previous /bin/false config.
-
-.B *
-To use an APC power switch, use these lines:
-
-.nf
-device apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw
-connect apc node=1 port=1
-connect apc node=2 port=2
-connect apc node=3 port=3
-.fi
-
-Other network switch based agents are configured similarly.
-
-.B *
-To use sanlock/watchdog fencing, use these lines:
-
-.nf
-device wd /usr/sbin/fence_sanlock path=/dev/fence/leases
-connect wd node=1 host_id=1
-connect wd node=2 host_id=2
-unfence wd
-.fi
-
-See fence_sanlock(8) for more information.
-
-.B *
-For other fencing configurations see dlm.conf(5) man page.
-
-.PP
-
-.B 4. start dlm_controld on all nodes
-
-.nf
-systemctl start dlm
-.fi
-
-Run "dlm_tool status" to verify that all nodes are listed.
-
-.PP
-
-.B 5. if using clvm, start clvmd on all nodes
-
-systemctl clvmd start
-
-.PP
-
-.B 6. make new gfs2 file systems
+.B 1. Create the gfs2 filesystem

 mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage

-The cluster_name must match the name used in step 1 above.
-The fs_name must be a unique name in the cluster.
-The -j option is the number of journals to create, there must
-be one for each node that will mount the fs.
+The cluster_name must match the name configured in corosync (and thus dlm).
+The fs_name must be a unique name for the filesystem in the cluster.
+The -j option is the number of journals to create; there must
+be one for each node that will mount the filesystem.

 .PP
+.B 2. Mount the gfs2 filesystem

-.B 7. mount gfs2 file systems
+If you are using a clustered resource manager, see its documentation for
+enabling a gfs2 filesystem resource. Otherwise, run:

 mount /path/to/storage /mountpoint

 Run "dlm_tool ls" to verify the nodes that have each fs mounted.

 .PP
+.B 3. Shut down

-.B 8. shut down
+If you are using a clustered resource manager, see its documentation for
+disabling a gfs2 filesystem resource.
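+With Pacemaker, for example, this might be done with a command of the
+following form, where the resource name \fIclusterfs\fP is illustrative and
+depends on how the resource was defined:
+
+.nf
+pcs resource disable clusterfs
+.fi
+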
Otherwise, run: -.nf umount -a -t gfs2 -systemctl clvmd stop -systemctl dlm stop -systemctl corosync stop -.fi .PP +.SH SEE ALSO -.B More setup information: -.br -.BR dlm_controld (8), -.br -.BR dlm_tool (8), -.br -.BR dlm.conf (5), -.br -.BR corosync (8), -.br -.BR corosync.conf (5) -.br +\fBmount\fP(8) and \fBumount\fP(8) for general mount information, +\fBchmod\fP(1) and \fBchmod\fP(2) for access permission flags, +\fBacl\fP(5) for access control lists, +\fBlvm\fP(8) for volume management, +\fBdlm_controld\fP(8), +\fBdlm_tool\fP(8), +\fBdlm.conf\fP(5), +\fBcorosync\fP(8), +\fBcorosync.conf\fP(5), diff --git a/gfs2/man/gfs2.5.bz1757115-gfs2_5_General_updates_and_layout_improvements b/gfs2/man/gfs2.5.bz1757115-gfs2_5_General_updates_and_layout_improvements new file mode 100644 index 0000000..56d1a00 --- /dev/null +++ b/gfs2/man/gfs2.5.bz1757115-gfs2_5_General_updates_and_layout_improvements @@ -0,0 +1,419 @@ +.TH gfs2 5 + +.SH NAME +gfs2 \- GFS2 reference guide + +.SH SYNOPSIS +Overview of the GFS2 filesystem + +.SH DESCRIPTION + +GFS2 is a clustered filesystem, designed for sharing data between +multiple nodes +connected to a common shared storage device. It can also be used as a +local filesystem on a single node, however since the design is aimed +at clusters, that will usually result in lower performance than using +a filesystem designed specifically for single node use. + +GFS2 is a journaling filesystem and one journal is required for each node +that will mount the filesystem. The one exception to that is spectator +mounts which are equivalent to mounting a read-only block device and as +such can neither recover a journal or write to the filesystem, so do not +require a journal assigned to them. + +.SH MOUNT OPTIONS + +.TP +\fBlockproto=\fP\fILockProtoName\fR +This specifies which inter-node lock protocol is used by the GFS2 filesystem +for this mount, overriding the default lock protocol name stored in the +filesystem's on-disk superblock. + +The \fILockProtoName\fR must be one of the supported locking protocols, +currently these are \fIlock_nolock\fR and \fIlock_dlm\fR. + +The default lock protocol name is written to disk initially when creating the +filesystem with \fBmkfs.gfs2\fP(8), -p option. It can be changed on-disk by +using the \fBgfs2_tool\fP(8) utility's \fBsb proto\fP command. + +The \fBlockproto\fP mount option should be used only under special +circumstances in which you want to temporarily use a different lock protocol +without changing the on-disk default. Using the incorrect lock protocol +on a cluster filesystem mounted from more than one node will almost +certainly result in filesystem corruption. +.TP +\fBlocktable=\fP\fILockTableName\fR +This specifies the identity of the cluster and of the filesystem for this +mount, overriding the default cluster/filesystem identify stored in the +filesystem's on-disk superblock. The cluster/filesystem name is recognized +globally throughout the cluster, and establishes a unique namespace for +the inter-node locking system, enabling the mounting of multiple GFS2 +filesystems. + +The format of \fILockTableName\fR is lock-module-specific. For +\fIlock_dlm\fR, the format is \fIclustername:fsname\fR. For +\fIlock_nolock\fR, the field is ignored. + +The default cluster/filesystem name is written to disk initially when creating +the filesystem with \fBmkfs.gfs2\fP(8), -t option. It can be changed on-disk +by using the \fBgfs2_tool\fP(8) utility's \fBsb table\fP command. 
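+
+For example, to move an existing filesystem to a renamed cluster, the
+superblock might be updated as follows while the filesystem is unmounted on
+all nodes (the device path and names here are only placeholders):
+
+.nf
+gfs2_tool sb /dev/vg/lv table newcluster:myfs
+.fi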
+ +The \fBlocktable\fP mount option should be used only under special +circumstances in which you want to mount the filesystem in a different cluster, +or mount it as a different filesystem name, without changing the on-disk +default. +.TP +\fBlocalflocks\fP +This flag tells GFS2 that it is running as a local (not clustered) filesystem, +so it can allow the kernel VFS layer to do all flock and fcntl file locking. +When running in cluster mode, these file locks require inter-node locks, +and require the support of GFS2. When running locally, better performance +is achieved by letting VFS handle the whole job. + +This is turned on automatically by the lock_nolock module. +.TP +\fBerrors=\fP\fI[panic|withdraw]\fR +Setting errors=panic causes GFS2 to oops when encountering an error that +would otherwise cause the +mount to withdraw or print an assertion warning. The default setting +is errors=withdraw. This option should not be used in a production system. +It replaces the earlier \fBdebug\fP option on kernel versions 2.6.31 and +above. +.TP +\fBacl\fP +Enables POSIX Access Control List \fBacl\fP(5) support within GFS2. +.TP +\fBspectator\fP +Mount this filesystem using a special form of read-only mount. The mount +does not use one of the filesystem's journals. The node is unable to +recover journals for other nodes. +.TP +\fBnorecovery\fP +A synonym for spectator +.TP +\fBsuiddir\fP +Sets owner of any newly created file or directory to be that of parent +directory, if parent directory has S_ISUID permission attribute bit set. +Sets S_ISUID in any new directory, if its parent directory's S_ISUID is set. +Strips all execution bits on a new file, if parent directory owner is different +from owner of process creating the file. Set this option only if you know +why you are setting it. +.TP +\fBquota=\fP\fI[off/account/on]\fR +Turns quotas on or off for a filesystem. Setting the quotas to be in +the "account" state causes the per UID/GID usage statistics to be +correctly maintained by the filesystem, limit and warn values are +ignored. The default value is "off". +.TP +\fBdiscard\fP +Causes GFS2 to generate "discard" I/O requests for blocks which have +been freed. These can be used by suitable hardware to implement +thin-provisioning and similar schemes. This feature is supported +in kernel version 2.6.30 and above. +.TP +\fBbarrier\fP +This option, which defaults to on, causes GFS2 to send I/O barriers +when flushing the journal. The option is automatically turned off +if the underlying device does not support I/O barriers. We highly +recommend the use of I/O barriers with GFS2 at all times unless +the block device is designed so that it cannot lose its write cache +content (e.g. its on a UPS, or it doesn't have a write cache) +.TP +\fBcommit=\fP\fIsecs\fR +This is similar to the ext3 \fBcommit=\fP option in that it sets +the maximum number of seconds between journal commits if there is +dirty data in the journal. The default is 60 seconds. This option +is only provided in kernel versions 2.6.31 and above. +.TP +\fBdata=\fP\fI[ordered|writeback]\fR +When data=ordered is set, the user data modified by a transaction is +flushed to the disk before the transaction is committed to disk. This +should prevent the user from seeing uninitialized blocks in a file +after a crash. Data=writeback mode writes the user data to the disk +at any time after it's dirtied. This doesn't provide the same +consistency guarantee as ordered mode, but it should be slightly +faster for some workloads. 
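+For example, a filesystem holding only reconstructible scratch data might be
+mounted with writeback journaling (the device and mount point are
+placeholders):
+
+.nf
+mount -o data=writeback /dev/vg/scratch /mnt/scratch
+.fi
+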
The default is ordered mode. +.TP +\fBmeta\fP +This option results in selecting the meta filesystem root rather than +the normal filesystem root. This option is normally only used by +the GFS2 utility functions. Altering any file on the GFS2 meta filesystem +may render the filesystem unusable, so only experts in the GFS2 +on-disk layout should use this option. +.TP +\fBquota_quantum=\fP\fIsecs\fR +This sets the number of seconds for which a change in the quota +information may sit on one node before being written to the quota +file. This is the preferred way to set this parameter. The value +is an integer number of seconds greater than zero. The default is +60 seconds. Shorter settings result in faster updates of the lazy +quota information and less likelihood of someone exceeding their +quota. Longer settings make filesystem operations involving quotas +faster and more efficient. +.TP +\fBstatfs_quantum=\fP\fIsecs\fR +Setting statfs_quantum to 0 is the preferred way to set the slow version +of statfs. The default value is 30 secs which sets the maximum time +period before statfs changes will be syned to the master statfs file. +This can be adjusted to allow for faster, less accurate statfs values +or slower more accurate values. When set to 0, statfs will always +report the true values. +.TP +\fBstatfs_percent=\fP\fIvalue\fR +This setting provides a bound on the maximum percentage change in +the statfs information on a local basis before it is synced back +to the master statfs file, even if the time period has not +expired. If the setting of statfs_quantum is 0, then this setting +is ignored. +.TP +\fBrgrplvb\fP +This flag tells gfs2 to look for information about a resource group's free +space and unlinked inodes in its glock lock value block. This keeps gfs2 from +having to read in the resource group data from disk, speeding up allocations in +some cases. This option was added in the 3.6 Linux kernel. Prior to this +kernel, no information was saved to the resource group lvb. \fBNote:\fP To +safely turn on this option, all nodes mounting the filesystem must be running +at least a 3.6 Linux kernel. If any nodes had previously mounted the filesystem +using older kernels, the filesystem must be unmounted on all nodes before it +can be mounted with this option enabled. This option does not need to be +enabled on all nodes using a filesystem. +.TP +\fBloccookie\fP +This flag tells gfs2 to use location based readdir cookies, instead of its +usual filename hash readdir cookies. The filename hash cookies are not +guaranteed to be unique, and as the number of files in a directory increases, +so does the likelihood of a collision. NFS requires readdir cookies to be +unique, which can cause problems with very large directories (over 100,000 +files). With this flag set, gfs2 will try to give out location based cookies. +Since the cookie is 31 bits, gfs2 will eventually run out of unique cookies, +and will fail back to using hash cookies. The maximum number of files that +could have unique location cookies assuming perfectly even hashing and names of +8 or fewer characters is 1,073,741,824. An average directory should be able to +give out well over half a billion location based cookies. This option was added +in the 4.5 Linux kernel. Prior to this kernel, gfs2 did not add directory +entries in a way that allowed it to use location based readdir cookies. +\fBNote:\fP To safely turn on this option, all nodes mounting the filesystem +must be running at least a 4.5 Linux kernel. 
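+For example, every node exporting the filesystem over NFS would mount it with
+the option enabled, along the lines of (the device and mount point are
+placeholders):
+
+.nf
+mount -o loccookie /dev/vg/lv /mnt/gfs2
+.fi
+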
If this option is only enabled on +some of the nodes mounting a filesystem, the cookies returned by nodes using +this option will not be valid on nodes that are not using this option, and vice +versa. Finally, when first enabling this option on a filesystem that had been +previously mounted without it, you must make sure that there are no outstanding +cookies being cached by other software, such as NFS. + +.SH BUGS + +GFS2 doesn't support \fBerrors=\fP\fIremount-ro\fR or \fBdata=\fP\fIjournal\fR. +It is not possible to switch support for user and group quotas on and +off independently of each other. Some of the error messages are rather +cryptic, if you encounter one of these messages check firstly that gfs_controld +is running and secondly that you have enough journals on the filesystem +for the number of nodes in use. + +.SH SEE ALSO + +\fBmount\fP(8) for general mount options, +\fBchmod\fP(1) and \fBchmod\fP(2) for access permission flags, +\fBacl\fP(5) for access control lists, +\fBlvm\fP(8) for volume management, +\fBccs\fP(7) for cluster management, +\fBumount\fP(8), +\fBinitrd\fP(4). + +The GFS2 documentation has been split into a number of sections: + +\fBgfs2_edit\fP(8) A GFS2 debug tool (use with caution) +\fBfsck.gfs2\fP(8) The GFS2 file system checker +\fBgfs2_grow\fP(8) Growing a GFS2 file system +\fBgfs2_jadd\fP(8) Adding a journal to a GFS2 file system +\fBmkfs.gfs2\fP(8) Make a GFS2 file system +\fBgfs2_quota\fP(8) Manipulate GFS2 disk quotas +\fBgfs2_tool\fP(8) Tool to manipulate a GFS2 file system (obsolete) +\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks + +.SH SETUP + +GFS2 clustering is driven by the dlm, which depends on dlm_controld to +provide clustering from userspace. dlm_controld clustering is built on +corosync cluster/group membership and messaging. + +Follow these steps to manually configure and run gfs2/dlm/corosync. + +.B 1. create /etc/corosync/corosync.conf and copy to all nodes + +In this sample, replace cluster_name and IP addresses, and add nodes as +needed. If using only two nodes, uncomment the two_node line. +See corosync.conf(5) for more information. + +.nf +totem { + version: 2 + secauth: off + cluster_name: abc +} + +nodelist { + node { + ring0_addr: 10.10.10.1 + nodeid: 1 + } + node { + ring0_addr: 10.10.10.2 + nodeid: 2 + } + node { + ring0_addr: 10.10.10.3 + nodeid: 3 + } +} + +quorum { + provider: corosync_votequorum +# two_node: 1 +} + +logging { + to_syslog: yes +} +.fi + +.PP + +.B 2. start corosync on all nodes + +.nf +systemctl start corosync +.fi + +Run corosync-quorumtool to verify that all nodes are listed. + +.PP + +.B 3. create /etc/dlm/dlm.conf and copy to all nodes + +.B * +To use no fencing, use this line: + +.nf +enable_fencing=0 +.fi + +.B * +To use no fencing, but exercise fencing functions, use this line: + +.nf +fence_all /bin/true +.fi + +The "true" binary will be executed for all nodes and will succeed (exit 0) +immediately. + +.B * +To use manual fencing, use this line: + +.nf +fence_all /bin/false +.fi + +The "false" binary will be executed for all nodes and will fail (exit 1) +immediately. + +When a node fails, manually run: dlm_tool fence_ack + +.B * +To use stonith/pacemaker for fencing, use this line: + +.nf +fence_all /usr/sbin/dlm_stonith +.fi + +The "dlm_stonith" binary will be executed for all nodes. If +stonith/pacemaker systems are not available, dlm_stonith will fail and +this config becomes the equivalent of the previous /bin/false config. 
+ +.B * +To use an APC power switch, use these lines: + +.nf +device apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw +connect apc node=1 port=1 +connect apc node=2 port=2 +connect apc node=3 port=3 +.fi + +Other network switch based agents are configured similarly. + +.B * +To use sanlock/watchdog fencing, use these lines: + +.nf +device wd /usr/sbin/fence_sanlock path=/dev/fence/leases +connect wd node=1 host_id=1 +connect wd node=2 host_id=2 +unfence wd +.fi + +See fence_sanlock(8) for more information. + +.B * +For other fencing configurations see dlm.conf(5) man page. + +.PP + +.B 4. start dlm_controld on all nodes + +.nf +systemctl start dlm +.fi + +Run "dlm_tool status" to verify that all nodes are listed. + +.PP + +.B 5. if using clvm, start clvmd on all nodes + +systemctl clvmd start + +.PP + +.B 6. make new gfs2 file systems + +mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage + +The cluster_name must match the name used in step 1 above. +The fs_name must be a unique name in the cluster. +The -j option is the number of journals to create, there must +be one for each node that will mount the fs. + +.PP + +.B 7. mount gfs2 file systems + +mount /path/to/storage /mountpoint + +Run "dlm_tool ls" to verify the nodes that have each fs mounted. + +.PP + +.B 8. shut down + +.nf +umount -a -t gfs2 +systemctl clvmd stop +systemctl dlm stop +systemctl corosync stop +.fi + +.PP + +.B More setup information: +.br +.BR dlm_controld (8), +.br +.BR dlm_tool (8), +.br +.BR dlm.conf (5), +.br +.BR corosync (8), +.br +.BR corosync.conf (5) +.br
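+
+As a worked example, for the hypothetical three-node cluster \fIabc\fP used in
+the corosync.conf sample above, a filesystem could be created once and then
+mounted on each node as follows (the device path, filesystem name and mount
+point are placeholders):
+
+.nf
+mkfs.gfs2 -p lock_dlm -t abc:shared1 -j 3 /dev/vg_cluster/lv_shared
+mount /dev/vg_cluster/lv_shared /mnt/shared1
+.fi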