Update Clusters From Scratch for pacemaker-3.0 / Alma Linux 10#4063
clumens wants to merge 10 commits into ClusterLabs:main from
Conversation
* Update screenshots for Alma Linux 10.
* Fix the Installation Destination screenshot to be correct instead of a copy of the Manual Partitioning screenshot.
* Update the SSH instructions, since logging in over SSH as root is no longer allowed by default.
* Minor updates to command output.

Ref T910
* Add the rest of the `pcs status help` output. This isn't super important, but we say we're going to show all the options, so we should do that.
* Update `pacemakerd --features` output.

Ref T910
No stonith devices and stonith-enabled is not false
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
I know these aren't the right error messages on main. This is what pcs on Alma Linux 10 produces, and I don't have a good sense of whether it would all be on one line or not. Also, the first message here probably needs to be updated to match the main branch too.
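For context, these messages appear when no fencing is configured yet. A hedged sketch of how this is typically handled at this point in CFS (the `stonith-enabled` property is standard Pacemaker; `pcs property config` assumes pcs 0.11+, where it replaced `pcs property show`):

```shell
# Inspect the current value of the stonith-enabled cluster property
pcs property config stonith-enabled

# For a demo cluster only: disable STONITH so resources can start.
# CFS re-enables this once real fencing devices are configured --
# clusters with shared data need STONITH to ensure data integrity.
pcs property set stonith-enabled=false
```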
* GPG key location has changed.
* SELinux policy no longer seems to be relevant (and the semanage command doesn't work anyway).
* Fix a couple of typos.
* Command output (especially from pcs) has changed.

Ref T910
GFS2 is not available for RHEL 10, so it's not available for any of the related operating systems either. You have to install it from source, which is beyond what we want to explain in CFS. However, I still want to leave the GFS2 documentation here for people who do install from source, who are following this document for an older release, or in case it becomes available again. I've updated the pcs commands and output as best I can, but I was unable to run any of these commands on my test RHEL 10 system to verify them. Ref T910
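For readers who do build GFS2 from source, a sketch of the shape of the configuration the Active/Active chapter sets up — a DLM clone that GFS2 requires, plus a cloned GFS2 mount ordered after it. Resource names, the DRBD device, and the mountpoint are illustrative, and as noted above none of this could be verified on a RHEL 10 test system:

```shell
# DLM provides the cluster-wide locking GFS2 depends on;
# run it as a clone on every node
pcs resource create dlm --group locking ocf:pacemaker:controld \
    op monitor interval=60s on-fail=fence
pcs resource clone locking interleave=true

# A GFS2 filesystem mounted on every node (device/path are examples)
pcs resource create WebFS --group shared_fs ocf:heartbeat:Filesystem \
    device="/dev/drbd1" directory="/var/www/html" fstype="gfs2" \
    op monitor interval=10s on-fail=fence
pcs resource clone shared_fs interleave=true

# The filesystem must start after, and run alongside, the locking stack
pcs constraint order start locking-clone then shared_fs-clone
pcs constraint colocation add shared_fs-clone with locking-clone
```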
IMO a very reasonable alternative would be to switch the demo to HA LVM (active/passive).

I'm not certain why CFS uses active/active storage in the first place. The top section mentions "DRBD as a cost-effective alternative to shared storage," so I guess the idea is to make Pacemaker clusters look as "simple" and cheap as possible by not requiring true shared storage. ("Simple" is up for debate.) However, shared storage is not especially hard to configure using iSCSI. CFS could say that the same block storage needs to be presented to all nodes (possibly including a very basic iSCSI tutorial to replace the DRBD tutorial, or pointing to an external one), and then do an active/passive storage+filesystem resource configuration.

Thoughts?
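As a rough sketch of what the HA LVM alternative could look like, assuming shared block storage (e.g. iSCSI) is already visible on all nodes — the volume group, logical volume, and mountpoint names here are illustrative:

```shell
# Activate a shared volume group on one node at a time, then mount a
# plain (non-cluster) filesystem on top of it. Both resources live in
# one group, so they start in order and fail over together.
pcs resource create my_lvm ocf:heartbeat:LVM-activate \
    vgname=shared_vg vg_access_mode=system_id --group storage
pcs resource create my_fs ocf:heartbeat:Filesystem \
    device="/dev/shared_vg/shared_lv" directory="/var/www/html" \
    fstype="xfs" --group storage
```

With this approach there is no DLM and no cluster filesystem to install, which sidesteps the GFS2 availability problem entirely.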
@nrwahl2 I was only able to update through the DRBD chapter. The Active/Active chapter requires installing GFS2, which we don't ship in RHEL 10 and which therefore doesn't appear to be in any of the clones either. I haven't found a good way to install it anywhere, which makes me wonder what to do next. It's a major portion of the CFS book.