/usr/share/pacemaker/tests/cts/README is in pacemaker-dev 1.1.6-2ubuntu3.

BASIC REQUIREMENTS BEFORE STARTING:

Three or more machines:  one test exerciser and two or more test cluster machines.

	The two test cluster machines need to be on the same subnet
		and they should have journalling filesystems for
		all of their filesystems other than /boot
	You also need a number of free IP addresses on that subnet to test
		mutual IP address takeover
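
	For example, if you plan to hand 192.168.9.100 upwards to CTS
	(addresses here are placeholders), you can check that they are
	really free with something like:

	    for i in $(seq 100 110); do
	        ping -c1 -w1 192.168.9.$i >/dev/null && echo "192.168.9.$i is in use"
	    done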

	The test exerciser machine doesn't need to be on the same subnet
	as the test cluster machines.  Minimal demands are made on the 
	exerciser machine - it just has to stay up during the tests.
	However, it does need to have a current copy of the cts test
	scripts.  It is worth noting that these scripts are coordinated
	with particular versions of Pacemaker, so that in general you
	have to have the same version of test scripts as the rest of 
	Pacemaker.
	

Install Pacemaker on all machines.  

Configure cluster communications (Corosync, CMAN or Heartbeat) on the
cluster machines and verify everything works.
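
A quick sanity check that the stack is really up (this assumes the
Pacemaker command-line tools are installed on the cluster nodes) is:

    crm_mon -1

which should list every cluster node as online.  On Corosync clusters,
corosync-cfgtool -s will additionally show the ring status.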

NOTE: Do not run the cluster on the test exerciser machine.

NOTE: Wherever machine names are mentioned in these configuration files,
they must match the machines' `uname -n` name.  This may or may not match
the machines' FQDN (fully qualified domain name) - it depends on how
you (and your OS) have named the machines. 
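
For example, to see the name each node will use (pcmk-1 etc. are
placeholders; substitute your own machines, and note this assumes
passwordless root ssh is already set up as described below):

    for n in pcmk-1 pcmk-2 pcmk-3; do ssh -l root $n uname -n; done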

It helps a lot in tracking problems if the three machines' clocks are
closely synchronized.  xntpd does this, but you can do it by hand if
you want.
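
A rough way to eyeball the skew from the exerciser (node names are
placeholders again) is to print each node's clock in epoch seconds:

    for n in pcmk-1 pcmk-2 pcmk-3; do echo -n "$n: "; ssh -l root $n date +%s; done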

Make sure all your filesystems are journalling filesystems (/boot can be
ext2 if you want).  This means filesystems like ext3.
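
On Linux you can see which filesystem types are actually in use with:

    awk '{print $2, $3}' /proc/mounts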


Here's what you need to do to run CTS:

The exerciser needs to be able to ssh over to the cluster nodes as root
without a password challenge.  Configure ssh accordingly.
	(see the Mini-HOWTOs at the end for more details)

The exerciser needs to be able to resolve the machine names of the
test cluster - either by DNS or by /etc/hosts.
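
If you use /etc/hosts for this, the entries on the exerciser might
look like the following (addresses and names are examples only):

    192.168.9.1    pcmk-1
    192.168.9.2    pcmk-2
    192.168.9.3    pcmk-3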


Now, assuming you have done all this, run CTSlab.py:

    python ./CTSlab.py [options] number-of-tests-to-run

You must specify which nodes are part of the cluster:
   --nodes, e.g. --nodes "pcmk-1 pcmk-2 pcmk-3"

Most people will want to save the output:
   --outputfile, e.g. --outputfile ~/cts.log

Unless you want to test your own cluster configuration, you will also want:
   --clobber-cib
   --populate-resources
   --test-ip-base, e.g. --test-ip-base 192.168.9.100

   and configure some sort of fencing:
   --stonith, e.g. --stonith rhcs to use fence_xvm or --stonith lha to use external/ssh

A complete command line might look like:
  
  python ./CTSlab.py --nodes "pcmk-1 pcmk-2 pcmk-3" --outputfile ~/cts.log \
	 --clobber-cib --populate-resources --test-ip-base 192.168.9.100   \
	 --stonith rhcs 50


For other options, use the --help option.  See the Mini-HOWTOs at the
end for more details on setting up external/ssh.

HINT: To extract the result of a particular test, run:
  crm_report -T $test
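
(Here $test is the number of the test as it appears in the CTS log;
for example, crm_report -T 42 for test 42.)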




==============
Mini-HOWTOs:
==============

--------------------------------------------------------------------------------
How to make OpenSSH allow you to login as root across the network without
a password.
--------------------------------------------------------------------------------

All our scripts run ssh -l root, so you don't have to do any of your
testing logged in as root on the test machine.

1)	Grab your key from the exerciser machine:

	Take the single line out of ~/.ssh/id_dsa.pub and put it into
	root's authorized_keys file.  (On older installations the key
	file is ~/.ssh/identity.pub instead.)

	NOTE: If you don't have an id_dsa.pub file, create it by running:

		ssh-keygen -t dsa
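
	For fully unattended logins, the key needs an empty passphrase
	(or you must run ssh-agent); for example:

		ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa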

2)	Run this command on each of the cluster machines as root:

ssh -v -l myid exerciser-machine cat /home/myid/.ssh/id_dsa.pub \
	>> ~root/.ssh/authorized_keys

	(Substitute identity.pub for id_dsa.pub on older installations.)

	You will probably have to provide your password, and possibly say
	"yes" to some questions about accepting the identity of the
	test machines.

3)	You must also do the corresponding update for the exerciser 
	machine itself as root:

	cat /home/myid/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys

	To test this, try this command from the exerciser machine for each
	of your cluster machines, and for the exerciser machine itself:

ssh -l root cluster-machine

If this works without prompting for a password, you're in business...
If not, you need to look at the ssh/openssh documentation and the output from
the -v options above...
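
NOTE: On systems that ship the ssh-copy-id helper (most modern OpenSSH
installations), steps 1-3 can usually be reduced to one command per
machine, provided root password logins are still permitted:

    ssh-copy-id root@cluster-machine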

--------------------------------------------------------------------------------
How to configure OpenSSH for StonithdTest
--------------------------------------------------------------------------------

This configuration enables the cluster machines to ssh over to each
other without a password challenge.

1)	On each of the cluster machines, grab your key:

	Take the single line out of ~/.ssh/id_dsa.pub and put it into
	root's authorized_keys file.  (On older installations the key
	file is ~/.ssh/identity.pub instead.)

	NOTE: If you don't have an id_dsa.pub file, create it by running:

		ssh-keygen -t dsa

2)	Run this command on each of the cluster machines as root:

ssh -v -l myid cluster_machine_1 cat /home/myid/.ssh/id_dsa.pub \
	>> ~root/.ssh/authorized_keys

ssh -v -l myid cluster_machine_2 cat /home/myid/.ssh/id_dsa.pub \
	>> ~root/.ssh/authorized_keys

......

ssh -v -l myid cluster_machine_n cat /home/myid/.ssh/id_dsa.pub \
	>> ~root/.ssh/authorized_keys

	(Substitute identity.pub for id_dsa.pub on older installations.)

	You will probably have to provide your password, and possibly say
	"yes" to some questions about accepting the identity of the
	test machines.
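
	Since the commands above differ only in the machine name, a loop
	such as the following saves typing (the machine names are
	placeholders; list your actual cluster machines):

	    for node in cluster_machine_1 cluster_machine_2; do   # list all cluster machines here
	        ssh -v -l myid $node cat /home/myid/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
	    done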

To test this, try this command from each machine, for each of the
other cluster machines and for the machine itself:

    ssh -l root cluster-machine

This should work without prompting for a password.
If not, you need to look at the ssh/openssh documentation and the output from
the -v options above...

3)  Make sure the 'at' daemon is enabled on the test cluster machines.

This is normally the 'atd' service (started by /etc/init.d/atd).  This
doesn't mean just starting it; it means enabling it to start on every
boot into your default init state (probably either 3 or 5).

Usually this can be achieved with:
   chkconfig --add atd
   chkconfig atd on
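
On distributions that use systemd instead of SysV init scripts, the
equivalent (assuming the service is named atd there too) would be:
   systemctl enable atd
   systemctl start atd

Either way, you can verify that atd accepts jobs by scheduling a
throwaway job and checking the queue:
   echo /bin/true | at now + 1 minute
   atq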