/usr/lib/python3/dist-packages/testscenarios-0.4.egg-info/PKG-INFO is in python3-testscenarios 0.4-2ubuntu2.
This file is owned by root:root, with mode 0o644.
The actual contents of the file can be viewed below.
Metadata-Version: 1.1
Name: testscenarios
Version: 0.4
Summary: Testscenarios, a pyunit extension for dependency injection
Home-page: https://launchpad.net/testscenarios
Author: Robert Collins
Author-email: robertc@robertcollins.net
License: UNKNOWN
Description: *****************************************************************
testscenarios: extensions to python unittest to support scenarios
*****************************************************************
Copyright (c) 2009, Robert Collins <robertc@robertcollins.net>
Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
license at the user's choice. Copies of both licenses are available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licenses.
Unless required by applicable law or agreed to in writing, software
distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
license you chose for the specific language governing permissions and
limitations under that license.
testscenarios provides clean dependency injection for python unittest style
tests. This can be used for interface testing (testing many implementations via
a single test suite) or for classic dependency injection (provide tests with
dependencies externally to the test code itself, allowing easy testing in
different situations).
Dependencies
============
* Python 2.4+
* testtools <https://launchpad.net/testtools>
Why TestScenarios
=================
Standard Python unittest.py provides one obvious method for running a single
test_foo method with two (or more) scenarios: by creating a mix-in that
provides the functions, objects or settings that make up the scenario. This is
however limited and unsatisfying. Firstly, when two projects are cooperating
on a test suite (for instance, a plugin to a larger project may want to run
the standard tests for a given interface on its implementation), then it is
easy for them to get out of sync with each other: when the list of TestCase
classes to mix-in with changes, the plugin will either fail to run some tests
or error trying to run deleted tests. Secondly, it's not as easy to work with
runtime-created subclasses (a way of dealing with the aforementioned skew)
because they require more indirection to locate the source of the test, and
they will often be ignored by tools such as pyflakes, pylint, etc.
It is the intent of testscenarios to make dynamically running a single test
in multiple scenarios clear, easy to debug and work with even when the list
of scenarios is dynamically generated.
Defining Scenarios
==================
A **scenario** is a tuple of a string name for the scenario, and a dict of
parameters describing the scenario. The name is appended to the test name, and
the parameters are made available to the test instance when it's run.
Scenarios are presented in **scenario lists** which are typically Python lists
but may be any iterable.
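For example, a scenario list with two scenarios, each supplying an
illustrative ``param`` value, might look like::
>>> my_scenarios = [
...     ('scenario1', dict(param=1)),
...     ('scenario2', dict(param=2)),
...     ]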
Getting Scenarios applied
=========================
At its heart the concept is simple. For a given test object with a list of
scenarios we prepare a new test object for each scenario. This involves:
* Clone the test to a new test with a new id uniquely distinguishing it.
* Apply the scenario to the test by setting each key/value pair in the
scenario as an attribute on the test object.
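Conceptually (a simplified sketch only, not the library's exact
implementation), each resulting test could be produced with testtools'
``clone_test_with_new_id`` helper::
>>> from testtools import clone_test_with_new_id
>>> def apply_one_scenario(test, scenario):
...     # Illustrative helper only, not part of the testscenarios API.
...     name, parameters = scenario
...     new_test = clone_test_with_new_id(test, test.id() + '(' + name + ')')
...     for key, value in parameters.items():
...         setattr(new_test, key, value)
...     return new_test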
There are some complicating factors around making this happen seamlessly. These
factors are in two areas:
* Choosing what scenarios to use. (See Setting Scenarios For A Test).
* Getting the multiplication to happen.
Subclassing
++++++++++++
If you can subclass TestWithScenarios, then the ``run()`` method in
TestWithScenarios will take care of test multiplication. At test execution it
acts as a generator, causing multiple tests to execute. For this to work
reliably, TestWithScenarios must be first in the MRO and you must not override
``run()`` or ``__call__``. This is the most robust method, in the sense
that any test runner or test loader that obeys the python unittest protocol
will run all your scenarios.
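For instance, a minimal sketch (the class name and parameters here are
illustrative)::
>>> from testscenarios import TestWithScenarios
>>> class TestParam(TestWithScenarios):
...     scenarios = [
...         ('scenario1', dict(param=1)),
...         ('scenario2', dict(param=2)),
...         ]
...     def test_param_is_set(self):
...         # Each generated test sees the attribute from its own scenario.
...         self.assertTrue(self.param in (1, 2))
Running this class with any unittest-protocol runner executes
``test_param_is_set`` once per scenario.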
Manual generation
+++++++++++++++++
If you cannot subclass TestWithScenarios (e.g. because you are using
TwistedTestCase, or TestCaseWithResources, or any one of a number of other
useful test base classes, or need to override run() or __call__ yourself) then
you can cause scenario application to happen later by calling
``testscenarios.generate_scenarios()``. For instance::
>>> import unittest
>>> try:
... from StringIO import StringIO
... except ImportError:
... from io import StringIO
>>> from testscenarios.scenarios import generate_scenarios
This can work with loaders and runners from the standard library, or possibly other
implementations::
>>> loader = unittest.TestLoader()
>>> test_suite = unittest.TestSuite()
>>> runner = unittest.TextTestRunner(stream=StringIO())
>>> mytests = loader.loadTestsFromNames(['doc.test_sample'])
>>> test_suite.addTests(generate_scenarios(mytests))
>>> runner.run(test_suite)
<unittest...TextTestResult run=1 errors=0 failures=0>
Testloaders
+++++++++++
Some test loaders support hooks like ``load_tests`` and ``test_suite``.
Ensuring your tests have had scenario application done through these hooks can
be a good idea - it means that external test runners that support these hooks
(such as ``nose``, ``trial``, ``tribunal``) will still run your scenarios. (Of
course, if you are using the subclassing approach this is already guaranteed.)
With ``load_tests``::
>>> def load_tests(standard_tests, module, loader):
... result = loader.suiteClass()
... result.addTests(generate_scenarios(standard_tests))
... return result
As a convenience, this is available in ``load_tests_apply_scenarios``, so a
module using scenario tests need only say ::
>>> from testscenarios import load_tests_apply_scenarios as load_tests
Python 2.7 and greater support a different calling convention for
``load_tests`` (see <https://bugs.launchpad.net/bzr/+bug/607412>).
``load_tests_apply_scenarios`` copes with both.
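For reference, the equivalent hand-written hook under the newer convention
is::
>>> def load_tests(loader, standard_tests, pattern):
...     result = loader.suiteClass()
...     result.addTests(generate_scenarios(standard_tests))
...     return result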
With ``test_suite``::
>>> def test_suite():
... loader = unittest.TestLoader()
... tests = loader.loadTestsFromName(__name__)
... result = loader.suiteClass()
... result.addTests(generate_scenarios(tests))
... return result
Setting Scenarios for a test
============================
A sample test using scenarios can be found in the doc/ folder.
See `pydoc testscenarios` for details.
On the TestCase
+++++++++++++++
You can set a scenarios attribute on the test case::
>>> class MyTest(unittest.TestCase):
...
... scenarios = [
... ('scenario1', dict(param=1)),
... ('scenario2', dict(param=2)),]
This provides the main interface by which scenarios are found for a given test.
Subclasses will inherit the scenarios (unless they override the attribute).
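For example, continuing from ``MyTest`` above::
>>> class MyDerivedTest(MyTest):
...     """Runs with the two scenarios inherited from MyTest."""
...
>>> class MyOverridingTest(MyTest):
...     # Overriding the attribute replaces the inherited scenarios.
...     scenarios = [('scenario3', dict(param=3))]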
After loading
+++++++++++++
Test scenarios can also be generated arbitrarily later, as long as the test has
not yet run. Simply replace the scenarios attribute (or alter it, but be aware
that many tests may share a single scenarios attribute). For instance, in this
example some third-party tests are extended to run with a custom scenario::
>>> import testtools
>>> class TestTransport:
... """Hypothetical test case for bzrlib transport tests"""
... pass
...
>>> stock_library_tests = unittest.TestLoader().loadTestsFromNames(
... ['doc.test_sample'])
...
>>> for test in testtools.iterate_tests(stock_library_tests):
... if isinstance(test, TestTransport):
... test.scenarios = test.scenarios + [my_vfs_scenario]
...
>>> suite = unittest.TestSuite()
>>> suite.addTests(generate_scenarios(stock_library_tests))
Generated tests don't have a ``scenarios`` list, because they don't normally
require any more expansion. However, you can add a ``scenarios`` list back on
to them, and then run them through ``generate_scenarios`` again to generate the
cross product of tests. ::
>>> class CrossProductDemo(unittest.TestCase):
... scenarios = [('scenario_0_0', {}),
... ('scenario_0_1', {})]
... def test_foo(self):
... return
...
>>> suite = unittest.TestSuite()
>>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo")))
>>> for test in testtools.iterate_tests(suite):
... test.scenarios = [
... ('scenario_1_0', {}),
... ('scenario_1_1', {})]
...
>>> suite2 = unittest.TestSuite()
>>> suite2.addTests(generate_scenarios(suite))
>>> print(suite2.countTestCases())
4
Dynamic Scenarios
+++++++++++++++++
A common use case is to have the list of scenarios be dynamic based on plugins
and available libraries. An easy way to do this is to provide a global-scope
scenario list somewhere relevant to the tests that will use it, which can then
be customised, or to populate your scenarios dynamically from a registry, etc.
For instance::
>>> hash_scenarios = []
>>> try:
... from hashlib import md5
... except ImportError:
... pass
... else:
... hash_scenarios.append(("md5", dict(hash=md5)))
>>> try:
... from hashlib import sha1
... except ImportError:
... pass
... else:
... hash_scenarios.append(("sha1", dict(hash=sha1)))
...
>>> class TestHashContract(unittest.TestCase):
...
... scenarios = hash_scenarios
...
>>> class TestHashPerformance(unittest.TestCase):
...
... scenarios = hash_scenarios
Forcing Scenarios
+++++++++++++++++
The ``apply_scenarios`` function can be useful to apply scenarios to a test
that has none applied. ``apply_scenarios`` is the workhorse for
``generate_scenarios``, except that it uses the scenarios passed in rather than
introspecting the test object to determine the scenarios. The
``apply_scenarios`` function does not reset the test scenarios attribute,
allowing it to be used to layer scenarios without affecting existing scenario
selection.
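For instance, a sketch (assuming ``apply_scenarios`` takes the scenario list
followed by the test and yields the resulting tests; the ``forced`` scenario
name is illustrative)::
>>> from testscenarios.scenarios import apply_scenarios
>>> forced = [('forced', dict(param=99))]
>>> forced_suite = unittest.TestSuite()
>>> forced_suite.addTests(apply_scenarios(forced, CrossProductDemo("test_foo")))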
Generating Scenarios
====================
Some functions (currently one :-) are available to ease generation of scenario
lists for common situations.
Testing Per Implementation Module
+++++++++++++++++++++++++++++++++
It is reasonably common to have multiple Python modules that provide the same
capabilities and interface, and to want to apply the same tests to all of them.
In some cases, not all of the statically defined implementations will be able
to be used in a particular testing environment. For example, there may be both
a C and a pure-Python implementation of a module. You want to test the C
module if it can be loaded, but also to have the tests pass if the C module has
not been compiled.
The ``per_module_scenarios`` function generates a scenario for each named
module. The module object of the imported module is stored under the supplied
attribute name in the resulting scenario.
Modules which raise ``ImportError`` during import will have the
``sys.exc_info()`` of the exception set instead of the module object. Tests
can check for the attribute being a tuple to decide what to do (e.g. to skip).
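For example, a sketch (assuming ``per_module_scenarios`` takes the attribute
name and an iterable of (scenario name, module name) pairs; the module names
here are illustrative)::
>>> from testscenarios.scenarios import per_module_scenarios
>>> json_scenarios = per_module_scenarios('json_impl', [
...     ('stdlib', 'json'),
...     ('simplejson', 'simplejson'),
...     ])
>>> class TestJsonLike(unittest.TestCase):
...     scenarios = json_scenarios
...     def test_dumps(self):
...         if isinstance(self.json_impl, tuple):
...             self.skipTest('implementation not importable')
...         # All access goes through the attribute set by the scenario.
...         self.assertEqual('{}', self.json_impl.dumps({}))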
Note that for the test to be valid, all access to the module under test must go
through the relevant attribute of the test object. If one of the
implementations is also directly imported by the test module or any other,
testscenarios will not magically stop it being used.
Advice on Writing Scenarios
===========================
If, because of a bug, a parameterised test is run without being parameterised,
it should fail rather than run with defaults, because running with defaults can
hide bugs.
Producing Scenarios
===================
The `multiply_scenarios` function produces the cross-product of the scenarios
passed in::
>>> from testscenarios.scenarios import multiply_scenarios
>>>
>>> scenarios = multiply_scenarios(
... [('scenario1', dict(param1=1)), ('scenario2', dict(param1=2))],
... [('scenario2', dict(param2=1))],
... )
>>> scenarios == [('scenario1,scenario2', {'param2': 1, 'param1': 1}),
... ('scenario2,scenario2', {'param2': 1, 'param1': 2})]
True
Platform: UNKNOWN
Classifier: Development Status :: 6 - Mature
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing