/usr/lib/python3/dist-packages/parsel-1.0.3.egg-info/PKG-INFO is in python3-parsel 1.0.3-2.
Metadata-Version: 1.1
Name: parsel
Version: 1.0.3
Summary: Parsel is a library to extract data from HTML and XML using XPath and CSS selectors
Home-page: https://github.com/scrapy/parsel
Author: Scrapy project
Author-email: info@scrapy.org
License: BSD
Description: ===============================
Parsel
===============================
.. image:: https://img.shields.io/travis/scrapy/parsel.svg
   :target: https://travis-ci.org/scrapy/parsel

.. image:: https://img.shields.io/pypi/v/parsel.svg
   :target: https://pypi.python.org/pypi/parsel

.. image:: https://img.shields.io/codecov/c/github/scrapy/parsel/master.svg
   :target: http://codecov.io/github/scrapy/parsel?branch=master
   :alt: Coverage report
Parsel is a library to extract data from HTML and XML using XPath and CSS selectors.
* Free software: BSD license
* Documentation: https://parsel.readthedocs.org.

Features
--------

* Extract text using CSS or XPath selectors
* Regular expression helper methods

Example::

    >>> from parsel import Selector
    >>> sel = Selector(text=u"""<html>
            <body>
                <h1>Hello, Parsel!</h1>
                <ul>
                    <li><a href="http://example.com">Link 1</a></li>
                    <li><a href="http://scrapy.org">Link 2</a></li>
                </ul>
            </body>
            </html>""")
    >>>
    >>> sel.css('h1::text').extract_first()
    u'Hello, Parsel!'
    >>>
    >>> sel.css('h1::text').re('\w+')
    [u'Hello', u'Parsel']
    >>>
    >>> for e in sel.css('ul > li'):
    ...     print(e.xpath('.//a/@href').extract_first())
    http://example.com
    http://scrapy.org

History
-------

1.0.3 (2016-07-29)
~~~~~~~~~~~~~~~~~~

* Add BSD-3-Clause license file
* Re-enable PyPy tests
* Integrate py.test runs with setuptools (needed for Debian packaging)
* Changelog is now called ``NEWS``

1.0.2 (2016-04-26)
~~~~~~~~~~~~~~~~~~

* Fix bug in exception handling causing original traceback to be lost
* Added docstrings and other doc fixes

1.0.1 (2015-08-24)
~~~~~~~~~~~~~~~~~~

* Updated PyPI classifiers
* Added docstrings for csstranslator module and other doc fixes

1.0.0 (2015-08-22)
~~~~~~~~~~~~~~~~~~

* Documentation fixes

0.9.6 (2015-08-14)
~~~~~~~~~~~~~~~~~~

* Updated documentation
* Extended test coverage

0.9.5 (2015-08-11)
~~~~~~~~~~~~~~~~~~

* Support for extending SelectorList

0.9.4 (2015-08-10)
~~~~~~~~~~~~~~~~~~

* Try workaround for travis-ci/dpl#253

0.9.3 (2015-08-07)
~~~~~~~~~~~~~~~~~~

* Add base_url argument

0.9.2 (2015-08-07)
~~~~~~~~~~~~~~~~~~

* Rename module unified -> selector and promoted root attribute
* Add create_root_node function

0.9.1 (2015-08-04)
~~~~~~~~~~~~~~~~~~

* Setup Sphinx build and docs structure
* Build universal wheels
* Rename some leftovers from package extraction

0.9.0 (2015-07-30)
~~~~~~~~~~~~~~~~~~

* First release on PyPI.
Keywords: parsel
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Natural Language :: English
Classifier: Topic :: Text Processing :: Markup
Classifier: Topic :: Text Processing :: Markup :: HTML
Classifier: Topic :: Text Processing :: Markup :: XML
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4