import wikipedia

print(wikipedia.suggest("Bill cliton"))
# bill clinton
# We passed the misspelled "Bill cliton" and got back the correct suggestion "bill clinton".
Extracting a Wikipedia Article Summary
Use the summary() method to extract the summary of a Wikipedia article.
print(wikipedia.summary("Ubuntu"))  # Extract the summary of "Ubuntu"
"""
Ubuntu ( (listen)) is a free and open-source Linux distribution based on Debian. Ubuntu is officially released in three editions: Desktop, Server, and Core (for the internet of things devices and robots). Ubuntu is a popular operating system for cloud computing, with support for OpenStack. Ubuntu is released every six months, with long-term support (LTS) releases every two years. The latest release is 19.04 ("Disco Dingo"), and the most recent long-term support release is 18.04 LTS ("Bionic Beaver"), which is supported until 2028. Ubuntu is developed by Canonical and the community under a meritocratic governance model. Canonical provides security updates and support for each Ubuntu release, starting from the release date and until the release reaches its designated end-of-life (EOL) date. Canonical generates revenue through the sale of premium services related to Ubuntu. Ubuntu is named after the African philosophy of Ubuntu, which Canonical translates as "humanity to others" or "I am what I am because of who we all are".
"""
By setting the method's sentences parameter, we can control how many sentences of the summary are returned.
print(wikipedia.summary("Ubuntu", sentences=2))
# Ubuntu ( (listen)) is a free and open-source Linux distribution based on Debian. Ubuntu is officially released in three editions: Desktop, Server, and Core (for the internet of things devices and robots).
print(wikipedia.geosearch(37.787, -122.4))
"""
['140 New Montgomery', 'New Montgomery Street', 'Cartoon Art Museum', 'San Francisco Bay Area Planning and Urban Research Association', 'Academy of Art University', 'The Montgomery (San Francisco)', 'California Historical Society', 'Palace Hotel Residential Tower', 'St. Regis Museum Tower', 'Museum of the African Diaspora']
"""
Similarly, we can pass the coordinates to page() and get the articles related to that geographic location.
print(wikipedia.page(37.787, -122.4))
"""
['140 New Montgomery', 'New Montgomery Street', 'Cartoon Art Museum', 'San Francisco Bay Area Planning and Urban Research Association', 'Academy of Art University', 'The Montgomery (San Francisco)', 'California Historical Society', 'Palace Hotel Residential Tower', 'St. Regis Museum Tower', 'Museum of the African Diaspora']
"""
wikipedia.set_lang("de")
print(wikipedia.summary("ubuntu", sentences=2))  # Get the first two sentences of the "Ubuntu" wiki page's summary in German
"""
Ubuntu (auch Ubuntu Linux) ist eine Linux-Distribution, die auf Debian basiert. Der Name Ubuntu bedeutet auf Zulu etwa „Menschlichkeit“ und bezeichnet eine afrikanische Philosophie.
"""
print(wikipedia.page("Ubuntu").html())
"""
<div class="mw-parser-output"><div role="note" class="hatnote navigation-not-searchable">For the African philosophy, see <a href="/wiki/Ubuntu_philosophy" title="Ubuntu philosophy">Ubuntu philosophy</a>. For other uses, see <a href="/wiki/Ubuntu_(disambiguation)" class="mw-disambig" title="Ubuntu (disambiguation)">Ubuntu (disambiguation)</a>.</div> <div class="shortdescription nomobile noexcerpt noprint searchaux" style="display:none">Linux distribution based on Debian</div>
...
"""
import wikipediaapi

title = "china"
wiki = wikipediaapi.Wikipedia(
    language='en',
    extract_format=wikipediaapi.ExtractFormat.WIKI
)
page = wiki.page(title)

# Look up the Chinese translation of the page; other language codes such as fr, es, ... work the same way
language = "zh"
lpage = page.langlinks[language]
print(lpage.text)
Installation
This package requires at least Python 3.4 because it uses IntEnum.
pip install wikipedia-api
Usage
The goal of Wikipedia-API is to provide a simple and easy-to-use API for retrieving information from Wikipedia. Below are examples of common use cases.
Importing
import wikipediaapi
How To Get Single Page
Getting a single page is straightforward. You have to initialize a Wikipedia object and ask for a page by its name. Its language parameter has to be one of the supported languages.
To get the full text of a Wikipedia page, use the property text, which builds the text of the page as a concatenation of the summary and the sections with their titles and texts.
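The following is a minimal sketch of this workflow, assuming the English Wikipedia and the page "Python_(programming_language)"; the variable names wiki_wiki and page_py are illustrative (later snippets in this section use a page_py object created this way), and newer releases of the library may additionally expect a user_agent argument.

import wikipediaapi

# Initialize the wrapper for the English Wikipedia
wiki_wiki = wikipediaapi.Wikipedia('en')

# Ask for a page by its name
page_py = wiki_wiki.page('Python_(programming_language)')

# Check that the page exists before working with it
print("Page exists: %s" % page_py.exists())

# summary contains only the lead section, text the whole article
print("Summary: %s" % page_py.summary[0:60])
print("Text: %s" % page_py.text[0:60])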
To get all top-level sections of a page, use the property sections. It returns a list of WikipediaPageSection objects, so you have to use recursion to get all subsections.
def print_sections(sections, level=0):
    for s in sections:
        print("%s: %s - %s" % ("*" * (level + 1), s.title, s.text[0:40]))
        print_sections(s.sections, level + 1)
print_sections(page_py.sections)
# *: History - Python was conceived in the late 1980s,
# *: Features and philosophy - Python is a multi-paradigm programming l
# *: Syntax and semantics - Python is meant to be an easily readable
# **: Indentation - Python uses whitespace indentation, rath
# **: Statements and control flow - Python's statements include (among other
# **: Expressions - Some Python expressions are similar to l
How To Get Page In Other Languages
If you want to get other translations of a given page, use the property langlinks. It is a map where the key is a language code and the value is a WikipediaPage.
def print_langlinks(page):
    langlinks = page.langlinks
    for k in sorted(langlinks.keys()):
        v = langlinks[k]
        print("%s: %s - %s: %s" % (k, v.language, v.title, v.fullurl))
print_langlinks(page_py)
# af: af - Python (programmeertaal): https://af.wikipedia.org/wiki/Python_(programmeertaal)
# als: als - Python (Programmiersprache): https://als.wikipedia.org/wiki/Python_(Programmiersprache)
# an: an - Python: https://an.wikipedia.org/wiki/Python
# ar: ar - بايثون: https://ar.wikipedia.org/wiki/%D8%A8%D8%A7%D9%8A%D8%AB%D9%88%D9%86
# as: as - পাইথন: https://as.wikipedia.org/wiki/%E0%A6%AA%E0%A6%BE%E0%A6%87%E0%A6%A5%E0%A6%A8
page_py_cs = page_py.langlinks['cs']
print("Page - Summary: %s" % page_py_cs.summary[0:60])
# Page - Summary: Python (anglická výslovnost [ˈpaiθtən]) je vysokoúrovňový sk
How To Get Links To Other Pages
If you want to get all links to other wiki pages from a given page, use the property links. It is a map where the key is a page title and the value is a WikipediaPage.
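Below is a short sketch, modeled on the categories example that follows, that prints the links of a page; it assumes the page_py object from the earlier sketch.

def print_links(page):
    # links maps each linked page title to a WikipediaPage object
    links = page.links
    for title in sorted(links.keys()):
        print("%s: %s" % (title, links[title]))

print_links(page_py)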
If you want to get all categories a page belongs to, use the property categories. It is a map where the key is a category title and the value is a WikipediaPage.
def print_categories(page):
    categories = page.categories
    for title in sorted(categories.keys()):
        print("%s: %s" % (title, categories[title]))
print("Categories")
print_categories(page_py)
# Category:All articles containing potentially dated statements: ...
# Category:All articles with unsourced statements: ...
# Category:Articles containing potentially dated statements from August 2016: ...
# Category:Articles containing potentially dated statements from March 2017: ...
# Category:Articles containing potentially dated statements from September 2017: ...
How To Get All Pages From Category
To get all pages from a given category, use the property categorymembers. It returns all members of the given category. You have to implement recursion and deduplication yourself.
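Below is a hedged sketch of such a traversal; the category name "Category:Physics" and the max_level cutoff are illustrative choices, and wiki_wiki is the Wikipedia object from the earlier sketch. The Namespace check limits the recursion to subcategories.

def print_categorymembers(categorymembers, level=0, max_level=1):
    for c in categorymembers.values():
        print("%s: %s (ns: %d)" % ("*" * (level + 1), c.title, c.ns))
        # Recurse only into subcategories, and only up to max_level deep
        if c.ns == wikipediaapi.Namespace.CATEGORY and level < max_level:
            print_categorymembers(c.categorymembers, level=level + 1, max_level=max_level)

cat = wiki_wiki.page("Category:Physics")
print_categorymembers(cat.categorymembers)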
If you have problems retrieving data, you can get the URL of the underlying API call. This will help you determine whether the problem is in the library or somewhere else.
import sys
import wikipediaapi

# Enable debug logging so that the URL of each underlying API call is shown
wikipediaapi.log.setLevel(level=wikipediaapi.logging.DEBUG)

# Set handler if you use Python in interactive mode
out_hdlr = wikipediaapi.logging.StreamHandler(sys.stderr)
out_hdlr.setFormatter(wikipediaapi.logging.Formatter('%(asctime)s %(message)s'))
out_hdlr.setLevel(wikipediaapi.logging.DEBUG)
wikipediaapi.log.addHandler(out_hdlr)