
Regular Expression HOWTO(正则表达式操作指南)


本文出处:http://www.amk.ca/python/howto/regex/

本文作者:A.M. Kuchling (amk@amk.ca)

本文版本:0.05

翻译人员:FireHare

校对人员:Leal

适用版本:Python 1.5 及后续版本

文章状态:校对阶段


Abstract(摘要)

This document is an introductory tutorial to using regular expressions in Python with the re module. It provides a gentler introduction than the corresponding section in the Library Reference.
本文是通过Python的 re 模块来使用正则表达式的一个入门教程,和库参考手册的对应章节相比,更为浅显易懂、循序渐进。

This document is available from http://www.amk.ca/python/howto .
本文可以从 http://www.amk.ca/python/howto 获取。



Introduction(简介)

The re module was added in Python 1.5, and provides Perl-style regular expression patterns. Earlier versions of Python came with the regex module, which provides Emacs-style patterns. Emacs-style patterns are slightly less readable and don't provide as many features, so there's not much reason to use the regex module when writing new code, though you might encounter old code that uses it.
Python 自1.5版本起增加了 re 模块,它提供 Perl 风格的正则表达式模式。Python 1.5 之前的版本则通过 regex 模块提供 Emacs 风格的模式。Emacs 风格模式可读性稍差些,而且功能也不强,因此编写新代码时尽量不要再使用 regex 模块,当然偶尔你还是可能在老代码里发现其踪影。

Regular expressions (or REs) are essentially a tiny, highly specialized programming language embedded inside Python and made available through the re module. Using this little language, you specify the rules for the set of possible strings that you want to match; this set might contain English sentences, or e-mail addresses, or TeX commands, or anything you like. You can then ask questions such as "Does this string match the pattern?", or "Is there a match for the pattern anywhere in this string?". You can also use REs to modify a string or to split it apart in various ways.
就其本质而言,正则表达式(或 RE)是一种小型的、高度专业化的编程语言,它内嵌在 Python 中,并通过 re 模块提供使用。使用这个小型语言,你可以为想要匹配的相应字符串集指定规则;该字符串集可能包含英文语句、e-mail 地址、TeX 命令或任何你想搞定的东西。然后你可以问诸如“这个字符串匹配该模式吗?”或“在这个字符串中是否有部分匹配该模式呢?”之类的问题。你也可以使用 RE 以各种方式来修改或分割字符串。

Regular expression patterns are compiled into a series of bytecodes which are then executed by a matching engine written in C. For advanced use, it may be necessary to pay careful attention to how the engine will execute a given RE, and write the RE in a certain way in order to produce bytecode that runs faster. Optimization isn't covered in this document, because it requires that you have a good understanding of the matching engine's internals.
正则表达式模式被编译成一系列的字节码,然后由用 C 编写的匹配引擎执行。在高级用法中,也许还要仔细留意引擎是如何执行给定 RE 的,并以特定方式编写 RE,以令生成的字节码运行速度更快。本文并不涉及优化,因为那要求你已充分掌握了匹配引擎的内部机制。

The regular expression language is relatively small and restricted, so not all possible string processing tasks can be done using regular expressions. There are also tasks that can be done with regular expressions, but the expressions turn out to be very complicated. In these cases, you may be better off writing Python code to do the processing; while Python code will be slower than an elaborate regular expression, it will also probably be more understandable.
正则表达式语言相对小型和受限(功能有限),因此并非所有字符串处理都能用正则表达式完成。当然也有些任务可以用正则表达式完成,不过最终表达式会变得异常复杂。碰到这些情形时,编写 Python 代码进行处理可能反而更好;尽管 Python 代码比一个精巧的正则表达式要慢些,但它更易理解。

Simple Patterns(简单模式)

We'll start by learning about the simplest possible regular expressions. Since regular expressions are used to operate on strings, we'll begin with the most common task: matching characters.
我们将从最简单的正则表达式学习开始。由于正则表达式常用于字符串操作,那我们就从最常见的任务:字符匹配 下手。

For a detailed explanation of the computer science underlying regular expressions (deterministic and non-deterministic finite automata), you can refer to almost any textbook on writing compilers.
有关正则表达式底层的计算机科学上的详细解释(确定性和非确定性有限自动机),你可以查阅编写编译器相关的任何教科书。

Matching Characters(字符匹配)

Most letters and characters will simply match themselves. For example, the regular expression test will match the string "test" exactly. (You can enable a case-insensitive mode that would let this RE match "Test" or "TEST" as well; more about this later.)
大多数字母和字符一般都会和自身匹配。例如,正则表达式 test 会和字符串“test”完全匹配。(你也可以使用大小写不敏感模式,它还能让这个 RE 匹配“Test”或“TEST”;稍后会有更多解释。)

There are exceptions to this rule; some characters are special, and don't match themselves. Instead, they signal that some out-of-the-ordinary thing should be matched, or they affect other portions of the RE by repeating them. Much of this document is devoted to discussing various metacharacters and what they do.
这个规则当然会有例外;有些字符比较特殊,它们和自身并不匹配,而是会表明应和一些特殊的东西匹配,或者它们会影响到 RE 其它部分的重复次数。本文很大篇幅专门讨论了各种元字符及其作用。

Here's a complete list of the metacharacters; their meanings will be discussed in the rest of this HOWTO.
这里有一个元字符的完整列表;其含义会在本指南余下部分进行讨论。

. ^ $ * + ? { [ ] \ | ( )

The first metacharacters we'll look at are "[" and "]". They're used for specifying a character class, which is a set of characters that you wish to match. Characters can be listed individually, or a range of characters can be indicated by giving two characters and separating them by a "-". For example, [abc] will match any of the characters "a", "b", or "c"; this is the same as [a-c], which uses a range to express the same set of characters. If you wanted to match only lowercase letters, your RE would be [a-z].
我们首先考察的元字符是 "[" 和 "]"。它们常用来指定一个字符类别,所谓字符类别就是你想匹配的一个字符集。字符可以单个列出,也可以用“-”号分隔的两个给定字符来表示一个字符区间。例如,[abc] 将匹配 "a"、"b" 或 "c" 中的任意一个字符;也可以用区间 [a-c] 来表示同一字符集,和前者效果一致。如果你只想匹配小写字母,那么 RE 应写成 [a-z]。

Metacharacters are not active inside classes. For example, [akm$] will match any of the characters "a", "k", "m", or "$"; "$" is usually a metacharacter, but inside a character class it's stripped of its special nature.
元字符在类别里并不起作用。例如,[akm$]将匹配字符"a", "k", "m", 或 "$" 中的任意一个;"$"通常用作元字符,但在字符类别里,其特性被除去,恢复成普通字符。

You can match the characters not within a range by complementing the set. This is indicated by including a "^" as the first character of the class; "^" elsewhere will simply match the "^" character. For example, [^5] will match any character except "5".
你可以用补集来匹配不在区间范围内的字符。其做法是把 "^" 作为类别的首个字符;其它地方的 "^" 只会简单匹配 "^" 字符本身。例如,[^5] 将匹配除 "5" 之外的任意字符。
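
To make the character-class rules above concrete, here is a small interactive sketch. It uses re.findall(), which is only introduced later in this HOWTO, and the sample strings are made up:

#!python
>>> import re
>>> re.findall('[abc]', 'cabbage')   # same set of characters as [a-c]
['c', 'a', 'b', 'b', 'a']
>>> re.findall('[^5]', '5359')       # complemented class: anything but "5"
['3', '9']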

Perhaps the most important metacharacter is the backslash, "\". As in Python string literals, the backslash can be followed by various characters to signal various special sequences. It's also used to escape all the metacharacters so you can still match them in patterns; for example, if you need to match a "[" or "\", you can precede them with a backslash to remove their special meaning: \[ or \\.
也许最重要的元字符是反斜杠 "\"。和 Python 的字符串字面值一样,反斜杠后面可以跟不同的字符以表示不同的特殊序列。它也可以用于转义所有的元字符,这样你就可以在模式中匹配它们了。举个例子,如果你需要匹配字符 "[" 或 "\",你可以在它们之前加上反斜杠来消除它们的特殊意义: \[ 或 \\。

Some of the special sequences beginning with "\" represent predefined sets of characters that are often useful, such as the set of digits, the set of letters, or the set of anything that isn't whitespace. The following predefined special sequences are available:
一些用 "\" 开始的特殊字符所表示的预定义字符集通常是很有用的,象数字集,字母集,或其它非空字符集。下列是可用的预设特殊字符:

\d Matches any decimal digit; this is equivalent to the class [0-9].
匹配任何十进制数;它相当于类 [0-9]。

\D Matches any non-digit character; this is equivalent to the class [^0-9].
匹配任何非数字字符;它相当于类 [^0-9]。

\s Matches any whitespace character; this is equivalent to the class [ \t\n\r\f\v].
匹配任何空白字符;它相当于类 [ \t\n\r\f\v]。

\S Matches any non-whitespace character; this is equivalent to the class [^ \t\n\r\f\v].
匹配任何非空白字符;它相当于类 [^ \t\n\r\f\v]。

\w Matches any alphanumeric character; this is equivalent to the class [a-zA-Z0-9_].
匹配任何字母数字字符;它相当于类 [a-zA-Z0-9_]。

\W Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_].
匹配任何非字母数字字符;它相当于类 [^a-zA-Z0-9_]。

These sequences can be included inside a character class. For example, [\s,.] is a character class that will match any whitespace character, or "," or ".".
这样特殊字符都可以包含在一个字符类中。如,[\s,.]字符类将匹配任何空白字符或","或"."。
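
As a rough illustration of the predefined sequences, again using re.findall() (covered later in this HOWTO) on arbitrary sample strings:

#!python
>>> import re
>>> re.findall(r'\d', 'room 101, floor 3')   # decimal digits, i.e. [0-9]
['1', '0', '1', '3']
>>> re.findall(r'\w+', "it's 2 words")       # runs of alphanumeric characters
['it', 's', '2', 'words']
>>> re.findall(r'[\s,.]', 'a, b.')           # whitespace, "," or "." inside a class
[',', ' ', '.']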

The final metacharacter in this section is ".". It matches anything except a newline character, and there's an alternate mode (re.DOTALL) where it will match even a newline. "." is often used where you want to match "any character".
本节最后一个元字符是 "."。它匹配除换行符以外的任何字符,而在另一种模式(re.DOTALL)下它甚至可以匹配换行符。"." 通常被用在你想匹配“任何字符”的地方。

Repeating Things(重复)

Being able to match varying sets of characters is the first thing regular expressions can do that isn't already possible with the methods available on strings. However, if that was the only additional capability of regexes, they wouldn't be much of an advance. Another capability is that you can specify that portions of the RE must be repeated a certain number of times.
正则表达式第一件能做的事是能够匹配不定长的字符集,而这是其它能作用在字符串上的方法所不能做到的。 不过,如果那是正则表达式唯一的附加功能的话,那么它们也就不那么优秀了。它们的另一个功能就是你可以指定正则表达式的一部分的重复次数。

The first metacharacter for repeating things that we'll look at is *. * doesn't match the literal character "*"; instead, it specifies that the previous character can be matched zero or more times, instead of exactly once.
我们讨论的第一个表示重复的元字符是 *。* 并不匹配字面字符 "*";相反,它指定前一个字符可以被匹配零次或更多次,而不是只有一次。

For example, ca*t will match "ct" (0 "a"characters), "cat" (1 "a"), "caaat" (3 "a"characters), and so forth. The RE engine has various internal limitations stemming from the size of C's int type, that will prevent it from matching over 2 billion "a" characters; you probably don't have enough memory to construct a string that large, so you shouldn't run into that limit.
举个例子,ca*t 将匹配 "ct"(0 个 "a" 字符)、"cat"(1 个 "a")、"caaat"(3 个 "a" 字符)等等。RE 引擎有各种源自 C 语言 int 类型大小的内部限制,这些限制使它无法匹配超过 20 亿个 "a" 字符;你大概也没有足够的内存去构造那么大的字符串,所以实际上不会碰到那个限制。

Repetitions such as * are greedy; when repeating a RE, the matching engine will try to repeat it as many times as possible. If later portions of the pattern don't match, the matching engine will then back up and try again with few repetitions.
象 * 这样的重复是“贪婪的”;当重复一个 RE 时,匹配引擎会试着重复尽可能多的次数。如果模式的后面部分没有匹配上,匹配引擎就会退回,并以更少的重复次数再次尝试。

A step-by-step example will make this more obvious. Let's consider the expression a[bcd]*b. This matches the letter "a", zero or more letters from the class [bcd], and finally ends with a "b". Now imagine matching this RE against the string "abcbd".
一步步的示例可以使它更加清晰。让我们考虑表达式 a[bcd]*b。它匹配字母 "a",零个或更多个来自类 [bcd]中的字母,最后以 "b" 结尾。现在想一想该 RE 对字符串 "abcbd" 的匹配。

Step(步骤) Matched(已匹配) Explanation(说明)
1 a The a in the RE matches.
RE 中的 a 匹配成功。
2 abcbd The engine matches [bcd]*, going as far as it can, which is to the end of the string.
引擎匹配 [bcd]*,尽其所能一直匹配到字符串的末尾。
3 Failure The engine tries to match b, but the current position is at the end of the string, so it fails.
引擎尝试匹配 b,但当前位置已经在字符串末尾,所以失败。
4 abcb Back up, so that [bcd]* matches one less character.
退回,让 [bcd]* 少匹配一个字符。
5 Failure Try b again, but the current position is at the last character, which is a "d".
再次尝试 b,但当前位置上的字符是最后一个字符 "d",所以失败。
6 abc Back up again, so that [bcd]* is only matching "bc".
再次退回,[bcd]* 只匹配 "bc"。
7 abcb Try b again. This time the character at the current position is "b", so it succeeds.
再次尝试 b,这次当前位置上的字符正好是 "b",所以成功了。

The end of the RE has now been reached, and it has matched "abcb". This demonstrates how the matching engine goes as far as it can at first, and if no match is found it will then progressively back up and retry the rest of the RE again and again. It will back up until it has tried zero matches for [bcd]*, and if that subsequently fails, the engine will conclude that the string doesn't match the RE at all.
现在已经到达 RE 的末尾,它匹配到了 "abcb"。这演示了匹配引擎一开始会尽其所能进行匹配,如果没有找到匹配,它就会逐步退回并反复尝试 RE 余下的部分,直到它退回到尝试 [bcd]* 匹配零次为止;如果随后还是失败,引擎就会认为该字符串根本无法与这个 RE 匹配。
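
A minimal interactive sketch of the greedy behaviour described above; re.match() is introduced in the next section, and the strings are arbitrary:

#!python
>>> import re
>>> re.match('ca*t', 'caaat').group()        # * matches as many "a"s as it can
'caaat'
>>> re.match('a[bcd]*b', 'abcbd').group()    # backs up until it can end on "b"
'abcb'
>>> print re.match('a[bcd]*b', 'acd')        # no final "b" anywhere, so no match
None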

Another repeating metacharacter is +, which matches one or more times. Pay careful attention to the difference between * and +; * matches zero or more times, so whatever's being repeated may not be present at all, while + requires at least one occurrence. To use a similar example, ca+t will match "cat" (1 "a"), "caaat" (3 "a"'s), but won't match "ct".
另一个重复元字符是 +,表示匹配一或更多次。请注意 * 和 + 之间的不同;* 匹配零或更多次,所以根本就可以不出现,而 + 则要求至少出现一次。用同一个例子,ca+t 就可以匹配 "cat" (1 个 "a"), "caaat" (3 个 "a"), 但不能匹配 "ct"。

There are two more repeating qualifiers. The question mark character, ?, matches either once or zero times; you can think of it as marking something as being optional. For example, home-?brew matches either "homebrew" or "home-brew".
还有另外两个重复限定符。问号 ? 匹配一次或零次;你可以认为它用于标识某事物是可选的。例如:home-?brew 匹配 "homebrew" 或 "home-brew"。

The most complicated repeated qualifier is {m,n}, where m and n are decimal integers. This qualifier means there must be at least m repetitions, and at most n. For example, a/{1,3}b will match "a/b", "a//b", and "a///b". It won't match "ab", which has no slashes, or "a////b", which has four.
最复杂的重复限定符是 {m,n},其中 m 和 n 是十进制整数。该限定符的意思是至少有 m 个重复,至多到 n 个重复。举个例子,a/{1,3}b 将匹配 "a/b","a//b" 和 "a///b"。它不能匹配 "ab" 因为没有斜杠,也不能匹配 "a////b" ,因为有四个。

You can omit either m or n; in that case, a reasonable value is assumed for the missing value. Omitting m is interpreted as a lower limit of 0, while omitting n results in an upper bound of infinity -- actually, the 2 billion limit mentioned earlier, but that might as well be infinity.
你可以省略 m 或 n;这时会为缺失的值假定一个合理的取值。省略 m 会被解释为下限是 0,而省略 n 的结果是上限为无穷大 -- 实际上是先前提到的 20 亿这个上限,但这与无穷大也差不多了。

Readers of a reductionist bent may notice that the three other qualifiers can all be expressed using this notation. {0,} is the same as *, {1,} is equivalent to +, and {0,1} is the same as ?. It's better to use *, +, or ? when you can, simply because they're shorter and easier to read.
倾向于简化的读者也许会注意到,另外三个限定符都可以用这种记法来表示。{0,} 等同于 *,{1,} 等同于 +,而 {0,1} 则与 ? 相同。不过在可以用 *、+ 或 ? 的时候最好还是用它们,很简单,因为它们更短也更容易读懂。
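
The following quick sketch (using re.match(), introduced in the next section, on made-up strings) shows the remaining qualifiers in action:

#!python
>>> import re
>>> print re.match('ca+t', 'ct')             # + requires at least one "a"
None
>>> re.match('home-?brew', 'homebrew').group()
'homebrew'
>>> re.match('a/{1,3}b', 'a//b').group()     # one to three slashes are accepted
'a//b'
>>> print re.match('a/{1,3}b', 'a////b')     # four slashes are too many
None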

Using Regular Expressions(使用正则表达式)

Now that we've looked at some simple regular expressions, how do we actually use them in Python? The re module provides an interface to the regular expression engine, allowing you to compile REs into objects and then perform matches with them.
现在我们已经看了一些简单的正则表达式,那么我们实际在 Python 中是如何使用它们的呢? re 模块提供了一个正则表达式引擎的接口,可以让你将 REs 编译成对象并用它们来进行匹配。

Compiling Regular Expressions(编译正则表达式)

Regular expressions are compiled into `RegexObject` instances, which have methods for various operations such as searching for pattern matches or performing string substitutions.
正则表达式被编译成 `RegexObject` 实例,可以为不同的操作提供方法,如模式匹配搜索或字符串替换。

#!python
>>> import re
>>> p = re.compile('ab*')
>>> print p
<re.RegexObject instance at 80b4150>

re.compile() also accepts an optional flags argument, used to enable various special features and syntax variations. We'll go over the available settings later, but for now a single example will do:
re.compile() 也接受可选的标志参数,常用来实现不同的特殊功能和语法变更。我们稍后将查看所有可用的设置,但现在只举一个例子:

#!python
>>> p = re.compile('ab*', re.IGNORECASE)

The RE is passed to re.compile() as a string. REs are handled as strings because regular expressions aren't part of the core Python language, and no special syntax was created for expressing them. (There are applications that don't need REs at all, so there's no need to bloat the language specification by including them.) Instead, the re module is simply a C extension module included with Python, just like the socket or zlib module.
RE 被做为一个字符串传给 re.compile()。REs 被处理成字符串是因为正则表达式不是 Python 语言的核心部分,也没有为它创建特定的语法。(有些应用程序根本不需要 REs,因此没必要把它们包含进来使语言规范变得臃肿。)而 re 模块则只是以一个 C 扩展模块的形式被 Python 包含,就象 socket 或 zlib 模块一样。

Putting REs in strings keeps the Python language simpler, but has one disadvantage which is the topic of the next section.
把 REs 放在字符串里可以保持 Python 语言的简洁,但这样做有一个缺点,也就是下一节的主题。

The Backslash Plague(反斜杠的麻烦)

As stated earlier, regular expressions use the backslash character ("\") to indicate special forms or to allow special characters to be used without invoking their special meaning. This conflicts with Python's usage of the same character for the same purpose in string literals.
如前所述,正则表达式用反斜杠字符("\")来表示特殊形式,或允许使用特殊字符而不调用其特殊含义。这与 Python 在字符串字面值中将同一字符用于相同目的的做法产生了冲突。

Let's say you want to write a RE that matches the string "\section", which might be found in a LATEX file. To figure out what to write in the program code, start with the desired string to be matched. Next, you must escape any backslashes and other metacharacters by preceding them with a backslash, resulting in the string "\\section". The resulting string that must be passed to re.compile() must be \\section. However, to express this as a Python string literal, both backslashes must be escaped again.
让我们举例说明,你想写一个 RE 以匹配字符串 "\section",它可能出现在一个 LATEX 文件里。为了弄清楚在程序代码中该写什么,先从想要匹配的字符串开始。接下来,你必须在所有反斜杠和其他元字符前加上反斜杠进行转义,得到字符串 "\\section"。传给 re.compile() 的字符串必须是 \\section。然而,要把它表示成 Python 字符串字面值,这两个反斜杠还必须再次转义。

Characters(字符) Stage(阶段)
\section Text string to be matched(要匹配的字符串)
\\section Escaped backslash for re.compile(为 re.compile 取消反斜杠的特殊意义)
"\\\\section" Escaped backslashes for a string literal(为字符串取消反斜杠)

In short, to match a literal backslash, one has to write '\\\\' as the RE string, because the regular expression must be "\\", and each backslash must be expressed as "\\" inside a regular Python string literal. In REs that feature backslashes repeatedly, this leads to lots of repeated backslashes and makes the resulting strings difficult to understand.
简单地说,为了匹配一个反斜杠,不得不在 RE 字符串中写 '\\\\',因为正则表达式必须是 "\\",而在常规的 Python 字符串字面值中,每个反斜杠又必须表示成 "\\"。在大量使用反斜杠的 REs 中,这会导致许多重复的反斜杠,使得所生成的字符串很难读懂。

The solution is to use Python's raw string notation for regular expressions; backslashes are not handled in any special way in a string literal prefixed with "r", so r"\n" is a two-character string containing "\" and "n", while "\n" is a one-character string containing a newline. Frequently regular expressions will be expressed in Python code using this raw string notation.
解决的办法就是对正则表达式使用 Python 的 raw 字符串表示;在以 "r" 为前缀的字符串字面值中,反斜杠不会被做任何特殊处理,所以 r"\n" 就是包含 "\" 和 "n" 的两个字符,而 "\n" 则是一个字符,表示一个换行。正则表达式在 Python 代码中通常都用这种 raw 字符串表示。

Regular String(常规字符串) Raw string(Raw 字符串)
"ab*" r"ab*"
"\\\\section" r"\\section"
"\\w+\\s+\\1" r"\w+\s+\1"

Performing Matches(执行匹配)

Once you have an object representing a compiled regular expression, what do you do with it? `RegexObject` instances have several methods and attributes. Only the most significant ones will be covered here; consult the Library Reference for a complete listing.
一旦你有了已编译的正则表达式对象,你要用它做什么呢?`RegexObject` 实例有一些方法和属性。这里只介绍最重要的几个,如果要看完整的列表请查阅 Library Reference。

Method/Attribute(方法/属性) Purpose(作用)
match() Determine if the RE matches at the beginning of the string.
决定 RE 是否在字符串刚开始的位置匹配
search() Scan through a string, looking for any location where this RE matches.
扫描字符串,找到这个 RE 匹配的位置
findall() Find all substrings where the RE matches, and returns them as a list.
找到 RE 匹配的所有子串,并把它们作为一个列表返回
finditer() Find all substrings where the RE matches, and returns them as an iterator.
找到 RE 匹配的所有子串,并把它们作为一个迭代器返回

match() and search() return None if no match can be found. If they're successful, a `MatchObject` instance is returned, containing information about the match: where it starts and ends, the substring it matched, and more.
如果没有匹配到的话,match() 和 search() 将返回 None。如果成功的话,就会返回一个 `MatchObject` 实例,其中有这次匹配的信息:它是从哪里开始和结束,它所匹配的子串等等。

You can learn about this by interactively experimenting with the re module. If you have Tkinter available, you may also want to look at Tools/scripts/redemo.py, a demonstration program included with the Python distribution. It allows you to enter REs and strings, and displays whether the RE matches or fails. redemo.py can be quite useful when trying to debug a complicated RE. Phil Schwartz's Kodos is also an interactive tool for developing and testing RE patterns. This HOWTO will use the standard Python interpreter for its examples.
你可以通过交互式地试验 re 模块来学习它。如果你有 Tkinter 的话,也可以看看 Tools/scripts/redemo.py,这是一个包含在 Python 发行版里的演示程序。它允许你输入 REs 和字符串,并显示 RE 是匹配还是失败。在调试复杂的 RE 时 redemo.py 会非常有用。Phil Schwartz 的 Kodos 也是一个用于开发和测试 RE 模式的交互式工具。本 HOWTO 的示例将使用标准的 Python 解释器。

First, run the Python interpreter, import the re module, and compile a RE:
首先,运行 Python 解释器,导入 re 模块并编译一个 RE:

#!python
Python 2.2.2 (#1, Feb 10 2003, 12:57:01)
>>> import re
>>> p = re.compile('[a-z]+')
>>> p
<_sre.SRE_Pattern object at 80c3c28>

Now, you can try matching various strings against the RE [a-z]+. An empty string shouldn't match at all, since + means 'one or more repetitions'. match() should return None in this case, which will cause the interpreter to print no output. You can explicitly print the result of match() to make this clear.
现在,你可以试着用 RE [a-z]+ 去匹配不同的字符串。一个空字符串根本不会匹配,因为 + 的意思是“一个或更多次重复”。在这种情况下 match() 将返回 None,这会使解释器不打印任何输出。你可以明确地打印出 match() 的结果来弄清这一点。

#!python
>>> p.match("")
>>> print p.match("")
None

Now, let's try it on a string that it should match, such as "tempo". In this case, match() will return a MatchObject, so you should store the result in a variable for later use.
现在,让我们试着用它来匹配一个字符串,如 "tempo"。这时,match() 将返回一个 MatchObject。因此你可以将结果保存在变量里以便后面使用。

#!python
>>> m = p.match( 'tempo')
>>> print m
<_sre.SRE_Match object at 80c4f68>

Now you can query the `MatchObject` for information about the matching string. `MatchObject` instances also have several methods and attributes; the most important ones are:
现在你可以查询 `MatchObject` 关于匹配字符串的相关信息了。MatchObject 实例也有几个方法和属性;最重要的那些如下所示:

Method/Attribute(方法/属性) Purpose(作用)
group() Return the string matched by the RE
返回被 RE 匹配的字符串
start() Return the starting position of the match
返回匹配开始的位置
end() Return the ending position of the match
返回匹配结束的位置
span() Return a tuple containing the (start, end) positions of the match
返回一个元组包含匹配 (开始,结束) 的位置

Trying these methods will soon clarify their meaning:
试试这些方法不久就会清楚它们的作用了:

#!python
>>> m.group()
'tempo'
>>> m.start(), m.end()
(0, 5)
>>> m.span()
(0, 5)

group() returns the substring that was matched by the RE. start() and end() return the starting and ending index of the match. span() returns both start and end indexes in a single tuple. Since the match method only checks if the RE matches at the start of a string, start() will always be zero. However, the search method of `RegexObject` instances scans through the string, so the match may not start at zero in that case.
group() 返回 RE 匹配的子串。start() 和 end() 返回匹配开始和结束时的索引。span() 则用单个元组把开始和结束的索引一起返回。由于 match 方法只检查 RE 是否在字符串的起始处匹配,所以 start() 总是为零。然而,`RegexObject` 实例的 search 方法是扫描整个字符串的,在这种情况下,匹配开始的位置就可能不是零了。

#!python
>>> print p.match('::: message')
None
>>> m = p.search('::: message') ; print m
<re.MatchObject instance at 80c9650>
>>> m.group()
'message'
>>> m.span()
(4, 11)

In actual programs, the most common style is to store the `MatchObject` in a variable, and then check if it was None. This usually looks like:
在实际程序中,最常见的作法是将 `MatchObject` 保存在一个变量里,然后检查它是否为 None,通常如下所示:

#!python
p = re.compile( ... )
m = p.match( 'string goes here' )
if m:
    print 'Match found: ', m.group()
else:
    print 'No match'

Two `RegexObject` methods return all of the matches for a pattern. findall() returns a list of matching strings:
两个 `RegexObject` 方法返回所有匹配模式的子串。findall()返回一个匹配字符串列表:

#!python
>>> p = re.compile('\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']

findall() has to create the entire list before it can be returned as the result. In Python 2.2, the finditer() method is also available, returning a sequence of `MatchObject` instances as an iterator.
findall() 必须先创建整个列表才能把它作为结果返回。在 Python 2.2 中,还可以使用 finditer() 方法,它以迭代器的形式返回一系列 `MatchObject` 实例。

#!python
>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
>>> iterator
<callable-iterator object at 0x401833ac>
>>> for match in iterator:
...     print match.span()
...
(0, 2)
(22, 24)
(29, 31)

Module-Level Functions(模块级函数)

You don't have to produce a `RegexObject` and call its methods; the re module also provides top-level functions called match(), search(), sub(), and so forth. These functions take the same arguments as the corresponding `RegexObject` method, with the RE string added as the first argument, and still return either None or a `MatchObject` instance.
你不一定要产生一个 `RegexObject` 对象然后再调用它的方法;re 模块也提供了顶级函数调用如 match()、search()、sub() 等等。这些函数使用 RE 字符串作为第一个参数,而后面的参数则与相应 `RegexObject` 的方法参数相同,返回则要么是 None 要么就是一个 `MatchObject` 的实例。

#!python
>>> print re.match(r'From\s+', 'Fromage amk')
None
>>> re.match(r'From\s+', 'From amk Thu May 14 19:12:10 1998')
<re.MatchObject instance at 80c5978>

Under the hood, these functions simply produce a `RegexObject` for you and call the appropriate method on it. They also store the compiled object in a cache, so future calls using the same RE are faster.
在底层实现上,这些函数只是简单地创建一个 `RegexObject` 并在其上调用相应的方法。它们也会把编译后的对象保存在缓存里,因此将来调用用到相同 RE 时就会更快。

Should you use these module-level functions, or should you get the `RegexObject` and call its methods yourself? That choice depends on how frequently the RE will be used, and on your personal coding style. If a RE is being used at only one point in the code, then the module functions are probably more convenient. If a program contains a lot of regular expressions, or re-uses the same ones in several locations, then it might be worthwhile to collect all the definitions in one place, in a section of code that compiles all the REs ahead of time. To take an example from the standard library, here's an extract from xmllib.py:
你应该使用这些模块级函数,还是先得到一个 `RegexObject` 再调用它的方法呢?如何选择取决于 RE 的使用频率,以及你个人的编码风格。如果一个 RE 在代码中只使用一次,那么模块级函数也许更方便。如果程序包含很多正则表达式,或者在多处复用同一个 RE,那么把所有定义集中放在一处、在一段代码中提前编译所有的 REs 可能更值得。从标准库中看一个例子,这是从 xmllib.py 文件中提取出来的:

#!python
ref = re.compile( ... )
entityref = re.compile( ... )
charref = re.compile( ... )
starttagopen = re.compile( ... )

I generally prefer to work with the compiled object, even for one-time uses, but few people will be as much of a purist about this as I am.
我通常更喜欢使用编译后的对象,哪怕它只用一次,不过在这一点上像我这么较真的人并不多。

Compilation Flags(编译标志)

Compilation flags let you modify some aspects of how regular expressions work. Flags are available in the re module under two names, a long name such as IGNORECASE, and a short, one-letter form such as I. (If you're familiar with Perl's pattern modifiers, the one-letter forms use the same letters; the short form of re.VERBOSE is re.X, for example.) Multiple flags can be specified by bitwise OR-ing them; re.I | re.M sets both the I and M flags, for example.
编译标志让你可以修改正则表达式工作方式的某些方面。在 re 模块中,标志有两个名字可用,一个是全名,如 IGNORECASE,一个是单字母的缩写形式,如 I。(如果你熟悉 Perl 的模式修饰符,单字母形式使用同样的字母;例如 re.VERBOSE 的缩写形式是 re.X。)多个标志可以通过按位或(OR)运算来同时指定,例如 re.I | re.M 同时设置了 I 和 M 两个标志。

Here's a table of the available flags, followed by a more detailed explanation of each one.
这有个可用标志表,对每个标志后面都有详细的说明。

Flag(标志) Meaning(含义)
DOTALL, S Make . match any character, including newlines
使 . 匹配包括换行在内的所有字符
IGNORECASE, I Do case-insensitive matches
使匹配对大小写不敏感
LOCALE, L Do a locale-aware match
做本地化识别(locale-aware)匹配
MULTILINE, M Multi-line matching, affecting ^ and $
多行匹配,影响 ^ 和 $
VERBOSE, X Enable verbose REs, which can be organized more cleanly and understandably.
能够使用 REs 的 verbose 状态,使之被组织得更清晰易懂

I
IGNORECASE

Perform case-insensitive matching; character class and literal strings will match letters by ignoring case. For example, [A-Z] will match lowercase letters, too, and Spam will match "Spam", "spam", or "spAM". This lowercasing doesn't take the current locale into account; it will if you also set the LOCALE flag.
使匹配对大小写不敏感;字符类和字符串字面值在匹配字母时忽略大小写。举个例子,[A-Z] 也可以匹配小写字母,Spam 可以匹配 "Spam"、"spam" 或 "spAM"。这种小写匹配并不考虑当前的 locale 设置;如果同时设置了 LOCALE 标志才会考虑。
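
A brief interactive sketch of IGNORECASE; the pattern and strings are made up:

#!python
>>> import re
>>> p = re.compile('spam', re.IGNORECASE)
>>> p.match('spAM').group()
'spAM'
>>> re.compile('[a-z]+', re.I).findall('ABC def')   # [a-z] now matches uppercase too
['ABC', 'def']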

L
LOCALE

Make \w, \W, \b, and \B, dependent on the current locale.
影响 \w, \W, \b, 和 \B,这取决于当前的本地化设置。

Locales are a feature of the C library intended to help in writing programs that take account of language differences. For example, if you're processing French text, you'd want to be able to write \w+ to match words, but \w only matches the character class [A-Za-z]; it won't match "é" or "ç". If your system is configured properly and a French locale is selected, certain C functions will tell the program that "é" should also be considered a letter. Setting the LOCALE flag when compiling a regular expression will cause the resulting compiled object to use these C functions for \w; this is slower, but also enables \w+ to match French words as you'd expect.
locales 是 C 语言库中的一项功能,是用来为需要考虑不同语言的编程提供帮助的。举个例子,如果你正在处理法文文本,你想用 \w+ 来匹配文字,但 \w 只匹配字符类 [A-Za-z];它并不能匹配 "é" 或 "ç"。如果你的系统配置适当且本地化设置为法语,那么内部的 C 函数将告诉程序 "é" 也应该被认为是一个字母。当在编译正则表达式时使用 LOCALE 标志会得到用这些 C 函数来处理 \w 后的编译对象;这会更慢,但也会象你希望的那样可以用 \w+ 来匹配法文文本。

M
MULTILINE

(^ and $ haven't been explained yet; they'll be introduced in section 4.1.)
(^ 和 $ 还没有讲到;它们将在 4.1 节介绍。)

Usually ^ matches only at the beginning of the string, and $ matches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified, ^ matches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. Similarly, the $ metacharacter matches either at the end of the string or at the end of each line (immediately preceding each newline).
通常 ^ 只匹配字符串的开始,而 $ 则只匹配字符串的结尾,以及紧接在字符串末尾换行符(如果有的话)之前的位置。当指定了本标志后,^ 匹配字符串的开始以及字符串中每一行的行首,即紧跟在每个换行符之后的位置。同样,$ 元字符既匹配字符串的结尾,也匹配每一行的行尾(紧挨在每个换行符之前)。
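
A short sketch of the difference, using arbitrary sample text:

#!python
>>> import re
>>> re.compile(r'^\w+').findall('one\ntwo\nthree')               # ^ only at the start
['one']
>>> re.compile(r'^\w+', re.MULTILINE).findall('one\ntwo\nthree') # ^ at each line start
['one', 'two', 'three']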

S
DOTALL

Makes the "." special character match any character at all, including a newline; without this flag, "." will match anything except a newline.
使 "." 特殊字符完全匹配任何字符,包括换行;没有这个标志, "." 匹配除了换行外的任何字符。

X
VERBOSE

This flag allows you to write regular expressions that are more readable by granting you more flexibility in how you can format them. When this flag has been specified, whitespace within the RE string is ignored, except when the whitespace is in a character class or preceded by an unescaped backslash; this lets you organize and indent the RE more clearly. It also enables you to put comments within a RE that will be ignored by the engine; comments are marked by a "#" that's neither in a character class or preceded by an unescaped backslash.
该标志给予你更灵活的格式,从而让你把正则表达式写得更易读。当指定了该标志后,RE 字符串中的空白符会被忽略,除非该空白符在字符类中,或者在未转义的反斜杠之后;这可以让你更清晰地组织和缩进 RE。它还允许你在 RE 中写入会被引擎忽略的注释;注释以 "#" 标识,但该 "#" 不能位于字符类中,也不能紧跟在未转义的反斜杠之后。

For example, here's a RE that uses re.VERBOSE; see how much easier it is to read?
举个例子,这里有一个使用 re.VERBOSE 的 RE;看看读它轻松了多少?

#!python
charref = re.compile(r"""
&[#]		     # Start of a numeric entity reference
(
[0-9]+[^0-9]      # Decimal form
| 0[0-7]+[^0-7]   # Octal form
| x[0-9a-fA-F]+[^0-9a-fA-F] # Hexadecimal form
)
""", re.VERBOSE)

Without the verbose setting, the RE would look like this:
没有 verbose 设置, RE 会看起来象这样:

#!python
charref = re.compile("&#([0-9]+[^0-9]"
"|0[0-7]+[^0-7]"
"|x[0-9a-fA-F]+[^0-9a-fA-F])")

In the above example, Python's automatic concatenation of string literals has been used to break up the RE into smaller pieces, but it's still more difficult to understand than the version using re.VERBOSE.
在上面的例子里,Python 的字符串自动连接可以用来将 RE 分成更小的部分,但它比用 re.VERBOSE 标志时更难懂。

More Pattern Power(更多模式功能)

So far we've only covered a part of the features of regular expressions. In this section, we'll cover some new metacharacters, and how to use groups to retrieve portions of the text that was matched.
到目前为止,我们只展示了正则表达式的一部分功能。在本节,我们将展示一些新的元字符和如何使用组来检索被匹配的文本部分。


More Metacharacters(更多的元字符)

There are some metacharacters that we haven't covered yet. Most of them will be covered in this section.
还有一些我们还没展示的元字符,其中的大部分将在本节展示。

Some of the remaining metacharacters to be discussed are zero-width assertions. They don't cause the engine to advance through the string; instead, they consume no characters at all, and simply succeed or fail. For example, \b is an assertion that the current position is located at a word boundary; the position isn't changed by the \b at all. This means that zero-width assertions should never be repeated, because if they match once at a given location, they can obviously be matched an infinite number of times.
剩下来要讨论的一部分元字符是零宽界定符(zero-width assertions)。它们并不会使引擎在字符串中前进;相反,它们根本不消耗任何字符,只是简单地成功或失败。举个例子,\b 是一个断言当前位置位于单词边界的界定符,这个位置根本不会被 \b 改变。这意味着零宽界定符永远不应该被重复,因为如果它们在给定位置匹配一次,那么它们显然可以被匹配无数次。

|

Alternation, or the "or" operator. If A and B are regular expressions, A|B will match any string that matches either "A" or "B". | has very low precedence in order to make it work reasonably when you're alternating multi-character strings. Crow|Servo will match either "Crow" or "Servo", not "Cro", a "w" or an "S", and "ervo".
可选项,或者"or"操作符。如果 A 和 B 是正则表达式,A|B 将匹配任何匹配了 "A" 或 "B" 的字符串。| 的优先级非常低,这是为了在对多字符的字符串做选择时能合理地工作。Crow|Servo 将匹配 "Crow" 或 "Servo",而不是 "Cro"、一个 "w" 或一个 "S",再加上 "ervo"。

To match a literal "|", use \|, or enclose it inside a character class, as in [|].
要匹配字面的 "|",可以用 \|,或将其包含在字符类中,如 [|]。

^

Matches at the beginning of lines. Unless the MULTILINE flag has been set, this will only match at the beginning of the string. In MULTILINE mode, this also matches immediately after each newline within the string.
匹配行首。除非设置了 MULTILINE 标志,否则它只匹配字符串的开始。在 MULTILINE 模式里,它还会在字符串内每个换行符之后的位置匹配。

For example, if you wish to match the word "From" only at the beginning of a line, the RE to use is ^From.
例如,如果你只希望匹配在行首单词 "From",那么 RE 将用 ^From。

#!python
>>> print re.search('^From', 'From Here to Eternity')
<re.MatchObject instance at 80c1520>
>>> print re.search('^From', 'Reciting From Memory')
None

$

Matches at the end of a line, which is defined as either the end of the string, or any location followed by a newline character.
匹配行尾,行尾被定义为要么是字符串尾,要么是一个换行字符后面的任何位置。

#!python
>>> print re.search('}$', '{block}')
<re.MatchObject instance at 80adfa8>
>>> print re.search('}$', '{block} ')
None
>>> print re.search('}$', '{block}\n')
<re.MatchObject instance at 80adfa8>

To match a literal "$", use \$ or enclose it inside a character class, as in [$].
要匹配字面的 "$",使用 \$,或将其包含在字符类中,如 [$]。

\A

Matches only at the start of the string. When not in MULTILINE mode, \A and ^ are effectively the same. In MULTILINE mode, however, they're different; \A still matches only at the beginning of the string, but ^ may match at any location inside the string that follows a newline character.
只匹配字符串首。当不在 MULTILINE 模式时,\A 和 ^ 实际上是一样的。然而在 MULTILINE 模式里它们是不同的;\A 仍然只匹配字符串首,而 ^ 还可以匹配字符串内任何换行符之后的位置。

\Z

Matches only at the end of the string.
只匹配字符串尾。
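
A small sketch contrasting \A and \Z with ^ in MULTILINE mode (sample strings are arbitrary):

#!python
>>> import re
>>> re.compile(r'^two', re.MULTILINE).search('one\ntwo').group()   # ^ matches after the newline
'two'
>>> print re.compile(r'\Atwo', re.MULTILINE).search('one\ntwo')    # \A ignores MULTILINE
None
>>> re.search(r'ing\Z', 'spring').group()                          # \Z anchors to the string end
'ing'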

\b

Word boundary. This is a zero-width assertion that matches only at the beginning or end of a word. A word is defined as a sequence of alphanumeric characters, so the end of a word is indicated by whitespace or a non-alphanumeric character.
单词边界。这是个零宽界定符(zero-width assertions)只用以匹配单词的词首和词尾。单词被定义为一个字母数字序列,因此词尾就是用空白符或非字母数字符来标示的。

The following example matches "class" only when it's a complete word; it won't match when it's contained inside another word.
下面的例子只匹配 "class" 整个单词;而当它被包含在其他单词中时不匹配。

#!python
>>> p = re.compile(r'\bclass\b')
>>> print p.search('no class at all')
<re.MatchObject instance at 80c8f28>
>>> print p.search('the declassified algorithm')
None
>>> print p.search('one subclass is')
None

There are two subtleties you should remember when using this special sequence. First, this is the worst collision between Python's string literals and regular expression sequences. In Python's string literals, "\b" is the backspace character, ASCII value 8. If you're not using raw strings, then Python will convert the "\b" to a backspace, and your RE won't match as you expect it to. The following example looks the same as our previous RE, but omits the "r" in front of the RE string.
当用这个特殊序列时你应该记住这里有两个微妙之处。第一个是 Python 字符串和正则表达式之间最糟的冲突。在 Python 字符串里,"\b" 是反斜杠字符,ASCII值是8。如果你没有使用 raw 字符串时,那么 Python 将会把 "\b" 转换成一个回退符,你的 RE 将无法象你希望的那样匹配它了。下面的例子看起来和我们前面的 RE 一样,但在 RE 字符串前少了一个 "r" 。

#!python
>>> p = re.compile('\bclass\b')
>>> print p.search('no class at all')
None
>>> print p.search('\b' + 'class' + '\b')
<re.MatchObject instance at 80c3ee0>

Second, inside a character class, where there's no use for this assertion, \b represents the backspace character, for compatibility with Python's string literals.
第二个在字符类中,这个限定符(assertion)不起作用,\b 表示回退符,以便与 Python 字符串兼容。

\B

Another zero-width assertion, this is the opposite of \b, only matching when the current position is not at a word boundary.
另一个零宽界定符(zero-width assertions),它正好同 \b 相反,只在当前位置不在单词边界时匹配。
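
For example, this hypothetical pattern only finds "class" when it is embedded inside a larger word:

#!python
>>> import re
>>> p = re.compile(r'\Bclass\B')
>>> p.search('the declassified algorithm').group()
'class'
>>> print p.search('no class at all')       # a standalone word has boundaries, so \B fails
None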

Grouping(分组)

Frequently you need to obtain more information than just whether the RE matched or not. Regular expressions are often used to dissect strings by writing a RE divided into several subgroups which match different components of interest. For example, an RFC-822 header line is divided into a header name and a value, separated by a ":". This can be handled by writing a regular expression which matches an entire header line, and has one group which matches the header name, and another group which matches the header's value.
你经常需要得到比 RE 是否匹配还要多的信息。正则表达式常常用来分析字符串,编写一个 RE 匹配感兴趣的部分并将其分成几个小组。举个例子,一个 RFC-822 的头部用 ":" 隔成一个头部名和一个值,这就可以通过编写一个正则表达式匹配整个头部,用一组匹配头部名,另一组匹配头部值的方式来处理。

Groups are marked by the "(", ")" metacharacters. "(" and ")" have much the same meaning as they do in mathematical expressions; they group together the expressions contained inside them. For example, you can repeat the contents of a group with a repeating qualifier, such as *, +, ?, or {m,n}. For example, (ab)* will match zero or more repetitions of "ab".
组是通过 "(" 和 ")" 元字符来标识的。"(" 和 ")" 的含义与它们在数学表达式中的含义大致相同;它们把包含在其中的表达式组成一组。举个例子,你可以用重复限定符,象 *、+、? 和 {m,n},来重复组里的内容,比如说 (ab)* 将匹配零个或更多个重复的 "ab"。

#!python
>>> p = re.compile('(ab)*')
>>> print p.match('ababababab').span()
(0, 10)

Groups indicated with "(", ")" also capture the starting and ending index of the text that they match; this can be retrieved by passing an argument to group(), start(), end(), and span(). Groups are numbered starting with 0. Group 0 is always present; it's the whole RE, so `MatchObject` methods all have group 0 as their default argument. Later we'll see how to express groups that don't capture the span of text that they match.
用 "(" 和 ")" 标识的组同时也会捕获它们所匹配文本的起始和结束索引;这可以通过向 group()、start()、end() 和 span() 传一个参数来检索。组是从 0 开始编号的。组 0 总是存在;它就是整个 RE,所以 `MatchObject` 的方法都把组 0 作为它们的缺省参数。稍后我们将看到怎样表示不捕获其所匹配文本区间的组。

#!python
>>> p = re.compile('(a)b')
>>> m = p.match('ab')
>>> m.group()
'ab'
>>> m.group(0)
'ab'

Subgroups are numbered from left to right, from 1 upward. Groups can be nested; to determine the number, just count the opening parenthesis characters, going from left to right.
子组是从左向右编号的,从 1 开始。组可以嵌套;要确定编号,只需从左到右数左括号就可以了。

#!python
>>> p = re.compile('(a(b)c)d')
>>> m = p.match('abcd')
>>> m.group(0)
'abcd'
>>> m.group(1)
'abc'
>>> m.group(2)
'b'

group() can be passed multiple group numbers at a time, in which case it will return a tuple containing the corresponding values for those groups.
group() 可以一次输入多个组号,在这种情况下它将返回一个包含那些组所对应值的元组。

#!python
>>> m.group(2,1,2)
('b', 'abc', 'b')

The groups() method returns a tuple containing the strings for all the subgroups, from 1 up to however many there are.
groups() 方法返回一个包含所有子组字符串的元组,从第 1 组直到最后一组。

#!python
>>> m.groups()
('abc', 'b')

Backreferences in a pattern allow you to specify that the contents of an earlier capturing group must also be found at the current location in the string. For example, \1 will succeed if the exact contents of group 1 can be found at the current position, and fails otherwise. Remember that Python's string literals also use a backslash followed by numbers to allow including arbitrary characters in a string, so be sure to use a raw string when incorporating backreferences in a RE.
模式中的逆向引用允许你指定:先前捕获组的内容必须也能在字符串的当前位置找到。举个例子,如果组 1 的内容能够在当前位置找到,\1 就匹配成功,否则失败。记住 Python 字符串字面值也用反斜杠加数字来表示字符串中的任意字符,所以在 RE 中使用逆向引用时一定要用 raw 字符串。

For example, the following RE detects doubled words in a string.
例如,下面的 RE 在一个字符串中找到成双的词。

#!python
>>> p = re.compile(r'(\b\w+)\s+\1')
>>> p.search('Paris in the the spring').group()
'the the'

Backreferences like this aren't often useful for just searching through a string -- there are few text formats which repeat data in this way -- but you'll soon find out that they're very useful when performing string substitutions.
象这样只是搜索一个字符串的逆向引用并不常见 -- 用这种方式重复数据的文本格式并不多见 -- 但你不久就可以发现它们用在字符串替换上非常有用。

Non-capturing and Named Groups(无捕获组和命名组)

Elaborate REs may use many groups, both to capture substrings of interest, and to group and structure the RE itself. In complex REs, it becomes difficult to keep track of the group numbers. There are two features which help with this problem. Both of them use a common syntax for regular expression extensions, so we'll look at that first.
精心设计的 REs 也许会用很多组,既用来捕获感兴趣的子串,又用来对 RE 本身进行分组和结构化。在复杂的 REs 里,追踪组号会变得困难。有两个功能可以帮助解决这个问题。它们都使用同一种正则表达式扩展语法,因此我们先来看看这种语法。

Perl 5 added several additional features to standard regular expressions, and the Python re module supports most of them. It would have been difficult to choose new single-keystroke metacharacters or new special sequences beginning with "\" to represent the new features without making Perl's regular expressions confusingly different from standard REs. If you chose "&" as a new metacharacter, for example, old expressions would be assuming that "&" was a regular character and wouldn't have escaped it by writing \& or [&].
Perl 5 对标准正则表达式增加了几个附加功能,Python 的 re 模块也支持其中的大部分。要选择一个新的单键元字符或一个以 "\" 开始的特殊序列来表示新功能,而又不使 Perl 正则表达式与标准正则表达式混乱得难以区分,是有难度的。举个例子,如果选择 "&" 做为新的元字符,那么旧的表达式会假定 "&" 是一个普通字符,因而不会把它写成 \& 或 [&] 来转义。

The solution chosen by the Perl developers was to use (?...) as the extension syntax. "?" immediately after a parenthesis was a syntax error because the "?" would have nothing to repeat, so this didn't introduce any compatibility problems. The characters immediately after the "?" indicate what extension is being used, so (?=foo) is one thing (a positive lookahead assertion) and (?:foo) is something else (a non-capturing group containing the subexpression foo).
Perl 开发人员的解决方法是使用 (?...) 作为扩展语法。紧跟在括号后面的 "?" 原本是一个语法错误,因为这个 "?" 没有任何可以重复的东西,因此它不会带来任何兼容性问题。紧随 "?" 之后的字符指出所使用的是哪种扩展,因此 (?=foo) 是一种功能(前向肯定界定符),而 (?:foo) 则是另一种功能(包含子表达式 foo 的无捕获组)。

Python adds an extension syntax to Perl's extension syntax. If the first character after the question mark is a "P", you know that it's an extension that's specific to Python. Currently there are two such extensions: (?P<name>...) defines a named group, and (?P=name) is a backreference to a named group. If future versions of Perl 5 add similar features using a different syntax, the re module will be changed to support the new syntax, while preserving the Python-specific syntax for compatibility's sake.
Python 在 Perl 的扩展语法之上又增加了自己的扩展语法。如果问号后的第一个字符是 "P",你就可以知道这是 Python 专用的扩展。目前有两个这样的扩展:(?P<name>...) 定义一个命名组,(?P=name) 则是对命名组的逆向引用。如果 Perl 5 的未来版本用不同的语法增加了相同的功能,re 模块也会修改以支持新语法,同时为了兼容性而保留这些 Python 专用语法。

Now that we've looked at the general extension syntax, we can return to the features that simplify working with groups in complex REs. Since groups are numbered from left to right and a complex expression may use many groups, it can become difficult to keep track of the correct numbering, and modifying such a complex RE is annoying. Insert a new group near the beginning, and you change the numbers of everything that follows it.
既然我们已经了解了通用的扩展语法,现在可以回到那些能简化复杂 REs 中组操作的特性上来。因为组是从左到右编号的,而一个复杂的表达式也许会使用许多组,所以追踪正确的组号会变得困难,而修改如此复杂的 RE 也十分烦人。只要在靠近开头的地方插入一个新组,它后面所有组的编号都会改变。

First, sometimes you'll want to use a group to collect a part of a regular expression, but aren't interested in retrieving the group's contents. You can make this fact explicit by using a non-capturing group: (?:...), where you can put any other regular expression inside the parentheses.
首先,有时你想用一个组去收集正则表达式的一部分,但又对检索该组的内容不感兴趣。你可以用一个无捕获组 (?:...) 来明确表达这一点,括号中可以放任何其他的正则表达式。

#!python
>>> m = re.match("([abc])+", "abc")
>>> m.groups()
('c',)
>>> m = re.match("(?:[abc])+", "abc")
>>> m.groups()
()

Except for the fact that you can't retrieve the contents of what the group matched, a non-capturing group behaves exactly the same as a capturing group; you can put anything inside it, repeat it with a repetition metacharacter such as "*", and nest it within other groups (capturing or non-capturing). (?:...) is particularly useful when modifying an existing group, since you can add new groups without changing how all the other groups are numbered. It should be mentioned that there's no performance difference in searching between capturing and non-capturing groups; neither form is any faster than the other.
除了无法检索该组所匹配的内容之外,无捕获组与捕获组的行为完全一样;你可以在其中放置任何内容,可以用重复元字符如 "*" 来重复它,也可以把它嵌套在其他组(捕获组或无捕获组)里。(?:...) 对于修改已有的模式尤其有用,因为你可以在不改变其他所有组编号的情况下添加新组。还要说明的是,捕获组和无捕获组在搜索效率上没有区别,哪一个都不比另一个更快。

The second, and more significant, feature is named groups; instead of referring to them by numbers, groups can be referenced by a name.
其次,更重要和强大的是命名组;与用数字指定组不同的是,它可以用名字来指定。

The syntax for a named group is one of the Python-specific extensions: (?P<name>...). name is, obviously, the name of the group. Except for associating a name with a group, named groups also behave identically to capturing groups. The `MatchObject` methods that deal with capturing groups all accept either integers, to refer to groups by number, or a string containing the group name. Named groups are still given numbers, so you can retrieve information about a group in two ways:
命名组的语法是 Python 专用扩展之一:(?P<name>...)。name 显然就是组的名字。除了有一个名字之外,命名组的行为与捕获组完全相同。`MatchObject` 中处理捕获组的方法既接受表示组号的整数,也接受包含组名的字符串。命名组同样也有编号,所以你可以通过两种方式来得到一个组的信息:

#!python
>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
>>> m.group(1)
'Lots'

Named groups are handy because they let you use easily-remembered names, instead of having to remember numbers. Here's an example RE from the imaplib module:
命名组是便于使用的,因为它可以让你使用容易记住的名字来代替不得不记住的数字。这里有一个来自 imaplib 模块的 RE 示例:

#!python
InternalDate = re.compile(r'INTERNALDATE "'
r'(?P<day>[ 123][0-9])-(?P<mon>[A-Z][a-z][a-z])-'
	r'(?P<year>[0-9][0-9][0-9][0-9])'
r' (?P<hour>[0-9][0-9]):(?P<min>[0-9][0-9]):(?P<sec>[0-9][0-9])'
r' (?P<zonen>[-+])(?P<zoneh>[0-9][0-9])(?P<zonem>[0-9][0-9])'
r'"')

It's obviously much easier to retrieve m.group('zonem'), instead of having to remember to retrieve group 9.
很明显,得到 m.group('zonem') 要比记住得到组 9 要容易得多。

Since the syntax for backreferences, in an expression like (...)\1, refers to the number of the group there's naturally a variant that uses the group name instead of the number. This is also a Python extension: (?P=name) indicates that the contents of the group called name should again be found at the current point. The regular expression for finding doubled words, (\b\w+)\s+\1 can also be written as (?P<word>\b\w+)\s+(?P=word):
既然逆向引用的语法(比如 (...)\1 这样的表达式)指的是组号,自然也就有使用组名而不是组号的变体。这也是一个 Python 扩展:(?P=name) 表示名为 name 的组的内容应当在当前位置再次出现。于是,用于查找重复单词的正则表达式 (\b\w+)\s+\1 也可以写成 (?P<word>\b\w+)\s+(?P=word):

#!python
>>> p = re.compile(r'(?P<word>\b\w+)\s+(?P=word)')
>>> p.search('Paris in the the spring').group()
'the the'

Lookahead Assertions(前向界定符)

Another zero-width assertion is the lookahead assertion. Lookahead assertions are available in both positive and negative form, and look like this:
另一个零宽界定符(zero-width assertion)是前向界定符。前向界定符有肯定和否定两种形式,如下所示:

(?=...) Positive lookahead assertion. This succeeds if the contained regular expression, represented here by ..., successfully matches at the current location, and fails otherwise. But, once the contained expression has been tried, the matching engine doesn't advance at all; the rest of the pattern is tried right where the assertion started.
前向肯定界定符。如果所含的正则表达式(这里以 ... 表示)在当前位置能够匹配,则成功,否则失败。但是,一旦所含表达式尝试过后,匹配引擎并不会前进;模式的其余部分会在该界定符开始的位置继续尝试。

(?!...) Negative lookahead assertion. This is the opposite of the positive assertion; it succeeds if the contained expression doesn't match at the current position in the string.
前向否定界定符。与肯定界定符相反;当所含表达式不能在字符串当前位置匹配时成功

An example will help make this concrete by demonstrating a case where a lookahead is useful. Consider a simple pattern to match a filename and split it apart into a base name and an extension, separated by a ".". For example, in "news.rc", "news" is the base name, and "rc" is the filename's extension.
下面通过一个演示前向界定符在什么情况下有用的例子,来把它讲得更具体。考虑一个简单的模式,用于匹配一个文件名,并将其按 "." 分成基本名和扩展名两部分。例如在 "news.rc" 中,"news" 是基本名,"rc" 是文件的扩展名。

The pattern to match this is quite simple:
匹配模式非常简单:

.*[.].*$

Notice that the "." needs to be treated specially because it's a metacharacter; I've put it inside a character class. Also notice the trailing $; this is added to ensure that all the rest of the string must be included in the extension. This regular expression matches "foo.bar" and "autoexec.bat" and "sendmail.cf" and "printers.conf".
注意 "." 需要特殊对待,因为它是一个元字符;我把它放在一个字符类中。另外注意后面的 $; 添加这个是为了确保字符串所有的剩余部分必须被包含在扩展名中。这个正则表达式匹配 "foo.bar"、"autoexec.bat"、 "sendmail.cf" 和 "printers.conf"。

Now, consider complicating the problem a bit; what if you want to match filenames where the extension is not "bat"? Some incorrect attempts:
现在,考虑把问题变得复杂点;如果你想匹配的扩展名不是 "bat" 的文件名?一些不正确的尝试:

.*[.][^b].*$

The first attempt above tries to exclude "bat" by requiring that the first character of the extension is not a "b". This is wrong, because the pattern also doesn't match "foo.bar".
上面的第一次去除 "bat" 的尝试是要求扩展名的第一个字符不是 "b"。这是错误的,因为该模式也不能匹配 "foo.bar"。

.*[.]([^b]..|.[^a].|..[^t])$

The expression gets messier when you try to patch up the first solution by requiring one of the following cases to match: the first character of the extension isn't "b"; the second character isn't "a"; or the third character isn't "t". This accepts "foo.bar" and rejects "autoexec.bat", but it requires a three-letter extension and won't accept a filename with a two-letter extension such as "sendmail.cf". We'll complicate the pattern again in an effort to fix it.
当你试着修补第一个解决方法而要求匹配下列情况之一时表达式更乱了:扩展名的第一个字符不是 "b"; 第二个字符不是 "a";或第三个字符不是 "t"。这样可以接受 "foo.bar" 而拒绝 "autoexec.bat",但这要求只能是三个字符的扩展名而不接受两个字符的扩展名如 "sendmail.cf"。我们将在努力修补它时再次把该模式变得复杂。

.*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$

In the third attempt, the second and third letters are all made optional in order to allow matching extensions shorter than three characters, such as "sendmail.cf".
在第三次尝试中,第二和第三个字母都变成可选,为的是允许匹配比三个字符更短的扩展名,如 "sendmail.cf"。

The pattern's getting really complicated now, which makes it hard to read and understand. Worse, if the problem changes and you want to exclude both "bat" and "exe" as extensions, the pattern would get even more complicated and confusing.
该模式现在变得非常复杂,这使它很难读懂。更糟的是,如果问题变化了,你想扩展名不是 "bat" 和 "exe",该模式甚至会变得更复杂和混乱。

A negative lookahead cuts through all this:
一个前向否定界定符把所有这些难题一扫而空:

.*[.](?!bat$).*$

The lookahead means: if the expression bat doesn't match at this point, try the rest of the pattern; if bat$ does match, the whole pattern will fail. The trailing $ is required to ensure that something like "sample.batch", where the extension only starts with "bat", will be allowed.
这个前向界定符的意思是:如果表达式 bat 在这里不能匹配,就尝试模式的其余部分;如果 bat$ 匹配成功,整个模式就会失败。结尾的 $ 是必需的,它确保像 "sample.batch" 这样扩展名只是以 "bat" 开头的文件名能被允许。

Excluding another filename extension is now easy; simply add it as an alternative inside the assertion. The following pattern excludes filenames that end in either "bat" or "exe":
将另一个文件扩展名排除在外现在也容易;简单地将其做为可选项放在界定符中。下面的这个模式将以 "bat" 或 "exe" 结尾的文件名排除在外。

.*[.](?!bat$|exe$).*$
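
A quick interactive check of this final pattern, with made-up filenames:

#!python
>>> import re
>>> p = re.compile(r'.*[.](?!bat$|exe$).*$')
>>> p.match('sendmail.cf').group()
'sendmail.cf'
>>> p.match('sample.batch').group()         # only a bare "bat" extension is rejected
'sample.batch'
>>> print p.match('autoexec.bat')
None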

Modifying Strings(修改字符串)

Up to this point, we've simply performed searches against a static string. Regular expressions are also commonly used to modify a string in various ways, using the following `RegexObject` methods:
到目前为止,我们简单地搜索了一个静态字符串。正则表达式通常也用不同的方式,通过下面的 `RegexObject` 方法,来修改字符串。

Method/Attribute(方法/属性) Purpose(作用)
split() Split the string into a list, splitting it wherever the RE matches
将字符串在 RE 匹配的地方分片并生成一个列表,
sub() Find all substrings where the RE matches, and replace them with a different string
找到 RE 匹配的所有子串,并将其用一个不同的字符串替换
subn() Does the same thing as sub(), but returns the new string and the number of replacements
与 sub() 相同,但返回新的字符串和替换次数

Splitting Strings(将字符串分片)

The split() method of a `RegexObject` splits a string apart wherever the RE matches, returning a list of the pieces. It's similar to the split() method of strings but provides much more generality in the delimiters that you can split by; split() only supports splitting by whitespace or by a fixed string. As you'd expect, there's a module-level re.split() function, too.
`RegexObject` 的 split() 方法在 RE 匹配的地方将字符串分片,并返回一个由分片组成的列表。它同字符串的 split() 方法相似,但在可用的分隔符上更为通用;字符串的 split() 只支持按空白符或固定字符串切分。就象你预料的那样,也有一个模块级的 re.split() 函数。

split(string [, maxsplit = 0])

Split string by the matches of the regular expression. If capturing parentheses are used in the RE, then their contents will also be returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits are performed.
通过正则表达式将字符串分片。如果捕获括号在 RE 中使用,那么它们的内容也会作为结果列表的一部分返回。如果 maxsplit 非零,那么最多只能分出 maxsplit 个分片。

You can limit the number of splits made, by passing a value for maxsplit. When maxsplit is nonzero, at most maxsplit splits will be made, and the remainder of the string is returned as the final element of the list. In the following example, the delimiter is any sequence of non-alphanumeric characters.
你可以通过设置 maxsplit 值来限制分片数。当 maxsplit 非零时,最多只能有 maxsplit 个分片,字符串的其余部分被做为列表的最后部分返回。在下面的例子中,定界符可以是非数字字母字符的任意序列。

#!python
>>> p = re.compile(r'\W+')
>>> p.split('This is a test, short and sweet, of split().')
['This', 'is', 'a', 'test', 'short', 'and', 'sweet', 'of', 'split', '']
>>> p.split('This is a test, short and sweet, of split().', 3)
['This', 'is', 'a', 'test, short and sweet, of split().']

Sometimes you're not only interested in what the text between delimiters is, but also need to know what the delimiter was. If capturing parentheses are used in the RE, then their values are also returned as part of the list. Compare the following calls:
有时,你不仅对定界符之间的文本感兴趣,也需要知道定界符是什么。如果捕获括号在 RE 中使用,那么它们的值也会当作列表的一部分返回。比较下面的调用:

#!python
>>> p = re.compile(r'\W+')
>>> p2 = re.compile(r'(\W+)')
>>> p.split('This... is a test.')
['This', 'is', 'a', 'test', '']
>>> p2.split('This... is a test.')
['This', '... ', 'is', ' ', 'a', ' ', 'test', '.', '']

The module-level function re.split() adds the RE to be used as the first argument, but is otherwise the same.
模块级函数 re.split() 将 RE 作为第一个参数,其他一样。

#!python
>>> re.split('[\W]+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split('([\W]+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split('[\W]+', 'Words, words, words.', 1)
['Words', 'words, words.']

Search and Replace(搜索和替换)

Another common task is to find all the matches for a pattern, and replace them with a different string. The sub() method takes a replacement value, which can be either a string or a function, and the string to be processed.
另一个常见任务是找到模式的所有匹配,并用不同的字符串替换它们。sub() 方法接受一个替换值(可以是字符串或函数)和一个要被处理的字符串。

sub(replacement, string[, count = 0])

Returns the string obtained by replacing the leftmost non-overlapping occurrences of the RE in string by the replacement replacement. If the pattern isn't found, string is returned unchanged.
返回的是将 string 中 RE 最左边且互不重叠的各个匹配用 replacement 替换后得到的字符串。如果没有找到该模式,string 将原样返回。

The optional argument count is the maximum number of pattern occurrences to be replaced; count must be a non-negative integer. The default value of 0 means to replace all occurrences.
可选参数 count 是模式匹配后替换的最大次数;count 必须是非负整数。缺省值是 0 表示替换所有的匹配。

Here's a simple example of using the sub() method. It replaces colour names with the word "colour":
这里有个使用 sub() 方法的简单例子。它用单词 "colour" 替换颜色名。

#!python
>>> p = re.compile( '(blue|white|red)')
>>> p.sub( 'colour', 'blue socks and red shoes')
'colour socks and colour shoes'
>>> p.sub( 'colour', 'blue socks and red shoes', count=1)
'colour socks and red shoes'

The subn() method does the same work, but returns a 2-tuple containing the new string value and the number of replacements that were performed:
subn() 方法作用一样,但返回的是包含新字符串和替换执行次数的两元组。

#!python
>>> p = re.compile( '(blue|white|red)')
>>> p.subn( 'colour', 'blue socks and red shoes')
('colour socks and colour shoes', 2)
>>> p.subn( 'colour', 'no colours at all')
('no colours at all', 0)

Empty matches are replaced only when they're not adjacent to a previous match.
空匹配只有在它们没有紧挨着前一个匹配时才会被替换掉。

#!python
>>> p = re.compile('x*')
>>> p.sub('-', 'abxd')
'-a-b-d-'

If replacement is a string, any backslash escapes in it are processed. That is, "\n" is converted to a single newline character, "\r" is converted to a carriage return, and so forth. Unknown escapes such as "\j" are left alone. Backreferences, such as "\6", are replaced with the substring matched by the corresponding group in the RE. This lets you incorporate portions of the original text in the resulting replacement string.
如果替换值是一个字符串,其中的任何反斜杠转义都会被处理。也就是说,"\n" 会被转换成一个换行符,"\r" 转换成回车,等等。未知的转义序列如 "\j" 则保持原样。逆向引用,如 "\6",会被替换成 RE 中相应组所匹配的子串。这使你可以在替换后的字符串中插入原始文本的一部分。

This example matches the word "section" followed by a string enclosed in "{", "}", and changes "section" to "subsection":
这个例子匹配被 "{" 和 "}" 括起来的单词 "section",并将 "section" 替换成 "subsection"。

#!python
>>> p = re.compile('section{ ( [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First} section{second}')
'subsection{First} subsection{second}'

There's also a syntax for referring to named groups as defined by the (?P<name>...) syntax. "\g<name>" will use the substring matched by the group named "name", and "\g<number>" uses the corresponding group number. "\g<2>" is therefore equivalent to "\2", but isn't ambiguous in a replacement string such as "\g<2>0". ("\20" would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character "0".) The following substitutions are all equivalent, but use all three variations of the replacement string.
还有一种语法可以引用用 (?P<name>...) 语法定义的命名组。"\g<name>" 会使用名为 "name" 的组所匹配的子串,"\g<number>" 则使用相应编号的组。因此 "\g<2>" 与 "\2" 等价,但在诸如 "\g<2>0" 这样的替换字符串中不会产生歧义。("\20" 会被解释成对组 20 的引用,而不是对组 2 后面跟着字面字符 "0" 的引用。)下面几个替换是等价的,只是分别使用了替换字符串的三种写法。

#!python
>>> p = re.compile('section{ (?P<name> [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<1>}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<name>}','section{First}')
'subsection{First}'

replacement can also be a function, which gives you even more control. If replacement is a function, the function is called for every non-overlapping occurrence of pattern. On each call, the function is passed a `MatchObject` argument for the match and can use this information to compute the desired replacement string and return it.
replacement 也可以是一个函数,这能给你更多的控制。如果 replacement 是函数,那么 pattern 每出现一次互不重叠的匹配,该函数就会被调用一次。每次调用时,函数会收到一个表示该匹配的 `MatchObject` 参数,并可以利用这些信息计算出想要的替换字符串并将其返回。

In the following example, the replacement function translates decimals into hexadecimal:
在下面的例子里,替换函数将十进制翻译成十六进制:

#!python
>>> def hexrepl( match ):
...     "Return the hex string for a decimal number"
...     value = int( match.group() )
...     return hex(value)
...
>>> p = re.compile(r'\d+')
>>> p.sub(hexrepl, 'Call 65490 for printing, 49152 for user code.')
'Call 0xffd2 for printing, 0xc000 for user code.'

When using the module-level re.sub() function, the pattern is passed as the first argument. The pattern may be a string or a `RegexObject`; if you need to specify regular expression flags, you must either use a `RegexObject` as the first parameter, or use embedded modifiers in the pattern, e.g. sub("(?i)b+", "x", "bbbb BBBB") returns 'x x'.
当使用模块级的 re.sub() 函数时,模式作为第一个参数传入。模式可以是一个字符串,也可以是一个 `RegexObject`;如果你需要指定正则表达式标志,就必须要么用 `RegexObject` 作为第一个参数,要么在模式中使用内嵌修饰符,例如 sub("(?i)b+", "x", "bbbb BBBB") 返回 'x x'。
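
As a small sketch (not part of the original text), these are the two ways of getting case-insensitive behaviour with the module-level function:
作为一个简单的示意(并非原文内容),用模块级函数获得忽略大小写行为的两种写法如下:

#!python
>>> re.sub(re.compile('b+', re.IGNORECASE), 'x', 'bbbb BBBB')
'x x'
>>> re.sub('(?i)b+', 'x', 'bbbb BBBB')
'x x'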

Common Problems(常见问题)

Regular expressions are a powerful tool for some applications, but in some ways their behaviour isn't intuitive and at times they don't behave the way you may expect them to. This section will point out some of the most common pitfalls.
正则表达式对一些应用程序来说是一个强大的工具,但在有些时候它并不直观而且有时它们不按你期望的运行。本节将指出一些最容易犯的常见错误。

Use String Methods(使用字符串方法)

Sometimes using the re module is a mistake. If you're matching a fixed string, or a single character class, and you're not using any re features such as the IGNORECASE flag, then the full power of regular expressions may not be required. Strings have several methods for performing operations with fixed strings and they're usually much faster, because the implementation is a single small C loop that's been optimized for the purpose, instead of the large, more generalized regular expression engine.
有时使用 re 模块是个错误。如果你要匹配的是一个固定字符串或单个字符类,并且没有用到 IGNORECASE 标志之类的 re 功能,那么也许并不需要正则表达式的全部威力。字符串对象有一些针对固定字符串的操作方法,它们通常快得多,因为其实现是为此专门优化的一小段 C 循环,而不是庞大、更通用的正则表达式引擎。
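
For instance (a sketch that is not in the original text), a fixed-substring test needs no RE at all:
例如(示意代码,并非原文内容),查找一个固定子串根本用不着 RE:

#!python
>>> 'swordfish'.find('word')
1
>>> re.search('word', 'swordfish').span()
(1, 5)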

One example might be replacing a single fixed string with another one; for example, you might replace "word" with "deed". re.sub() seems like the function to use for this, but consider the replace() method. Note that replace() will also replace "word" inside words, turning "swordfish" into "sdeedfish", but the naïve RE word would have done that, too. (To avoid performing the substitution on parts of words, the pattern would have to be \bword\b, in order to require that "word" have a word boundary on either side. This takes the job beyond replace's abilities.)
举个例子:把一个固定字符串替换成另一个,比如把 "word" 替换成 "deed"。re.sub() 看起来正适合做这件事,但请先考虑一下 replace() 方法。注意 replace() 也会替换单词内部的 "word",把 "swordfish" 变成 "sdeedfish",不过朴素的 RE word 同样会这么做。(为了避免对单词的一部分进行替换,模式必须写成 \bword\b,以要求 "word" 两边都有单词边界。这件工作就超出了 replace() 的能力范围。)
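
A quick sketch of that difference (not part of the original text): the string method touches parts of words, while the word-boundary RE does not:
下面是对这一差别的简单示意(并非原文内容):字符串方法会替换单词的一部分,而带单词边界的 RE 不会:

#!python
>>> 'swordfish and word'.replace('word', 'deed')
'sdeedfish and deed'
>>> re.sub(r'\bword\b', 'deed', 'swordfish and word')
'swordfish and deed'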

Another common task is deleting every occurrence of a single character from a string or replacing it with another single character. You might do this with something like re.sub('\n', ' ', S), but translate() is capable of doing both tasks and will be faster than any regular expression operation can be.
另一个常见任务是把某个字符在字符串中的每次出现都删掉,或者用另一个字符来替换它。你也许会用类似 re.sub('\n', ' ', S) 的方式来实现,但 translate() 能够完成这两种任务,而且比任何正则表达式操作都要快。
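
As a sketch in the Python 2 idiom this document uses (not part of the original text, assuming the standard library's string.maketrans() helper), translate() can both replace and delete a character:
下面是按本文所用的 Python 2 习惯写的示意代码(并非原文内容,假定使用标准库的 string.maketrans()),translate() 既能替换字符也能删除字符:

#!python
>>> import string
>>> table = string.maketrans('\n', ' ')        # map newline to space
>>> 'line one\nline two\n'.translate(table)
'line one line two '
>>> 'line one\nline two\n'.translate(table, '\n')   # the second argument deletes those characters instead
'line oneline two'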

In short, before turning to the re module, consider whether your problem can be solved with a faster and simpler string method.
总之,在使用 re 模块之前,先考虑一下你的问题是否可以用更快、更简单的字符串方法来解决。

match() versus search()(match() vs search())

The match() function only checks if the RE matches at the beginning of the string while search() will scan forward through the string for a match. It's important to keep this distinction in mind. Remember, match() will only report a successful match which will start at 0; if the match wouldn't start at zero, match() will not report it.
match() 函数只检查 RE 是否在字符串的开始处匹配,而 search() 则会在字符串中向前扫描查找匹配。记住这一区别很重要。match() 只会报告从位置 0 开始的成功匹配;如果匹配不是从 0 开始的,match() 就不会报告它。

#!python
>>> print re.match('super', 'superstition').span()
(0, 5)
>>> print re.match('super', 'insuperable')
None

On the other hand, search() will scan forward through the string, reporting the first match it finds.
另一方面,search() 将扫描整个字符串,并报告它找到的第一个匹配。

#!python
>>> print re.search('super', 'superstition').span()
(0, 5)
>>> print re.search('super', 'insuperable').span()
(2, 7)

Sometimes you'll be tempted to keep using re.match(), and just add .* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a "C". The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a "C" is found.
有时你会想继续用 re.match(),只是在 RE 前面加上 .*。请抵制这种诱惑,改用 re.search()。正则表达式编译器会对 RE 做一些分析,以便加快查找匹配的速度。其中一项分析会推断出匹配的第一个字符必须是什么;例如,以 Crow 开头的模式必定从 "C" 开始匹配。有了这个分析,引擎可以快速扫描字符串寻找这个起始字符,只有找到 "C" 时才去尝试完整的匹配。

Adding .* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead.
添加 .* 会使这个优化失效,引擎不得不先扫描到字符串末尾,再回溯着为 RE 的其余部分寻找匹配。请改用 re.search()。
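
As a small illustration (not part of the original text), prefixing .* does find the substring, but only by scanning ahead and backtracking, which re.search() avoids:
作为一个简单的示意(并非原文内容),在前面加 .* 确实也能找到子串,但要靠向前扫描再回溯才行,而 re.search() 可以避免这一点:

#!python
>>> print re.match('.*super', 'insuperable').span()   # works, but scans to the end and backtracks
(0, 7)
>>> print re.search('super', 'insuperable').span()    # lets the engine use its optimizations
(2, 7)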

Greedy versus Non-Greedy(贪婪 vs 不贪婪)

When repeating a regular expression, as in a*, the resulting action is to consume as much of the pattern as possible. This fact often bites you when you're trying to match a pair of balanced delimiters, such as the angle brackets surrounding an HTML tag. The naïve pattern for matching a single HTML tag doesn't work because of the greedy nature of .*.
当重复一个正则表达式时,比如 a*,其结果是尽可能多地匹配。这个事实在你试图匹配一对对称的定界符(比如 HTML 标签两边的尖括号)时经常会带来麻烦。由于 .* 天性"贪婪",用来匹配单个 HTML 标签的朴素模式并不能正常工作。

#!python
>>> s = '<html><head><title>Title</title>'
>>> len(s)
32
>>> print re.match('<.*>', s).span()
(0, 32)
>>> print re.match('<.*>', s).group()
<html><head><title>Title</title>

The RE matches the "<" in "<html>", and the .* consumes the rest of the string. There's still more left in the RE, though, and the > can't match at the end of the string, so the regular expression engine has to backtrack character by character until it finds a match for the >. The final match extends from the "<" in "<html>" to the ">" in "</title>", which isn't what you want.
RE 先匹配 "<html>" 中的 "<",接着 .* 消耗掉字符串的其余部分。但 RE 中还剩下一个 >,它无法在字符串末尾匹配成功,因此正则表达式引擎不得不逐个字符地回溯,直到为 > 找到匹配。最终的匹配从 "<html>" 中的 "<" 一直延伸到 "</title>" 中的 ">",这并不是你想要的结果。

In this case, the solution is to use the non-greedy qualifiers *?, +?, ??, or {m,n}?, which match as little text as possible. In the above example, the ">" is tried immediately after the first "<" matches, and when it fails, the engine advances a character at a time, retrying the ">" at every step. This produces just the right result:
在这种情况下,解决方案是使用不贪婪的限定符 *?、+?、?? 或 {m,n}?,尽可能匹配小的文本。在上面的例子里, ">" 在第一个 "<" 之后被立即尝试,当它失败时,引擎一次增加一个字符,并在每步重试 ">"。这个处理将得到正确的结果:

#!python
>>> print re.match('<.*?>', s).group()
<html>

(Note that parsing HTML or XML with regular expressions is painful. Quick-and-dirty patterns will handle common cases, but HTML and XML have special cases that will break the obvious regular expression; by the time you've written a regular expression that handles all of the possible cases, the patterns will be very complicated. Use an HTML or XML parser module for such tasks.)
(注意,用正则表达式解析 HTML 或 XML 是很痛苦的。粗糙应急的模式能处理常见情况,但 HTML 和 XML 总有一些特殊情况会让这种"显而易见"的正则表达式失效;等你写出能处理所有可能情况的正则表达式时,模式已经变得非常复杂。这类任务请使用 HTML 或 XML 解析器模块。)

Not Using re.VERBOSE(不用 re.VERBOSE)

By now you've probably noticed that regular expressions are a very compact notation, but they're not terribly readable. REs of moderate complexity can become lengthy collections of backslashes, parentheses, and metacharacters, making them difficult to read and understand.
你现在可能已经注意到,正则表达式的写法非常紧凑,但可读性很差。中等复杂度的 RE 就可能变成反斜杠、圆括号和元字符的长串组合,很难读懂。

For such REs, specifying the re.VERBOSE flag when compiling the regular expression can be helpful, because it allows you to format the regular expression more clearly.
对于这样的 RE,在编译正则表达式时指定 re.VERBOSE 标志会很有帮助,因为它允许你把正则表达式的格式写得更清晰。

The re.VERBOSE flag has several effects. Whitespace in the regular expression that isn't inside a character class is ignored. This means that an expression such as dog | cat is equivalent to the less readable dog|cat, but [a b] will still match the characters "a", "b", or a space. In addition, you can also put comments inside a RE; comments extend from a "#" character to the next newline. When used with triple-quoted strings, this enables REs to be formatted more neatly:
re.VERBOSE 标志有几个作用。正则表达式中不在字符类内的空白符会被忽略,这意味着 dog | cat 这样的表达式与可读性较差的 dog|cat 等价,但 [a b] 仍然匹配字符 "a"、"b" 或空格。此外,你还可以在 RE 中加入注释;注释从 "#" 字符开始,直到下一个换行符为止。配合三引号字符串使用时,可以让 RE 的格式更加整洁:
#!python
pat = re.compile(r"""
\s*                 # Skip leading whitespace
(?P
[^:]+) # Header name \s* : # Whitespace, and a colon (?P.*?) # The header's value -- *? used to # lose the following trailing whitespace \s*$ # Trailing whitespace to end-of-line """, re.VERBOSE)
This is far more readable than:
这比下面这种写法可读性强多了:
#!python
pat = re.compile(r"\s*(?P
[^:]+)\s*:(?P.*?)\s*$")
Feedback(反馈)

Regular expressions are a complicated topic. Did this document help you understand them? Were there parts that were unclear, or problems you encountered that weren't covered here? If so, please send suggestions for improvements to the author.
正则表达式是一个复杂的主题。本文是否帮助你理解了它们?是否有些部分不够清晰,或者你遇到的问题这里没有涉及?如果是那样的话,请把改进建议发给作者。

The most complete book on regular expressions is almost certainly Jeffrey Friedl's Mastering Regular Expressions, published by O'Reilly. Unfortunately, it exclusively concentrates on Perl and Java's flavours of regular expressions, and doesn't contain any Python material at all, so it won't be useful as a reference for programming in Python. (The first edition covered Python's now-obsolete regex module, which won't help you much.) Consider checking it out from your library.
描述正则表达式最全面的书,几乎可以肯定是 Jeffrey Friedl 所著、O'Reilly 出版的《精通正则表达式》(Mastering Regular Expressions)。可惜该书只专注于 Perl 和 Java 风格的正则表达式,完全没有 Python 的内容,因此不宜用作 Python 编程的参考。(第一版讲的是 Python 现已过时的 regex 模块,帮助不大。)可以考虑从图书馆借阅。

About this document(关于本文档)

This document was generated using the LaTeX2HTML translator.
本文档使用 LaTeX2HTML 转换器生成。