竜 TatSu uses a variant of the standard EBNF syntax. Syntax definitions for VIM and for Sublime Text can be found under the etc/vim and etc/sublime directories in the source code distribution.
A grammar consists of a sequence of one or more rules of the form:
name = <expre> ;
If a name collides with a Python keyword, an underscore (_) will be appended to it in the generated parser.
Rule names that start with an uppercase character:
FRAGMENT = /[a-z]+/ ;
do not advance over whitespace before beginning to parse. This feature comes in handy when defining complex lexical elements, as it allows breaking them up into several rules.
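For instance, an identifier could be built from fragments; because the uppercase rules skip no whitespace, the pieces must match contiguously (an illustrative sketch, not part of any particular grammar):

identifier = IDENT ;
IDENT = IDENT_HEAD IDENT_TAIL ;
IDENT_HEAD = /[a-zA-Z_]/ ;
IDENT_TAIL = /\w*/ ;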
The parser returns an AST value for each rule depending on what was parsed: a single value, a list of AST, a dict-like object for rules with named elements, an object (when ModelBuilderSemantics is used), or None.
See the Abstract Syntax Trees and Building Models sections for more details.
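As a quick orientation, here is a minimal sketch of how such a grammar is compiled and used from Python; the grammar and input are illustrative:

import tatsu

grammar = r'''
    start = 'hello' name $ ;
    name = /\w+/ ;
'''

parser = tatsu.compile(grammar)
ast = parser.parse('hello world')
print(ast)  # a sequence with no named elements, e.g. ['hello', 'world']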
The expressions, in reverse order of operator precedence, are:
# comment
Python-style comments are allowed.
e1 | e2
Choice. Match either e1 or e2. A leading | may be used before the first option if desired:
choices
=
| e1
| e2
| e3
;
e1 e2
Sequence. Match e1 and then match e2.
( e )
Grouping. Match e. For example: ('a' | 'b').
[ e ]
Optionally match e.
{ e } or { e }*
Closure. Match e zero or more times. The AST is always a list.

{ e }+
Positive closure. Match e one or more times. The AST is always a list.

{}
Empty closure. Match nothing and produce an empty list as AST.
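A quick sketch of closure ASTs using the tatsu Python API (the grammar is illustrative):

import tatsu

parser = tatsu.compile("start = { 'a' } $ ;")
print(parser.parse('a a a'))  # e.g. ['a', 'a', 'a']
print(parser.parse(''))       # e.g. [] -- the empty closure case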
~
The cut expression. Commit to the current option and prevent other options from being considered, even if what follows fails to parse.
In this example, other options won’t be considered if a parenthesis is parsed:
atom
=
'(' ~ @:expre ')'
| int
| bool
;
s%{ e }+
Positive join. Inspired by Python's str.join(), it parses the same as this expression:
e {s ~ e}
yet the result is a single list of the form:
[e, s, e, s, e....]
Use grouping if s is more complex than a token or a pattern:
(s t)%{ e }+
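A sketch of the resulting AST for a positive join, with illustrative rule names:

import tatsu

grammar = r'''
    start = ','%{ number }+ $ ;
    number = /\d+/ ;
'''
parser = tatsu.compile(grammar)
print(parser.parse('1, 2, 3'))  # expected: ['1', ',', '2', ',', '3']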
s%{ e } or s%{ e }*
Join. Parses a list of s-separated expressions, or the empty closure. It is equivalent to:
s%{e}+|{}
op<{ e }+
Left join. Like the join expression, but the result is a left-associative tree built with tuple(), in which the first element is the separator (op) and the other two elements are the operands. The expression:
'+'<{/\d+/}+
Will parse this input:
1 + 2 + 3 + 4
To this tree:
(
'+',
(
'+',
(
'+',
'1',
'2'
),
'3'
),
'4'
)
op>{ e }+
Right join. Like the join expression, but the result is a right-associative tree built with tuple(), in which the first element is the separator (op) and the other two elements are the operands. The expression:
'+'>{/\d+/}+
Will parse this input:
1 + 2 + 3 + 4
To this tree:
(
'+',
'1',
(
'+',
'2',
(
'+',
'3',
'4'
)
)
)
s.{ e }+
Positive gather. Like positive join, but the separator is not included in the resulting AST.
s.{ e } or s.{ e }*
Gather. Like the join, but the separator is not included in the resulting AST.
It is equivalent to:
s.{e}+|{}
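The same sketch as above, with gather instead of join, shows the separators dropped from the AST (rule names are illustrative):

import tatsu

grammar = r'''
    start = ','.{ number }+ $ ;
    number = /\d+/ ;
'''
print(tatsu.compile(grammar).parse('1, 2, 3'))  # expected: ['1', '2', '3']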
&e
Positive lookahead. Succeed if e can be parsed, but do not consume any input.
!e
Negative lookahead. Fail if e can be parsed, and do not consume any input.
'text' or "text"
Match the token text within the quotation marks.
Note that if text is alphanumeric, then 竜 TatSu will check that the character following the token is not alphanumeric. This is done to prevent tokens like IN matching when the text ahead is INITIALIZE. This feature can be turned off by passing nameguard=False to the Parser or the Buffer, or by using a pattern expression (see below) instead of a token expression. Alternatively, the @@nameguard or @@namechars directives may be specified in the grammar:
@@nameguard :: False
or to specify additional characters that should also be considered part of names:
@@namechars :: '$-.'
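A sketch of the nameguard behavior, assuming the compiled model's parse() accepts the same nameguard setting as the Parser constructor (the grammar is illustrative):

import tatsu

parser = tatsu.compile(r"start = 'IN' /\w+/ $ ;")

try:
    parser.parse('INITIALIZE')  # 'IN' is followed by an alphanumeric, so this fails
except Exception as e:
    print('failed as expected:', type(e).__name__)

print(parser.parse('INITIALIZE', nameguard=False))  # e.g. ['IN', 'ITIALIZE']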
r'text' or r"text"
Match the token text within the quotation marks, interpreting text like Python's raw string literals.
?"regexp" or ?'regexp' or /regexp/
The pattern expression. Match the Python regular expression regexp at the current text position. Unlike other expressions, this one does not advance over whitespace or comments. For that, place the regexp as the only term in its own rule. The regex is interpreted as a Python raw string literal and passed with the re.MULTILINE | re.UNICODE options to the Python re module (or to regex, if available), using match() at the current position in the text. The matched text is the AST for the expression. Consecutive patterns are concatenated to form a single one.
/./
The any expression. Match the next position in the input. It works exactly like the ?'.' pattern, but is implemented at the buffer level, without regular expressions.
->e
The "skip to" expression; useful for writing recovery rules.
The parser will advance over input, one character at a time, until e matches. Whitespace and comments will be skipped at each step. Advancing over input is done efficiently, with no regular expressions involved. The expression is equivalent to:
{ !e /./ } e
A common form of the expression is ->&e, which is equivalent to:
{ !e /./ } &e
This is an example of the use of the “skip to” expression for recovery:
statement =
| if_statement
# ...
;
if_statement
=
| 'if' condition 'then' statement ['else' statement]
| 'if' statement_recovery
;
statement_recovery = ->&statement ;
`constant`
Match nothing, but behave as if constant had been parsed. Constants can be used to inject elements into the concrete and abstract syntax trees, perhaps avoiding having to write a semantic action. For example:
boolean_option = name ['=' (boolean|`true`) ] ;
rulename
Invoke the rule named rulename. To help with lexical aspects of grammars, rules with names that begin with an uppercase letter will not advance the input over whitespace or comments.
>rulename
The include operator. Include the right-hand side of rule rulename at this point. The following set of declarations:
includable = exp1 ;
expanded = exp0 >includable exp2 ;
Has the same effect as defining expanded as:
expanded = exp0 exp1 exp2 ;
Note that the included rule must be defined before the rule that includes it.
()
The empty expression. Succeed without advancing over input. Its value is None.
!()
The fail expression. This is actually ! applied to (), which always fails.
name:e
Add the result of e to the AST using name as key. If name collides with any attribute or method of dict, or is a Python keyword, an underscore (_) will be appended to the name.

name+:e
Add the result of e to the AST using name as key. Force the entry to be a list even if only one element is added.

@:e
The override operator. Make the AST for the complete rule be the AST for e. The override operator is useful to recover only part of the right-hand side of a rule without the need to name it or add a semantic action. This is a typical use of the override operator:

subexp = '(' @:expre ')' ;

The AST returned for the subexp rule will be the AST recovered from invoking expre.
@+:e
Like @:e, but make the AST always be a list. This operator is convenient in cases such as:
arglist = '(' @+:arg {',' @+:arg}* ')' ;
In which the delimiting tokens are of no interest.
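A sketch of the resulting AST for the arglist rule above, assuming arg is a simple name pattern (rule names are illustrative):

import tatsu

grammar = r'''
    start = arglist $ ;
    arglist = '(' @+:arg {',' @+:arg}* ')' ;
    arg = /\w+/ ;
'''
print(tatsu.compile(grammar).parse('(x, y, z)'))  # expected: ['x', 'y', 'z']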
$
The end-of-text symbol. Verify that the end of the input text has been reached.
When there are no named items in a rule, the AST consists of the elements parsed by the rule, either a single item or a list. This default behavior makes it easier to write simple rules:
number = /[0-9]+/ ;
Without having to write:
number = number:/[0-9]+/ ;
When a rule has named elements, the unnamed ones are excluded from the AST (they are ignored).
竜 TatSu allows rules to specify Python-style arguments:
addition(Add, op='+')
=
addend '+' addend
;
The argument values are fixed at grammar-compilation time.
An alternative syntax is available if no keyword parameters are required:
addition::Add, '+'
=
addend '+' addend
;
Semantic methods must be ready to receive any arguments declared in the corresponding rule:
def addition(self, ast, name, op=None):
...
When working with rule arguments, it is good practice to define a _default() method that is ready to take any combination of standard and keyword arguments:
def _default(self, ast, *args, **kwargs):
...
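A minimal sketch tying rule arguments and semantics together; the grammar, the MySemantics class, and the values are illustrative:

import tatsu

grammar = r'''
    start = addition $ ;
    addition(Add, op='+')
        =
        left:term '+' right:term
        ;
    term = /\d+/ ;
'''

class MySemantics:
    def addition(self, ast, name, op=None):
        # name and op come from the arguments declared on the rule
        return int(ast.left) + int(ast.right)

    def _default(self, ast, *args, **kwargs):
        return ast

print(tatsu.compile(grammar).parse('3 + 4', semantics=MySemantics()))  # expected: 7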
Rules may extend previously defined rules using the < operator. The base rule must be defined previously in the grammar.
The following set of declarations:
base::Param = exp1 ;
extended < base = exp2 ;
Has the same effect as defining extended as:
extended::Param = exp1 exp2 ;
Parameters from the base rule are copied to the new rule if the new rule doesn’t define its own. Repeated inheritance should be possible, but it hasn’t been tested.
A grammar rule may be redefined by using the @override decorator:
start = ab $;
ab = 'xyz' ;
@override
ab = @:'a' {@:'b'} ;
When combined with the #include directive, rule overrides can be used to create a modified grammar without altering the original.
The prefix to be used in classes generated by 竜 TatSu can be passed to the command-line tool using the -m option:
$ tatsu -m MyLanguage mygrammar.ebnf
will generate:
class MyLanguageParser(Parser):
...
The name can also be specified within the grammar using the @@grammar directive:
@@grammar :: MyLanguage
By default, 竜 TatSu generated parsers skip the usual whitespace characters with the regular expression r'\s+' using the re.UNICODE flag (or with the Pattern_White_Space property if the regex module is available), but you can change that behavior by passing a whitespace parameter to your parser.
For example, the following will skip over tab (\t) and space characters, but not over other typical whitespace characters such as newline (\n):
parser = MyParser(text, whitespace='\t ')
The character string is converted into a regular expression character set before starting to parse.
You can also provide a regular expression directly instead of a string. The following is equivalent to the above example:
parser = MyParser(text, whitespace=re.compile(r'[\t ]+'))
Note that the regular expression must be pre-compiled to let 竜 TatSu distinguish it from a plain string.
If you do not define any whitespace characters, then you will have to handle whitespace in your grammar rules (as it’s often done in PEG parsers):
parser = MyParser(text, whitespace='')
Whitespace may also be specified within the grammar using the @@whitespace directive, although any of the above methods will override the setting in the grammar:
@@whitespace :: /[\t ]+/
If the source language is case insensitive, it can be specified in the parser by using the ignorecase parameter:
parser = MyParser(text, ignorecase=True)
You may also specify case insensitivity within the grammar using the @@ignorecase directive:
@@ignorecase :: True
The change will affect token matching, but not pattern matching. Use (?i) in patterns that should ignore case.
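A sketch combining the directive with a case-insensitive pattern (the grammar and input are illustrative):

import tatsu

grammar = r'''
    @@ignorecase :: True
    start = 'select' name $ ;
    name = /(?i)[a-z]+/ ;
'''
print(tatsu.compile(grammar).parse('SELECT Name'))  # the token matches despite the casing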
Some languages must reserve the use of certain tokens as valid identifiers because the tokens are used to mark particular constructs in the language. Those reserved tokens are known as Reserved Words or Keywords.
竜 TatSu provides support for preventing the use of keywords as identifiers through the @@keyword directive and the @name decorator.
A grammar may specify reserved tokens by providing a list of them in one or more @@keyword directives:
@@keyword :: if endif
@@keyword :: else elseif
The @name decorator checks that the result of a grammar rule does not match a token defined as a keyword:
@name
identifier = /(?!\d)\w+/ ;
There are situations in which a token is reserved only in a very specific context. In those cases, a negative lookahead will prevent the use of the token:
statements = {!'END' statement}+ ;
竜 TatSu grammars support file inclusion through the include directive:
#include :: "filename"
The resolution of the filename is relative to the directory/folder of the source. Absolute paths and ../ navigations are honored.
The functionality required for implementing includes is available to all 竜 TatSu-generated parsers through the Buffer class; see the EBNFBuffer class in the tatsu.parser module for an example.
竜 TatSu supports left recursion in PEG grammars. The algorithm used is Warth et al.'s.
Sometimes, while debugging a grammar, it is useful to turn left-recursion support on or off:
parser = MyParser(
text,
left_recursion=True,
)
Left recursion can also be turned off from within the grammar using the @@left_recursion directive:
@@left_recursion :: False
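A sketch of a directly left-recursive rule that parses with the support enabled, which is the default (grammar and input are illustrative):

import tatsu

grammar = r'''
    @@left_recursion :: True
    start = expr $ ;
    expr = expr '+' term | term ;
    term = /\d+/ ;
'''
print(tatsu.compile(grammar).parse('1 + 2 + 3'))
# expected: a left-associated structure, e.g. [['1', '+', '2'], '+', '3']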
Comments

Parsers will skip over comments specified as a regular expression using the comments_re parameter:

parser = MyParser(text, comments_re=r"\(\*.*?\*\)")

For more complex comment handling, you can override the Buffer.eat_comments() method.

For flexibility, it is possible to specify a pattern for end-of-line comments separately:

parser = MyParser(
    text,
    comments_re=r"\(\*.*?\*\)",
    eol_comments_re=r"#.*?$",
)

Both patterns may also be specified within a grammar using the @@comments and @@eol_comments directives:

@@comments :: /\(\*.*?\*\)/
@@eol_comments :: /#.*?$/