"""Lexical analysis of formal languages (i.e. code) using Pygments."""

from docutils import ApplicationError

try:
    import pygments
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters.html import _get_ttype_class
    with_pygments = True
except ImportError:
    with_pygments = False

# Token-type names that are filtered out of the class arguments:
unstyled_tokens = ['token', 'text', '']


class LexerError(ApplicationError):
    pass


class Lexer(object):
    """Parse `code` lines and yield "classified" tokens.

    Arguments

      code       -- string of source code to parse,
      language   -- formal language the code is written in,
      tokennames -- either 'long', 'short', or 'none' (see below).

    Merge subsequent tokens of the same token-type.

    Iterating over an instance yields the tokens as ``(tokentype, value)``
    tuples. The value of `tokennames` configures the naming of the tokentype:

      'long':  downcased full token type name,
      'short': short name defined by pygments.token.STANDARD_TYPES
               (= class argument used in pygments html output),
      'none':  skip lexical analysis.
    """

    def __init__(self, code, language, tokennames='short'):
        """Set up a lexical analyzer for `code` in `language`."""
        self.code = code
        self.language = language
        self.tokennames = tokennames
        self.lexer = None
        # No lexical analysis for plain text or if it is switched off:
        if language in ('', 'text') or tokennames == 'none':
            return
        if not with_pygments:
            raise LexerError('Cannot analyze code. '
                             'Pygments package not found.')
        try:
            self.lexer = get_lexer_by_name(self.language)
        except pygments.util.ClassNotFound:
            raise LexerError('Cannot analyze code. '
                             'No Pygments lexer found for "%s".' % language)

    def merge(self, tokens):
        """Merge subsequent tokens of same token-type.

        Also strip the final newline (added by pygments).
        """
        tokens = iter(tokens)
        (lasttype, lastval) = next(tokens)
        for ttype, value in tokens:
            if ttype is lasttype:
                lastval += value
            else:
                yield (lasttype, lastval)
                (lasttype, lastval) = (ttype, value)
        if lastval.endswith('\n'):
            lastval = lastval[:-1]
        if lastval:
            yield (lasttype, lastval)

    def __iter__(self):
        """Parse self.code and yield "classified" tokens."""
        if self.lexer is None:
            # Lexical analysis is skipped; return the code as one unclassified token:
            yield ([], self.code)
            return
        tokens = pygments.lex(self.code, self.lexer)
        for tokentype, value in self.merge(tokens):
            if self.tokennames == 'long':
                # long CSS class arguments, e.g. ['token', 'keyword']:
                classes = str(tokentype).lower().split('.')
            else:
                # short CSS class arguments, e.g. ['k']:
                classes = [_get_ttype_class(tokentype)]
            classes = [cls for cls in classes if cls not in unstyled_tokens]
            yield (classes, value)


class NumberLines(object):
    """Insert linenumber-tokens at the start of every code line.

    Arguments

       tokens    -- iterable of ``(classes, value)`` tuples
       startline -- first line number
       endline   -- last line number

    Iterating over an instance yields the tokens with a
    ``(['ln'], '<the line number>')`` token added for every code line.
    Multi-line tokens are split.
    """

    def __init__(self, tokens, startline, endline):
        self.tokens = tokens
        self.startline = startline
        # Pad the line numbers, e.g. endline == 100 -> fmt_str == '%3d ':
        self.fmt_str = '%%%dd ' % len(str(endline))

    def __iter__(self):
        lineno = self.startline
        yield (['ln'], self.fmt_str % lineno)
        for ttype, value in self.tokens:
            lines = value.split('\n')
            for line in lines[:-1]:
                yield (ttype, line + '\n')
                lineno += 1
                yield (['ln'], self.fmt_str % lineno)
            yield (ttype, lines[-1])
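

# Minimal usage sketch (not part of the original module): it assumes the
# Pygments package is installed and shows how Lexer and NumberLines can be
# chained to obtain per-line, CSS-classified tokens. The sample code string
# and the 1..2 line range are illustrative assumptions only.
if __name__ == '__main__':
    sample = 'x = 1\nprint(x)\n'
    tokens = Lexer(sample, 'python', tokennames='short')
    # NumberLines prefixes every code line with a (['ln'], '<line number> ') token:
    for classes, value in NumberLines(tokens, startline=1, endline=2):
        print(classes, repr(value))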