This program converts XML documents to alternative formats. Currently translations for tex and html format are available, but others are possible.
Access the link here to download the source code of this document, or download the tangled program itself via the link there (you will need the program first if you want to process the source file!)
The AXE program converts XML source documents to TeX or HTML documents. To use the program, type
axe -t html file.xml
axe -t tex file.xml
The source document is required only to be well-formed in the XML sense. The translations are defined in an external file, called the translation file. The translation file is identified by composing the document type (the name of the root element of the source XML document) with the target format (currently "tex" or "html"). This file is retrieved from a search path specified by the environment variable XMLLIB.
For example, if XMLLIB contains the string

.:~/lib/xml

and axe is invoked with

axe -t html fred.xml

where fred.xml contains

<article> <section><title>Fred's Article</title> <p>Blah, blah, blah, ...</p> </section> </article>

then the translation file article.html is searched for first in the current directory, and then in
~/lib/xml
. If it doesn't exist there, it
is an error.
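The lookup just described — compose the document type with the target format, then walk the XMLLIB path — can be sketched in a few lines of Perl. This is an illustration with an invented subroutine name, not the actual AXE code:

```perl
use strict;
use warnings;

# Sketch: find the translation file "$doctype.$target" on the
# colon-separated search path in $xmllib. First hit wins.
sub find_translation_file {
    my ($doctype, $target, $xmllib) = @_;
    foreach my $dir (split /:/, $xmllib) {
        my $candidate = "$dir/$doctype.$target";
        return $candidate if -f $candidate;
    }
    die "no translation file $doctype.$target on path $xmllib\n";
}
```

With XMLLIB set to `.:~/lib/xml` and a root element of article, this would first try `./article.html` and then `~/lib/xml/article.html`.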
Entries in the translation file fall into three main groups:
For example, if the document to be translated contains

<uri href="there">Go There</uri>

and the translation file contains the entry

<uri><A HREF="@<href>">.^^.</A></uri>

then the translated output is

<A HREF="there">Go There</A>

which a browser renders as

Go There

If the content of the uri element instead contained other elements, these would be translated according to their specific rules. This is indicated by the inner context flag ^^, which indicates where the translated element content is to be inserted.
Comments are indicated by a leading non-blank character of #. Subsequent text up to the end of the line is ignored.
Include commands have the form:

include filename

where filename is a relative or absolute filename. If relative, it is searched for in the current directory, i.e., the directory in which axe was invoked (not any directory visited by other includes, nor the directory containing the including file; note that it is intended to change this rule).
These define the translations to be applied upon
recognizing the various start and end tags for each element
in the document. The translation is enclosed in matching
start and end XML tags, and is free format in that blanks
and new lines may be used to improve readability. The
translation itself consists of various replacement fragments
or texts, which can be string literals or code
fragments. The latter are Perl code fragments executed in
the AXE
environment. Each replacement fragment
is separated from adjacent ones by the Perl concatenation
operator ".
"
The translation is usually in three parts, corresponding to
the prefix translation, applied before the element
content is translated, then the element content
itself, then the postfix translation, applied after
the element content is translated. The element content is
indicated either by a bare variable name, where the variable name is the same as the element tag, or by the characters ^^. Either the prefix or the postfix translation can be empty.
Where the element content indicator does not appear, the translation is treated as a prefix translation only. Note that this means that there is no translation of the element content: if there is any, it will not appear in the translated document at this point. This is usually used for empty elements, but it can also be used to store content for later use, since any content that does appear is saved under the eponymous variable name (that is, the variable with the same name as the element tag).
String replacement texts may be singly or doubly quoted, as
in Perl. Singly quoted strings are not evaluated, meaning
that everything within them is treated literally. Doubly
quoted strings are evaluated (in the AXE
environment), which means that variables are replaced by
their values, and escaped characters are replaced by their
escaped values (\n
becomes a new line,
\t
becomes a tab, and so on). A string may be
unquoted, in which case it must not contain any full
stops/periods, and it will not be evaluated for escaped
characters or variables.
Code fragments are enclosed in outer matching curly braces
{}
, and are evaluated in the AXE
environment. Calls to interface routines must therefore be
prefaced with a main::
prefix. Note that the
value of the last expression in the code fragment is the
value of that fragment, and is appended to the replacement
text being assembled. Note that if the code fragment
consists of just a single variable reference, the enclosing
braces may be omitted.
Within the replacement text (all forms), the sequence @<attr> is replaced by the value of the attribute attr. If this attribute is not present in the original XML tag, an error is flagged. The sequence @?<attr> is replaced by an expression that evaluates to true if the attribute is present, and false otherwise.
Examples of string translations:

Translation | Meaning |
---|---|
<uri> '<A>' .$uri. "</A>" </uri> | Replace the XML uri element with the equivalent HTML element <A>inner text</A>. |
<uri> <A HREF="@<href>"> .^^. </A> </uri> | Replace the XML element <uri href="blah">inner text</uri> with the equivalent HTML element <A HREF="blah">inner text</A>. |
<uri href> <A HREF="@<href>"> .^^. </A> </uri> | As above, but with the href attribute declared as required in the tag pattern itself. |

Note that in both of the last two cases an href attribute is required. It is an error if the attribute is not present: in the former case the error is detected by the translator, in the latter by the parser.
Examples of code translations:

Translation | Meaning |
---|---|
<today>$today</today> | Replace the XML today element (presumably empty: nothing is done with the content) with the value of the variable $today (which has presumably been set somewhere else). |
<section> {main::enter_section(0)} .^^. {main::exit_section(0)} </section> | Call the enter_section routine before translating the inner content, and exit_section after it. |
A translation command has the general form (in BNF):

ElementTranslation = start prefix ['.' inner ['.' postfix]] end
                   | start inner ['.' postfix] end .
start       = '<' elementname '>' .
prefix      = translation ('.' translation)* .
inner       = '^^' | '$' elementname .
postfix     = translation ('.' translation)* .
translation = single | double | code | string | variable | conditional translation .
conditional = '?' '<' attributename '>' .
single      = "'" non-single-quote-char* "'" .
double      = '"' non-double-or-escaped-char* '"' .
code        = '{' valid-Perl-code-fragment '}' .
string      = non-period-char* .
variable    = scalar-Perl-variable .
end         = '</' elementname '>' .

Here non-single-quote-char is any character but ', non-double-or-escaped-char is any character but ", or the pair of characters \ followed by any character, non-period-char is any character but ., and scalar-Perl-variable and valid-Perl-code-fragment are the appropriate (syntactically correct) Perl elements.
"axe" 1=
This defines all the components of the main program.
<define where is the perl interpreter 2>=
Define the interpreter to use. The -w flag is there mainly out of habit. (It warns if you do various (slightly) naughty things.)
<define package use 3>=
Load the XML Parser module and the Getopt module. I looked for the "short" options, but apparently they're out of date. FileHandle is now used to open the translation file, to allow nested include files (version 2.7ff).
<process options 4>=
Read and process the options. At the moment there are three:
<handle auxiliary files 5>=
Now grab the (assumed) only parameter, which is the input
source file name, and reconstruct the output file name, with an
extension given by the target language. For example, if you call
this program with the shell line axe -t tex loadpw.xml, then output
will be written to a file loadpw.tex.
Open the output file for writing.
An auxiliary file AUX is also opened, with extension .aux.
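The name surgery amounts to swapping the source extension for the target's. A sketch (the subroutine name is invented for illustration):

```perl
use strict;
use warnings;

# Sketch: derive the output file name from the source file name by
# replacing its extension with the target format's extension.
sub output_name {
    my ($source, $target) = @_;
    (my $out = $source) =~ s/\.[^.\/]*$//;   # strip the old extension
    return "$out.$target";
}
```

The same transformation with target "aux" yields the auxiliary file name.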
<read and process translation file 6>=
Now open and read the file containing the style transformations. This has a main name of the root element (or otherwise specified explicitly with the -d switch), and an extension corresponding to the target language (determined by the -t switch).
The variable $xflev is used to count the number of nested open translation files, and indexes into the stack of open translation files @xfhandle. %transtable is the hash used to store all the translations.
<parse and process the XML file 7>=
This defines all the interface with the XML Parser. We set an
error context of 4 lines. The NoExpand
option
doesn't appear to work. We also set the various handler
linkages, and then do the parse of the input document.
<define all handlers and subroutines 8>=
The translation file is the real secret to this program. It works on the principle that every tag has some form of processing attached to it that defines the style of the output document. By separating the translation specification from this program, we have a more generic program that can be used in a range of contexts. Hence each translation file defines the particular translations needed to map the document into the various formats.
Currently we have TeX and HTML formats defined, but there's no
reason why others could not be defined. Simply build the file
doctype.format
, where doctype is the root element
that you wish to translate, and format is the format you wish to
translate it to. For example, litprog.html
translates literate program documents (as generated by XLP) into
HTML documents.
Here are some example translation lines:
File | Line | Meaning |
---|---|---|
litprog.html | <itemize><UL> | translate the XML tag itemize to the HTML tag UL |
litprog.html | <itemize/item><LI> | translate the XML tag item to the HTML tag LI when it is a child of the itemize element. |
litprog.tex | <itemize>\begin{itemize} |
translate the XML tag itemize to the TeX command
\begin{itemize} |
litprog.tex | <itemize/item>"\item " |
translate the XML tag item to the TeX command
\item (followed by a space) when it is a child of
the itemize element. |
litprog.html | &gt;">" |
translate the entity &gt; to a greater-than character |
<get document type 9>=
If we don't have an explicit specification of the document type, read the input file, searching for either a processing instruction for this program (axe), or the root element start tag. In the former case, use the attribute, in the latter case, use the root element name as the base name for the translation file.
The document type defines what translation file to read, and is defined by the root element tag. Read the source file to find out the root element type and use that. We collect the tag as far as the first blank or closing angle bracket.
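The root-element scan can be sketched as follows. This is an illustration with an invented name, operating on the document text as a string (the real code reads the file a line at a time, and also recognizes an axe processing instruction):

```perl
use strict;
use warnings;

# Sketch: return the name of the first real start tag in the
# document, skipping declarations, comments, PIs, and end tags.
sub doctype_of {
    my ($xml) = @_;
    while ($xml =~ /<([^>]+)>/g) {
        my $tag = $1;
        next if $tag =~ m{^[?!/]};   # skip <?...?>, <!...>, </...>
        $tag =~ s/[ \t\n].*//s;      # keep name up to first blank
        return $tag;
    }
    return undef;
}
```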
<process a translation line 10>=
On entry to this chunk, $_
contains the current
translation line.
Each input line of the transform file consists of two fields. The first field is mandatory, and is either the xml start or end tag, marked up in angle brackets, or an entity designation. The second field is the target translation, and extends to the end of line.
Text can extend across a new line boundary by escaping the
new line with a \
character, that is, if the last
character on a line is the \
character, another
line is read and appended to the partially constructed line.
Leading blanks on these lines are stripped, while multiple
trailing blanks are reduced to a single blank. This allows
indentation to be stripped to save space, but an essential
separating blank can be inserted by using at least one
trailing blank.
The translation can be a literal, or a piece of code that is
evaluated instead. In the latter case, the code is enclosed in braces: {}.
The code above does the basic assembly of translation lines.
We check to see if the current line as read ends with a back
slash \
. While it does, we chop it off, replace
multiple trailing blanks with a single one, read the next line
and excise any leading blanks from that, and append the newly
read material to the existing line.
If there is a trailing back slash on the newly read line, the process is repeated.
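The folding loop just described can be sketched like this. For illustration the input is modelled as an array of lines rather than a file handle, and the subroutine name is invented:

```perl
use strict;
use warnings;

# Sketch: assemble one logical translation line, folding in
# continuation lines that end with a backslash.
sub assemble_line {
    my ($lines) = @_;              # reference to remaining input lines
    my $line = shift @$lines;
    chomp $line;
    while ($line =~ s/\\$//) {     # trailing backslash: fold on
        $line =~ s/ +$/ /;         # multiple trailing blanks -> one
        last unless @$lines;
        my $next = shift @$lines;
        chomp $next;
        $next =~ s/^\s+//;         # strip leading blanks
        $line .= $next;
    }
    return $line;
}
```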
Finally, we have a fully assembled translation line. Process and save this translation.
<read an element translation 11>=
We have recognized an element translation. Read lines until
we find the matching end tag, barfing if we hit end of file in
the process. The last thing we do is to construct the form
<tag>==translation
as an artifact to assist in
subsequent processing of the translation (see chunk <process and store translation 56>).
<read an entity translation 12>=
<skip blank and comment lines 13>=
Ignore blank or comment lines, and cycle for next line of input.
<define start handler 14>=
Each time the start handler is called, we use the element tag as a key to access the translation table. If no entry is found, a warning message is printed and the tag ignored. Otherwise, the tag is translated into the output document according to the specification in the translation file, now stored in the translation table.
<define end handler 15>=
The handling of end tags is very similar to the handling of start
tags (see <define start handler 14>), except that we define the
translation table key as the concatenation of the end character
/
with the element tag.
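The two lookups can be sketched together. This ignores contexts and attribute substitution, and the subroutine names are invented; only the table name %transtable and the "/" key convention come from the text:

```perl
use strict;
use warnings;

# Sketch of the start/end handler lookups into the translation table.
our %transtable;

sub start_translation {
    my ($tag) = @_;
    return $transtable{$tag} if exists $transtable{$tag};
    warn "no translation for <$tag>\n";   # warn and ignore the tag
    return '';
}

sub end_translation {
    my ($tag) = @_;                       # key is '/' . tag
    return exists $transtable{"/$tag"} ? $transtable{"/$tag"} : '';
}
```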
<check and resolve for multiple translations 16>=
When multiple translations are given, it means that there are
context sensitive translations. These are indicated by the
translation value being a reference to a table of contexts,
rather than the string value of the translation itself.
Multiple translations are resolved against the context tree,
working our way up the current path. If no match is found, we
check the default context (indicated by a key of just ~).
Ancestor contexts are not properly handled. If an immediate context is not found, the algorithm simply searches all contexts, without taking note of the translation specification. This is currently a kludge. The algorithm used is the following two-stage process: first, the immediate context is tried; failing that, each of the keys(%$translation) values is tried in turn, and the first match is the correct one to use.
<get full context list 17>=
We get the current context from the Expat variable context. One special case needs attention: if we haven't seen the root element yet, the context is void, so we set the context to the special global context flagged as ~.
<perform the translation 18>=
This is the heart of the translation algorithm. While there
is still some translation to be made, extract the next fragment up
to and including the next .
or end-of-line. There are
several choices: doubly quoted strings (which are evaluated);
singly quoted strings (which are not); code fragments (enclosed in
curly braces, and are evaluated); bare variables
($varname, evaluated); and bare strings (unquoted,
but must not contain . characters). Anything else is
flagged as suspect, and an evaluation attempt made. The
fall-through else should never get executed. The outermost else
is there to ensure that the final debug statement doesn't get an
undefined value.
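A much-simplified sketch of this evaluator follows. It splits on the "." concatenation operator and so assumes no fragment contains an embedded dot (the real code must parse more carefully); the subroutine name is invented:

```perl
use strict;
use warnings;

# Sketch: evaluate a translation by taking it one fragment at a time.
sub translate_sketch {
    my ($translation) = @_;
    my $out = '';
    foreach my $frag (split /\./, $translation) {
        if    ($frag =~ /^'(.*)'$/s)   { $out .= $1 }              # single-quoted: literal
        elsif ($frag =~ /^"(.*)"$/s)   { $out .= eval qq{"$1"} }   # double-quoted: interpolated
        elsif ($frag =~ /^\{(.*)\}$/s) { $out .= eval $1 }         # code fragment
        elsif ($frag =~ /^\$\w+$/)     { $out .= eval "no strict 'vars'; $frag" }  # bare package variable
        else                           { $out .= $frag }           # bare string
    }
    return $out;
}
```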
<define attribute substitutions 19>=
This routine is called to replace any strings of the form
@<attr>
or @?<attr>
in the
translation. These are replaced with the actual value of the
attribute attr
in the first case (an error message is issued if it is not defined in the input source), or with the boolean value 1 or "" in the second case, depending upon whether or not the attribute is defined in the input source.
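The substitution can be sketched with two regular expressions. Here %$attr stands for the current start tag's attributes, and the subroutine name is invented:

```perl
use strict;
use warnings;

# Sketch of @<attr> and @?<attr> substitution in a translation text.
sub subs_attr_sketch {
    my ($text, $attr) = @_;
    # @?<attr>: presence test, replaced by 1 or ""
    $text =~ s/\@\?<(\w+)>/exists $attr->{$1} ? '1' : '""'/ge;
    # @<attr>: the attribute value itself; an error if absent
    $text =~ s/\@<(\w+)>/
        exists $attr->{$1} ? $attr->{$1}
                           : die "missing attribute $1\n" /ge;
    return $text;
}
```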
<check, translate and store a conditional result 20>=
<check and translate a doubly quoted string 21>=
A doubly quoted string is evaluated to interpret nested variables, escaped characters and the like. Note that the inclusion of the non-backslash character means that the string must contain at least one character, hence the alternative pattern to match the null string.
<check and translate a singly quoted fragment 22>=
<check and translate a code fragment 23>=
<check and translate a bare string 24>=
<check and translate a bare attribute 25>=
<check and translate a bare variable 26>=
Note that we do not need to substitute for attribute markers in this case.
<check for suspect translation 27>=
<define char handler 28>=
We rely on newlines coming in one at a time for this to work.
<define char handler 29>=
<define proc handler 30>=
My original intention was to have this handle a processing instruction for AXE. However, the timing of reading the translation file is not quite right, since we have to do that before reading any translations. My current thinking is to use a "just-in-time" reading, but since this involves a little rearrangement of code, I haven't plucked up the courage to do it just yet!
<define default handler 31>=
<define entity handler 32>=
We want to allow translation files to be stored in a range of
places. The environment variable XMLLIB is used to define
the directories to be searched for the target files. If this is not
defined, use "." as default. I use an XMLLIB declaration of ".:/home/ajh/lib/xml".
<open the translation file 33>=
Translation files may include other translation files. Definitions in the included file are processed as they are read, so they will override existing definitions, and in turn their definitions may be overridden by subsequent definitions, either in-line or in other included files.
<handle included translation file 34>=
<do initializations 35>=
<define translation file routines 36>=
<define translation file routines 37>=
<define translation file routines 38>=
Set the initial value of the include directory to be the current working directory.
On occasions, we do not want to output the element text
directly, but rather buffer it up for later handling. This
code does that. We use a flag variable $divert
to turn on this diversion, and a string variable
$content
to save the element content.
We can also turn output on and off. This is done with the
global flag output_active
, which is turned on and
off with the two routines on_output
and
off_output
.
<do initializations 39>=
<define output active routine 40>=
There are various things we might want to do with white space. One is to ignore newline characters, and translate them to blanks. Another is to collapse multiple sequences of blanks and/or newlines to a single blank. Here are routines to turn these features on and off.
<define subroutines 41>=
<do initializations 42>=
<define ignore newlines routine 43>=
There are occasions when we want to ignore newlines in the xml code, and not pass them through. Calling this routine with a TRUE parameter will ignore further newlines until the routine is called again with a FALSE parameter.
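Both filters reduce to simple substitutions gated by a flag. A sketch using the flag names from the identifier index ($ignorenl, $collapseblnk); the filter routine itself is an invented illustration:

```perl
use strict;
use warnings;

# Sketch of the two white-space filters.
our $ignorenl     = 0;   # newlines become blanks when set
our $collapseblnk = 0;   # runs of blanks/newlines collapse when set

sub filter_ws {
    my ($text) = @_;
    $text =~ s/\n/ /g     if $ignorenl;
    $text =~ s/[ \n]+/ /g if $collapseblnk;
    return $text;
}
```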
<define collapse blanks 44>=
<define subroutines 45>=
<define subroutines 46>=
<define subroutines 47>=
<define attribute handling routines 48>=
missing_attr
is called when there is an
attempt to use an attribute that has not been defined in
the source text.
Besides the included translation files described above,
we also may want to include source files. To do this we
define a support routine nested_source
.
This takes a parameter defining the name of the file to be
included, and parses this file, using the same parsing
parameters as for the main source document.
<define subroutines 49>=
<define subroutines 50>=
In version 3, we keep track of all components of the
translation. The array @element_start
contains
the index of the start point of every currently open element.
When an element is closed, this substring is copied into a
variable of the same name in the AXE
namespace.
This variable is available for use elsewhere, unless
overwritten by an outer or subsequent element with the same
name.
<do initializations 51>=
The variable $CONTENT
accumulates all text fed
to the output
routine.
@element_start
records the starting indices of
all currently open elements.
$element_depth
records the number of currently
open elements.
<define output routine 52>=
The actual output is done by the output
routine,
which buffers all output into the $CONTENT
variable.
<define subroutines 53>=
<define subroutines 54>=
<define subroutines 55>=
Return the source file name as a string. Note that this name is always converted to an absolute pathname.
We want to be able to change the translations of various
elements depending upon the context. For example, the
<item>
tag should translate to the
<LI>
tag in the itemize element, but a
<DD>
tag in the description element.
XSLT uses a context sensitive notation for its translations, so
I'll follow that style for now, at least until I can find a
good reason for doing something different. The notation
context/tag
will mean recognize tag
only when it has the immediate parent element
context
, and the notation context//tag
will mean recognize tag
only when it has some
ancestor context
.
For example, the item context might be specified with something like:
<description/item><DD> </description/item></DD>
<itemize/item><LI> </itemize/item></LI>
To do this, we need to be able to examine the context of the
current element. The Expat method call context
will do this for us.
We also need a data structure to handle the multiplicity of translations that may be invoked. We will assume that each relevant translation will be called in some (yet to be determined) order. If there is more than one translation for a given tag, we need to build a list of these translations. This is relevant for chunk <process a translation line 10>.
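The context-sensitive lookup can be sketched as a keyed search: prefer the "parent/tag" entry, fall back to the bare tag. This is an illustration only; the real table also handles "//" ancestor contexts, the ~ root context, and lists of multiple translations:

```perl
use strict;
use warnings;

# Sketch of context-sensitive translation lookup.
our %transtable;

sub lookup_translation {
    my ($tag, @context) = @_;     # ancestors, innermost last
    my $parent = @context ? $context[-1] : '~';
    return $transtable{"$parent/$tag"}
        if exists $transtable{"$parent/$tag"};
    return $transtable{$tag};     # context-free fallback
}
```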
<process and store translation 56>=
Look at the line. We expect it to have the form
<tag>==translation
if it is a tag
translation, or &name;translation
if it is
an entity translation. Match this and check that both
tag
and translation
get specified
correctly. (The double equals is inserted in chunk <read an element translation 11> to simplify the pattern matching
here.)
<process element and translation 57>=
Then strip off a) any material up to and including a
slash if present (this is the context of the
element, field $1
), and b) the element tag
(which must be present, field $2
). Carp if
the latter (field 3) is not found.
<process element and translation 58>=
See if the translation has the standard form
prefix.$tag.
postfix, or the
alternate form
prefix.^^.
postfix. If so,
split it into these parts, then reinsert the
$tag.
at the start of the postfix translation, to indicate that element content is to be output at that point.
If the inner pattern is @@
, then no inner
context is emitted, and we do not add the $tag
to the start of the postfix translation.
<inner pattern 59>=
The inner context is defined by one of three forms:
$tag, with the same interpretation as the next form (this form is deprecated, but retained for compatibility);
^^, indicating that the inner context should be output at that point;
@@, indicating that the inner context should not be output.
<process element and translation 60>=
Split the translation into prefix,content,postfix. If this match is not possible, assume that all the translation is a prefix one.
<define pre/postfix translation 61>=
Store the translation in a hash table indexed by the element
tag. The $context
may be empty, tag/, or tag//.
See chunk <perform the translation 18> for how this is used in
the parse phase.
<process entity and translation 62>=
Handling entity definitions is pretty straightforward. We just store the translation in an associative array entity, indexed by the entity name.
A common task in documents is defining numbered sections and constructing a table of contents. Of particular relevance in the html version is hyperlinking from the table of contents to the sections. The code in this section relates to handling these issues.
<do initializations 63>=
First we define and initialize some data structures. The array
@sectionnos
stores the section number for each
level of nesting, while $sectionlevel
defines the
current depth of sectioning. The array
@sectiontitles
stores the section titles,
initially read in from the AUX file, but updated as each new
section title is processed.
<define subroutines 64>=
enter_section
is called whenever we enter a new
section, subsection, etc. It checks that we have the proper
sequence of nesting, and starts a new numbering counter.
<define subroutines 65>=
enter_section
is called whenever we detect that a
new section, subsection, subsubsection, etc., is entered. The
parameter passed is the expected nesting level. The
title string is now passed by a separate call to
section_title
. The titles are assembled into an
array of title strings (including the section numbers) for later
use in the table of contents.
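The numbering part of this machinery can be sketched as follows. The counter array and level variable use the names from the text; returning the dotted number string is an illustrative simplification (the real routines also handle titles and the AUX file):

```perl
use strict;
use warnings;

# Sketch of nested section numbering.
our @sectionnos;             # counter for each nesting level
our $sectionlevel = -1;      # current depth, -1 when outside any section

sub enter_section {
    my ($level) = @_;
    $sectionlevel = $level;
    $sectionnos[$level]++;               # bump this level's counter
    splice @sectionnos, $level + 1;      # restart deeper counters
    return join '.', @sectionnos[0 .. $level];
}

sub exit_section { $sectionlevel-- }
```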
<define subroutines 66>=
As we leave each section, all that needs to be done is to update the section nesting level.
<define subroutines 67>=
Section titles require a two-pass algorithm over the source file, since the XML parser only does a single pass. (If we used a tree-driven parser, things might be different here.) The titles are stored in an auxiliary file which is read in at the start of operations, so that they can be output when the appropriate table of contents command is encountered, which may be before all titles are processed.
As we go, the titles from the aux file are compared with the newly read titles, and an error message issued if they differ.
<read AUX file at start 68>=
Read the AUX file (named as source file with extension changed to .aux) before starting any processing. This allows us to have an array of section titles for use in generating any table of contents.
<define subroutines 69>=
<do finalizations 70>=
Write out the up-to-date list of section titles.
"Makefile" 71=
Date | Author | Version | Change |
---|---|---|---|
29 Nov 1999 | John Hurst | 1.0 | initial markup (as "xml") |
30 Nov 1999 | John Hurst | 1.0.1 | move DTD to external |
01 Dec 1999 | John Hurst | 1.0.2 | add tex capability |
01 Dec 1999 | John Hurst | 1.0.3 | add GetOpt |
01 Dec 1999 | John Hurst | 1.0.4 | add element attributes |
09 Dec 1999 | John Hurst | 2.0 | read conversions from .lang file |
09 Dec 1999 | John Hurst | 2.0.1 | allow double quotes around translations |
10 Dec 1999 | John Hurst | 2.0.2 | allow folded lines on translation file input |
13 Dec 1999 | John Hurst | 2.1 | read document type from input document |
14 Dec 1999 | John Hurst | 2.1.1 | ... and improvements on same |
14 Dec 1999 | John Hurst | 2.2 | add library path in env var XMLLIB |
15 Dec 1999 | John Hurst | 2.2.1 | minor changes and tidy ups |
16 Dec 1999 | John Hurst | 2.2.2 | entity handling |
21 Dec 1999 | John Hurst | 2.2.3 | divert element contents |
24 Dec 1999 | John Hurst | 2.2.4 | ... and in start/end handlers, too |
04 Jan 2000 | John Hurst | 2.2.5 | add ignore CR routine |
04 Jan 2000 | John Hurst | 2.2.6 | skip blank lines in translation file, warn if no translation encountered |
05 Jan 2000 | John Hurst | 2.2.7 | add debug option |
09 Jan 2000 | John Hurst | 2.3.0 | convert to XML lit prog |
13 Jan 2000 | John Hurst | 2.4.0 | add context sensitive translation |
14 Jan 2000 | John Hurst | 2.4.1 | add entity handling |
15 Jan 2000 | John Hurst | 2.4.2 | define entity translations in translation file |
25 Jan 2000 | John Hurst | 2.5.0 | add internal section numbers, titles, and table of contents |
25 Jan 2000 | John Hurst | 2.5.1 | revise extraction of document type |
06 Mar 2000 | John Hurst | 2.6.0 | revise section interface procedures |
11 Mar 2000 | John Hurst | 2.7.0 | change XFR file to $xfhandle FileHandle |
12 Mar 2000 | John Hurst | 2.7.1 | add "include file" to translations file |
19 Mar 2000 | John Hurst | 2.7.2 | rename as AXE |
19 Mar 2000 | John Hurst | 2.8.0 | read source from file handle to allow source inclusions |
20 Mar 2000 | John Hurst | 2.8.1 | keep blanks on end of escaped line |
24 Mar 2000 | John Hurst | 2.9.0 | (debug) keep blanks on end of escaped line, change translation file trace, add identifier hot links and cross referencing |
25 Mar 2000 | John Hurst | 2.9.1 | additions to User Manual |
26 Mar 2000 | John Hurst | 2.9.2 | modify translation data structures and lookup; add simple attribute matching to tags |
02 Apr 2000 | John Hurst | 2.9.3 | updated debugging trace |
11 Apr 2000 | John Hurst | 3.0.0 | revise translation file format |
12 Apr 2000 | John Hurst | 3.0.1 | new translation file format bug fixes |
13 Apr 2000 | John Hurst | 3.0.2 | move translation script variables into separate name space |
17 Apr 2000 | John Hurst | 3.0.3 | update User Manual |
18 Apr 2000 | John Hurst | 3.1.0 | revise trans file format (again) |
19 Apr 2000 | John Hurst | 3.1.1 | add attribute translation and check |
20 Apr 2000 | John Hurst | 3.1.2 | documentation updates, and add conditional translations |
22 Apr 2000 | John Hurst | 3.1.3 | add subroutine call when attributes undefined |
26 Apr 2000 | John Hurst | 3.1.4 | fixed bug with variable setting, added warning about inner content using variable names |
28 Apr 2000 | John Hurst | 3.2.0 | modified include file handling (needs more work still). |
04 May 2000 | John Hurst | 3.2.1 | improved translation file handling. |
12 May 2000 | John Hurst | 3.2.2 | fix bug in output of evaluation trace. |
16 May 2000 | John Hurst | 3.2.3 | add current element and interface |
27 May 2000 | John Hurst | 3.2.4 | add source file name interface |
15 Jun 2000 | John Hurst | 3.2.5 | add ignore blanks interface |
17 Jun 2000 | John Hurst | 3.3.0 | aux file not written if empty |
18 Sep 2000 | John Hurst | 3.3.1 | add PI to define translation file |
20 Nov 2000 | John Hurst | 3.4.0 | hacked ancestor translation support -- needs more work |
21 Nov 2000 | John Hurst | 3.4.1 | fixed bug -- still needs more work |
AUX: <68>, <70>, <68>, <5>.
auxfile: <68>, <70>, <68>.
collapseblnk: <42>, <44>, <42>, <29>.
CONTENT: <51>, <70>, <53>, <52>, <51>, <29>.
element_depth: <51>, <53>, <51>.
element_start: <51>, <53>, <51>.
end: <53>, <15>, <8>.
end_handler: <15>, <49>, <15>, <7>.
enter_section: <64>, <65>.
exit_section: <66>.
ignorenl: <42>, <43>, <42>, <28>.
ignore_newlines: <43>.
missing_attr: <48>, <19>.
nested_source: <49>.
nested_target: <50>.
off_output: <40>.
on_output: <40>.
parser: <7>, <49>, <7>.
section_string: <67>, <65>.
section_title: <65>.
sourcefile: <5>, <68>, <55>, <9>, <7>, <5>.
sourcehandle: <7>, <50>, <49>, <7>.
start: <53>, <61>, <60>, <58>, <53>, <23>, <18>, <15>, <14>, <8>, <5>.
start_handler: <14>, <49>, <14>, <7>.
subs_attr: <19>, <27>, <25>, <24>, <23>, <22>, <21>, <19>.
transtable: <6>, <61>, <15>, <14>, <6>.
xfhandle: <6>, <39>, <38>, <37>, <36>, <12>, <11>.
xflev: <6>, <38>, <37>, <36>, <35>, <12>, <11>, <6>.