Closes #5010.
Expose trimMath from T.P.Shared.
Closes #4637.
This fixes #4561, a bug parsing emphasized bare links in RST.
We previously accepted 'DDC' as 1100. Closes #4480.
This seems to be necessary if we are to use our custom Prelude
with ghci.
Closes #4464.
And a few tweaks related to the Semigroups/Monoid change.
Closes #4448.
The change both improves performance and fixes a
regression whereby normal citations inside inline notes
were not parsed correctly.
Closes jgm/pandoc-citeproc#315.
Previously `\( \frac{1}{a} < \frac{1}{b} \)` was not parsed as math
in `markdown` or `html` `+tex_math_single_backslash`.
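A quick way to check the behaviour, assuming the pandoc 2.x API (readMarkdown, enableExtension, and pandocExtensions from Text.Pandoc; exact option plumbing may differ by version):

    {-# LANGUAGE OverloadedStrings #-}
    import Text.Pandoc

    -- Enable tex_math_single_backslash on top of the default pandoc
    -- Markdown extensions and check that the snippet parses as math.
    main :: IO ()
    main = do
      let exts = enableExtension Ext_tex_math_single_backslash pandocExtensions
      doc <- runIOorExplode $
        readMarkdown def{ readerExtensions = exts }
          "\\( \\frac{1}{a} < \\frac{1}{b} \\)"
      print doc  -- expected: a Para containing a Math InlineMath element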
This fixes a bug where pandoc would stop parsing a URI with an
empty attribute: for example, `&a=&b=` would stop at `a`.
(The uri parser tries to guess which punctuation characters
are part of the URI and which might be punctuation after it.)
Closes #4068.
+ Added Ext_fenced_divs to Extensions (default for pandoc Markdown).
+ Document fenced_divs extension in manual.
+ Implemented fenced divs in the Markdown reader.
+ Added test.
Closes #168.
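A minimal sketch of the new syntax in use, assuming the pandoc 2.x API (exact option handling may differ by version):

    {-# LANGUAGE OverloadedStrings #-}
    import Text.Pandoc

    -- A fenced div: an attributed container delimited by lines of colons.
    -- With Ext_fenced_divs enabled (default in pandoc Markdown), this
    -- should parse to a Div block carrying the attributes.
    main :: IO ()
    main = do
      doc <- runIOorExplode $
        readMarkdown def{ readerExtensions = pandocExtensions }
          "::: {.warning}\nBe careful.\n:::\n"
      print doc  -- expected: a Div ("",["warning"],[]) wrapping a paragraph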
Previously pandoc would sometimes combine two line blocks separated by blanks, and ignore trailing blank lines within the line block.
The test is checked to be consistent with http://rst.ninjs.org/.
Closes #3511.
Previously pandoc used the four-space rule: continuation paragraphs,
sublists, and other block level content had to be indented 4
spaces. Now the indentation required is determined by the
first line of the list item: to be included in the list item,
blocks must be indented to the level of the first non-space
content after the list marker. Exception: if there are 5 or more spaces
after the list marker, then the content is interpreted as an
indented code block, and continuation paragraphs must be indented
two spaces beyond the end of the list marker. See the CommonMark
spec for more details and examples.
Documents that adhere to the four-space rule should, in most cases,
be parsed the same way by the new rules. Here are some examples
of texts that will be parsed differently:
- a
  - b
will be parsed as a list item with a sublist; under the four-space
rule, it would be a list with two items.
- a

      code
Here we have an indented code block under the list item, even though it
is only indented six spaces from the margin, because it is four spaces
past the point where a continuation paragraph could begin. With the
four-space rule, this would be a regular paragraph rather than a code
block.
- a

        code
Here the code block will start with two spaces, whereas under
the four-space rule, it would start with `code`. With the four-space
rule, indented code under a list item always must be indented eight
spaces from the margin, while the new rules require only that it
be indented four spaces from the beginning of the first non-space
text after the list marker (here, `a`).
This change was motivated by a slew of bug reports from people
who expected lists to work differently (#3125, #2367, #2575, #2210,
#1990, #1137, #744, #172, #137, #128) and by the growing prevalence
of CommonMark (now used by GitHub, for example).
Users who want to use the old rules can select the `four_space_rule`
extension.
* Added `four_space_rule` extension.
* Added `Ext_four_space_rule` to `Extensions`.
* `Parsing` now exports `gobbleAtMostSpaces`, and the type
of `gobbleSpaces` has been changed so that a `ReaderOptions`
parameter is not needed.
This is a utility function to use in list parsing.
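The definition is not shown here; the following is an illustrative Parsec sketch of the idea only (pandoc's real gobbleAtMostSpaces also expands tabs according to the tab stop, and its type differs):

    import Text.Parsec

    -- Illustrative only: consume at most n spaces and report how many
    -- were consumed, so a list parser can measure continuation indentation.
    gobbleAtMostSpaces :: Monad m => Int -> ParsecT String u m Int
    gobbleAtMostSpaces n
      | n <= 0    = return 0
      | otherwise = option 0 $ do
          _ <- char ' '
          rest <- gobbleAtMostSpaces (n - 1)
          return (1 + rest)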
This reverts commit e22dc98a70d030cc6b4056d14ddd6462c7790f97.
(Unnecessary type constraints.)
This rewrite is primarily motivated by the need to
get macros working properly. A side benefit is that the
reader is significantly faster (27s -> 19s in one
benchmark, and there is a lot of room for further
optimization).
We now tokenize the input text, then parse the token stream.
Macros modify the token stream, so they should now be effective
in any context, including math. Thus, we no longer need the clunky
macro processing capacities of texmath.
A custom state LaTeXState is used instead of ParserState.
This, plus the tokenization, will require some rewriting
of the exported functions rawLaTeXInline, inlineCommand,
rawLaTeXBlock.
* Added Text.Pandoc.Readers.LaTeX.Types (new exported module).
Exports Macro, Tok, TokType, Line, Column. [API change]
* Text.Pandoc.Parsing: adjusted type of `insertIncludedFile`
so it can be used with token parser.
* Removed old texmath macro stuff from Parsing.
Use Macro from Text.Pandoc.Readers.LaTeX.Types instead.
* Removed texmath macro material from Markdown reader.
* Changed types for Text.Pandoc.Readers.LaTeX's
rawLaTeXInline and rawLaTeXBlock. (Both now return a String,
and they are polymorphic in state.)
* Added orgMacros field to OrgState. [API change]
* Removed readerApplyMacros from ReaderOptions.
Now we just check the `latex_macros` reader extension.
* Allow `\newcommand\foo{blah}` without braces.
Fixes #1390.
Fixes #2118.
Fixes #3236.
Fixes #3779.
Fixes #934.
Fixes #982.
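To give a rough idea of the design, here is a hypothetical sketch of a token type and zero-argument macro expansion; the actual Tok, TokType, and Macro in Text.Pandoc.Readers.LaTeX.Types differ in detail:

    import qualified Data.Map as M
    import Data.Text (Text)

    -- Hypothetical sketch, not pandoc's code: the input is turned into
    -- positioned tokens, macros are stored as token sequences, and
    -- expansion rewrites the token stream before parsing.
    data TokType = CtrlSeq Text | Word | Symbol | Spaces | Newline | Comment
      deriving (Show, Eq)

    data Tok = Tok (Int, Int) TokType Text   -- (line, column), kind, raw text
      deriving (Show, Eq)

    data Macro = Macro { macroNumArgs :: Int, macroBody :: [Tok] }

    type MacroTable = M.Map Text Macro

    -- Expansion is a pure rewrite of the token stream, so it works the
    -- same way in every context, including math.
    expandOnce :: MacroTable -> [Tok] -> [Tok]
    expandOnce macros (Tok _ (CtrlSeq name) _ : rest)
      | Just m <- M.lookup name macros, macroNumArgs m == 0
          = macroBody m ++ expandOnce macros rest
    expandOnce macros (t : rest) = t : expandOnce macros rest
    expandOnce _ [] = []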
Previously positions would be reported past the end of the chunk.
We now reset the source position within the chunk and report
positions "in chunk."
By not checking for the end condition before the first parse, the
parser was applied too often, consuming too much of the input.
This fixes the behaviour of
`testStringWith (many1Till (oneOf "ab") (string "aa")) "aaa"`
which before incorrectly returned `Right "a"`. With this change, it
instead correctly fails with `Left (PandocParsecError ...)` because it
is not able to parse at least one occurrence of `oneOf "ab"` that is
not `"aa"`.
Note that this only affects `many1Till p end` where `p` matches on a
prefix of `end`.
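A self-contained illustration of the corrected behaviour (a sketch, not pandoc's exact definition):

    import Control.Monad (void)
    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- Check the end condition before the first parse as well, so that p
    -- can never consume a prefix of end.
    many1TillChecked :: Parser a -> Parser b -> Parser [a]
    many1TillChecked p end = do
      notFollowedBy (void end)
      first <- p
      rest  <- manyTill p end
      return (first : rest)

    main :: IO ()
    main = do
      -- With the fix, this fails instead of returning Right "a":
      print (parse (many1TillChecked (oneOf "ab") (string "aa")) "" "aaa")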
Closes #1718.
Parsing.ParserState: Make stateNotes' a Map, add stateNoteRefs.
This is a version of parseFromString specialized to
ParserState, which resets stateLastStrPos at the end.
This is almost always what we want.
This fixes a bug where `_hi_` wasn't treated as emphasis in
the following, because pandoc got confused about the
position of the last word:
- [o] _hi_
Closes #3690.
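Roughly, the helper saves and restores the last-string position around the inner parse. A sketch using names from Text.Pandoc.Parsing; the actual definition may differ:

    import Text.Pandoc.Parsing

    -- Sketch only: run the inner parser on a string, then restore
    -- stateLastStrPos so emphasis detection in the surrounding text is
    -- not confused by positions from inside the chunk.
    parseFromString' :: Monad m
                     => ParserT String ParserState m a
                     -> String
                     -> ParserT String ParserState m a
    parseFromString' parser str = do
      oldPos <- stateLastStrPos <$> getState
      result <- parseFromString parser str
      updateState $ \st -> st{ stateLastStrPos = oldPos }
      return result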
We also export the set of known `schemes`.
The new function replaces the function of the same name
from `Network.URI`, as the latter did not check whether a scheme is
well-known. E.g. MediaWiki wikis frequently feature pages with names
like `User:John`. These links were interpreted as URIs, thus turning
internal links into global links. This is prevented by also checking
whether the scheme of a URI is frequently used (i.e. is IANA registered
or an otherwise well-known scheme).
Fixes: #2713
Update set of well-known URI schemes from the IANA list
All official IANA schemes (as of 2017-05-22) are included in the set of
known schemes. The four non-official schemes doi, isbn, javascript, and
pmid are kept.
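For illustration only (the real function and scheme set live in pandoc and are not reproduced here), the check amounts to whitelisting schemes; names below are hypothetical:

    import qualified Data.Set as Set
    import Data.Char (toLower)

    -- A stub subset of known schemes, including the four extra ones.
    knownSchemes :: Set.Set String
    knownSchemes = Set.fromList
      ["http", "https", "ftp", "mailto", "doi", "isbn", "javascript", "pmid"]

    -- Accept a string as a URI only when its scheme is well known.
    looksLikeURI :: String -> Bool
    looksLikeURI s = case break (== ':') s of
      (scheme, ':' : _) -> map toLower scheme `Set.member` knownSchemes
      _                 -> False

    -- looksLikeURI "https://example.com" == True
    -- looksLikeURI "User:John"           == False ("user" is not registered)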
Move anyLineNewline to Parsing.hs
The `insertIncludeFiles` function was generalized and renamed to
`insertIncludedFiles'`; the specialized versions are based on that.
The `insertIncludeFile` function is generalized to work with all parser
states which are instances of that class.
Calling `tail` on an empty list raises an exception, while calling the
otherwise equivalent `drop 1` will return the empty list again.
This follows the suggestions given by the FSF for GPL licensed software.
<https://www.gnu.org/prep/maintain/html_node/Copyright-Notices.html>
The grid table parsers for Markdown and RST were combined into a single
parser, slightly changing the parsing behavior of both:
- The markdown parser now compactifies block content cell-wise: pure
text blocks in cells are now treated as paragraphs only if the cell
contains multiple paragraphs, and as plain blocks otherwise. Before,
this was true only for single-column tables.
- The rst parser now accepts newlines and multiple blocks in header
cells.
Closes: #3638
The parsing functions `tableWith` and `gridTableWith` are generalized to
work with more parsers. The parser state only has to be an instance of
the `HasOptions` class instead of requiring a concrete type. Block
parsers are required to return blocks wrapped into a monad, as this
makes it possible to use parsers returning results wrapped in `Future`s.
The `F` monads used for delayed evaluation of certain values in the
Markdown and Org readers are based on a shared data type capturing the
common pattern of both `F` types.
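A sketch of the shared type, assuming it is essentially a reader over the final parser state; the real definition in Text.Pandoc.Parsing may differ in detail:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.Reader (Reader, runReader)

    -- Sketch of the shared delayed-evaluation type: a value that can only
    -- be produced once the final parser state (with all collected link
    -- references, notes, etc.) is available.
    newtype Future s a = Future { runFuture :: Reader s a }
      deriving (Functor, Applicative, Monad)

    -- The Markdown and Org readers then specialize it, e.g.:
    --   type F = Future ParserState
    runF :: Future s a -> s -> a
    runF = runReader . runFuture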
This avoids parsing bare URIs that start with a scheme
+ colon + `*`, `_`, or `]`.
Closes #3570.
Closes #1905.
Removed stateChapters from ParserState.
Now we parse chapters as level 0 headers, and parts as level -1 headers.
After parsing, we check for the lowest header level, and if it's
less than 1 we bump everything up so that 1 is the lowest header level.
So `\part` will always produce a header; no command-line options
are needed.
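The post-parse adjustment might look roughly like this (an illustrative sketch, not the reader's actual code):

    import Text.Pandoc.Definition
    import Text.Pandoc.Walk (query, walk)

    -- Illustrative sketch: find the minimum header level; if it is below
    -- 1, shift every header up so the lowest level becomes 1.
    bumpHeaderLevels :: Pandoc -> Pandoc
    bumpHeaderLevels doc =
      let levels = query getLevel doc
          getLevel (Header n _ _) = [n]
          getLevel _              = []
          shift = case levels of
                    [] -> 0
                    _  -> max 0 (1 - minimum levels)
          bump (Header n attr ils) = Header (n + shift) attr ils
          bump b                   = b
      in walk bump doc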
As noted in the previous commit, an autogenerated identifier
may still coincide with an explicit identifier that is given
for a header later in the document, or with an identifier on
a div, span, link, or image. This commit adds a warning
in this case, so users can supply an explicit identifier.
* Added `DuplicateIdentifier` to LogMessage.
* Modified HTML, Org, MediaWiki readers so their custom
state type is an instance of HasLogMessages. This is necessary
for `registerHeader` to issue warnings.
See #1745.