Added support for horizontal spacing in LaTeX
Closes #3667.
Support for the `#+INCLUDE:` file inclusion mechanism was added.
Recognized include types are *example*, *export*, *src*, and normal Org
file inclusion. Advanced features like line numbers and level selection
are not implemented yet.
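For illustration, include lines of the following kinds are recognized (the
file names are made up):

    #+INCLUDE: "chapter1.org"
    #+INCLUDE: "code.hs" src haskell
    #+INCLUDE: "log-output.txt" example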
Closes: #3510
The `insertIncludeFiles` function was generalized and renamed to
`insertIncludedFiles'`; the specialized versions are based on that.
The `insertIncludeFile` function is generalized to work with all parser
states which are instances of that class.
Calling `tail` on an empty list raises an exception, while calling the
otherwise equivalent `drop 1` will return the empty list again.
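A quick GHCi illustration of the difference:

    ghci> drop 1 ([] :: [Int])
    []
    ghci> tail ([] :: [Int])
    *** Exception: Prelude.tail: empty list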
This follows the suggestions given by the FSF for GPL licensed software.
<https://www.gnu.org/prep/maintain/html_node/Copyright-Notices.html>
Copyright, maintainer, etc. were missing from the Haddock docs for this module.
The grid table parsers for markdown and rst were combined into a single
parser, slightly changing the parsing behavior of both:
- The markdown parser now compactifies block content cell-wise: pure
text blocks in cells are now treated as paragraphs only if the cell
contains multiple paragraphs, and as plain blocks otherwise. Before,
this was true only for single-column tables.
- The rst parser now accepts newlines and multiple blocks in header
cells.
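For illustration (the content is made up), a grid table of the following
shape, including the multi-line header cells, is now handled by both readers:

    +---------------+---------------+
    | Fruit         | Price         |
    | (per kg)      | (EUR)         |
    +===============+===============+
    | apples        | 1.50          |
    +---------------+---------------+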
Closes: #3638
It is required to trigger Muse table rendering.
This reduces code duplication.
We should be able to do something similar in ODT, Docx, EPUB writers.
Also generalized the type of fillMedia to any instance of PandocMonad.
Closes #3646.
Supporting two completely different libraries for fetching
from URLs makes it difficult to trap errors, because the
libraries produce different error types.
There's no clear reason not to build with these https-capable
libraries.
Report a warning instead and replace the image with its alt text.
If `--extract-media` is supplied with a non-binary input format,
pandoc will attempt to extract the contents of all linked images,
whether in local files, data: URIs, or external URIs.
They will be named based on the SHA1 hash of their contents.
Closes #1583, #2289.
Notes:
- One thing that is slightly suboptimal about this commit is that
identical resources will be downloaded multiple times. To improve
this we could have the media bag store the original filename/URL
alongside the new name.
- We might think about reusing some of this code, since more or less the
same thing is done in the Docx, EPUB, PDF writers (with slight
variations).
Previously we inadvertently interpreted indented HTML as
code blocks. This was a regression.
We now seek to determine the indentation level of the contents
of an HTML block, and (optionally) skip that much indentation.
As a side effect, indentation may be stripped off of raw
HTML blocks, if `markdown_in_html_blocks` is used. This
is better than having things interpreted as indented code
blocks.
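For illustration (made-up input), consider an HTML block whose contents are
indented:

    <div>
        *some markdown text*
    </div>

With `markdown_in_html_blocks` enabled, the shared indentation of the
contents is now skipped, so the inner line is no longer read as an indented
code block.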
Closes #1841.
This solves a problem with commented out `\end{eqnarray}` inside
an eqnarray (among other things).
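A minimal illustration of the kind of input involved (content is made up):

    \begin{eqnarray}
    a &=& b \\
    % \end{eqnarray}
    c &=& d
    \end{eqnarray}

Previously the commented-out \end{eqnarray} could be mistaken for the actual
end of the environment.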
Closes #3113.
* Fix keyval function: pandoc did not parse options in braces correctly. Additionally, dot, dash, and colon were not accepted as valid characters.
* Add | as a possible option value.
* Improved code.
This was left in accidentally.
Closes: #3401
This reverts commit 89b3fcc8e050def3779fed716d70bfd4e7120a6b.
We now avoid creating a data URI for the URL under an
@import.
This can happen e.g. with an @import of a Google web font.
(What is imported is some CSS which contains a URL reference
to the font itself.)
Also, allow an unescaped pipe (|) in URLs.
This is intended to help with #3629, but it doesn't seem to
work.
Closes #3637.
Closes #3314
The parsing functions `tableWith` and `gridTableWith` are generalized to
work with more parsers. The parser state only has to be an instance of
the `HasOptions` class instead of requiring a concrete type. Block
parsers are required to return blocks wrapped in a monad, as this
makes it possible to use parsers returning results wrapped in `Future`s.
Previously the Markdown writer would sometimes create links where there
were none in the source. This is now avoided by selectively escaping bracket
characters when they occur in a place where a link might be created.
Closes #3619.
Fixes #3630 (#3631).
Previously, the attributes in link reference definitions were not preceded by a space.
Use this instead of PandocIOError when a resource is not
found in the path.
This improves the error message in this case; see #3629.
Ensure that we do not generate reference links
whose labels differ only by case.
Also allow implicit reference links when the link
text and label are identical up to case.
Closes #3615.
The implicitly defined global filter (i.e., all element filtering
functions defined in the global Lua environment) is used if no filter is
returned from a Lua script. This makes it possible to define a Lua
filter by just writing top-level functions, e.g.

    function Emph(elem) return pandoc.Strong(elem.content) end
Attributes was written to behave much like a normal table, in order to
simplify working with it. However, all Attr-containing elements were
changed to provide panflute-like accessors to Attr components, rendering
the previous approach unnecessary.
Attributes are always passed as the last argument, making it possible to
omit them. The argument order for `Header` was wrong and has been fixed.
The `F` monads used for delayed evaluation of certain values in the
Markdown and Org readers are based on a shared data type capturing the
common pattern of both `F` types.
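A minimal sketch of that shared pattern (names and details here are
illustrative, not necessarily the exact definitions used): a delayed value is
a computation over the final parser state.

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.Reader (Reader, asks, runReader)

    -- A computation built up during parsing but only run once the final
    -- parser state is known.
    newtype Future s a = Future { runDelayed :: Reader s a }
      deriving (Functor, Applicative, Monad)

    -- Delay a lookup into the final state.
    askF :: (s -> a) -> Future s a
    askF = Future . asks

    -- Force the delayed computation against the final state.
    runF :: Future s a -> s -> a
    runF = runReader . runDelayed

Each reader's `F` can then be a specialization of this shared type to its
own parser state.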