|
The change is in `rawLaTeXInline` in the LaTeX reader, but
it affects the markdown reader and other readers
that allow raw LaTeX.
Previously, a trailing `{}` was included for
unknown commands, but not for known commands.
However, `{}` is sometimes used to avoid a trailing
space after a command, and the chances that a `{}`
after a LaTeX command is not part of the command
are very small.
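For illustration (input invented; exact behavior depends on your
pandoc version), given markdown input such as
```
Typeset with \LaTeX{} and pandoc.
```
`\LaTeX{}` is now parsed as a single raw LaTeX inline that includes
the `{}`, rather than leaving a literal `{}` behind in the text.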
Closes #5439.
|
|
Closes #5545.
|
|
Closes #5549.
|
|
- With epub extensions, check for epub:type in addition to type.
- Fix problem with noteref parsing which caused block-level
content to be eaten with the noteref.
- Rename pAnyTag to pAny.
- Refactor note resolution.
|
|
add `onlySimpleTableCells` to `Text.Pandoc.Shared`
[API change]
This fixes an inconsistency in the HTML reader, which did not treat tables with `<p>` inside cells as simple.
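As a rough sketch of the idea (the actual signature and details in
`Text.Pandoc.Shared` may differ):
```haskell
import Text.Pandoc.Definition (Block (..))

-- A table counts as "simple" when every cell is empty or consists of
-- a single Plain or Para block, so a cell that is just a <p> still
-- counts as simple, while multi-block or nested content does not.
onlySimpleTableCells :: [[[Block]]] -> Bool
onlySimpleTableCells = all isSimpleCell . concat
  where
    isSimpleCell [Plain _] = True
    isSimpleCell [Para _]  = True
    isSimpleCell []        = True
    isSimpleCell _         = False
```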
|
|
Planning info is now always placed before the subtree contents.
Previously, the planning info was placed after the content if the
header's subtree was converted to a list, which happens with headers of
level 3 and higher by default.
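Illustrative ordering after this change (headline and date invented):
```
*** TODO Prepare the release
SCHEDULED: <2019-05-06 Mon>
Contents of the subtree now always come after the planning line.
```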
Fixes: #5494
|
|
Unknown export options are properly ignored and omitted from the output.
|
|
Closes #5493
|
|
Symbols like `\alpha` are output as plain, unemphasized text, not as math.
Fixes: #5483
|
|
Fixes: #5484
|
|
These seem to be needed for xelatex but not pdflatex.
Closes #5441.
|
|
1) Don't append `.html`
2) Add `wikilink` title
This mirrors the behavior of other wiki readers. Generally the
`.html` extension is not wanted. It may be important for
output to HTML in certain circumstances, but it can always
be added using a filter that matches on links with the title
`wikilink`.
Note that if you have a workflow that uses pandoc to convert
vimwiki to readable HTML pages, you may need to add such a
filter to reproduce the previous behavior.
Here is a filter that does the job:
```lua
function Link(el)
  if el.title == 'wikilink' then
    el.target = el.target .. ".html"
  end
  return el
end
```
Save this as `fixlinks.lua` and use with `--lua-filter fixlinks.lua`.
Closes #5414.
|
|
fixes #5416
|
|
According to the nbformat docs, this is supposed to be rendered
in every format. We don't do that, but we at least preserve it as
a raw block in markdown, so you can round-trip.
|
|
For input like:
::: {.cell}
---
:::
|
|
The nbformat spec says that when no format is specified,
the raw cell will be rendered in every markup format.
Pandoc doesn't have a construct that works this way,
so we just fall back to `html`.
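For example, when such a cell is written back out as pandoc markdown
(with the `raw_attribute` extension), it ends up as an HTML raw block,
roughly (contents invented):
````
```{=html}
<b>contents of the raw cell</b>
```
````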
|
|
When writing HTML5, the HTML writer adds a `data-` prefix
to nonstandard attributes. But the attributes are
represented in the AST without the `data-` prefix,
so we should strip it when reading HTML.
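For example (attribute name invented), HTML input like
```
<span data-foo="bar">text</span>
```
is now read as a Span carrying the attribute `foo="bar"`; the HTML5
writer then re-adds the `data-` prefix on output.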
Closes #5392.
|
|
Previously they would sometimes not work: e.g., when they
occurred in final paragraphs of lists that were originally
parsed as Plain and later converted using PlainToPara.
Closes #5368.
|
|
These are parsed as a Span with class `underline`, as with other readers.
|
|
We now include every output format. Pruning is handled by
`--ipynb-output=`.
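For example (invocation illustrative), to keep only the best output
for each cell when converting a notebook:
```
pandoc notebook.ipynb -f ipynb -t html --ipynb-output=best -o notebook.html
```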
|
|
We now handle even complex cell metadata in the Div's attributes.
Simple metadata fields are rendered as a plain string, and complex ones
as JSON.
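As a hypothetical illustration (exact attribute rendering may differ):
cell metadata like `{"name": "setup", "tags": ["hide"]}` would surface
on the cell's Div as a plain-string attribute `name="setup"` plus a
JSON-valued attribute such as `tags="[\"hide\"]"`.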
|
|
The haddock module header contains essentially the
same information, so the boilerplate is redundant and
just one more thing to get out of sync.
|
|
Add ReaderOptions parameter to yamlToMeta [API change].
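A minimal usage sketch, assuming the function is exported from
`Text.Pandoc.Readers.Markdown` and takes the options as its first
argument (the exact signature and module are assumptions, not taken
from this commit):
```haskell
{-# LANGUAGE OverloadedStrings #-}
import Text.Pandoc.Class (runIOorExplode)
import Text.Pandoc.Options (def)
import Text.Pandoc.Readers.Markdown (yamlToMeta)

main :: IO ()
main = do
  -- Parse a small YAML metadata snippet using default reader options.
  meta <- runIOorExplode $ yamlToMeta def "title: Example\n"
  print meta
```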
fixes #5272
|
|
This ensures that a figure containing a single image
is parsed as a pandoc "implicit figure" (i.e., a
Para with a single Image whose title attribute begins
with `fig:`). More complex figures will still be parsed
as divs.
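For instance, assuming HTML input (file name and caption invented),
a figure like
```
<figure>
  <img src="cat.jpg" alt="A cat" />
  <figcaption>A cat sitting on a mat</figcaption>
</figure>
```
now yields a Para with a single Image whose title begins with `fig:`.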
Closes #5321.
|
|
This module is one of the most opaque parts of the docx reader: it
deals with the fact that runs have non-nesting formatting, so we have
to figure out the nesting on the fly as we combine them.
We start adding comments so that new developers can understand and,
if necessary, modify this module. Specific function comments will be
added in the future, but this offers a high-level description of the
purpose of the module.
|
|
We have to add one final mempty when we're combining in order to trim
inlines appropriately. (We need to use our own trimming routines here
due to the way that formatted inlines are smushed together when
converting from docx.)
Closes #5273
|
|
Previously, parsing would break if the code block
contained a string of backticks of sufficient length
followed by something other than the end of the line.
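For instance (illustrative input, assuming markdown-style fenced code
blocks), the middle line here is a run of backticks followed by text,
so it is not a valid closing fence and should stay part of the block:
````
```
``` not a closing fence, since text follows
```
````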
Closes #5304.
|
|
The rid attribute can have a space-separated list of ids.
Closes #5310.
|
|
We had previously walked the document to unwrap sdt/sdtContent and
smartTag tags in `word/document.xml`, but not in
`word/footnotes.xml`, `word/endnotes.xml`, or `word/comments.xml`.
Closes #5302
|
|
even if a richer format is included.
We don't know what output format will be needed.
The fallback can always be weeded out using a filter.
Closes #5293.
|
|
see #5272
|
|
Some paths in archives are absolute (they begin with a slash), which,
for reasons unknown, produces a failure in the test suite on MS
Windows. This fixes that by removing the leading slash if it exists.
Closes #5277 (previously closed with 4cce0ef but reopened due to this bug).
|
|
This reverts commit 2142bbe572cea00b7bb5ad3e10a3afb26845a1f7.
|
|
Try fixing a parsing error on Windows by insisting that the parser use
the POSIX flavour of the filepath library for splitting doc paths in a
zipfile. (On Windows the default separator may be a backslash, while it
is always a forward slash in zip archives.)
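A minimal sketch of the idea (function name invented): paths inside a
zip archive always use `/`, so split them with the POSIX flavour of the
filepath library instead of the platform default:
```haskell
import qualified System.FilePath.Posix as Posix

-- Split an archive-internal path on '/' regardless of the host OS;
-- plain System.FilePath would split on '\\' on Windows.
splitArchivePath :: FilePath -> [FilePath]
splitArchivePath = Posix.splitDirectories
```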
|
|
* Clarify function name. We had previously used `getDocumentPath`,
but `Document` is an overdetermined term here. Use
`getDocumentXmlPath` to make clear what we're doing.
* Use field notation for setting ReaderEnv (see the sketch after this
list). As we've added (and continue to add) fields, assignment by
position has gotten harder to read.
* Figure out the document.xml path once at the beginning of parsing,
and add it to the environment, so we can avoid repeated lookups.
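A toy sketch of the field-notation point (field names hypothetical,
not pandoc's actual ReaderEnv):
```haskell
-- With field notation, adding a new field later does not silently
-- shift the meaning of positional constructor arguments.
data ReaderEnv = ReaderEnv
  { envNotes      :: [String]
  , envComments   :: [String]
  , envDocXmlPath :: FilePath
  } deriving Show

defaultEnv :: ReaderEnv
defaultEnv = ReaderEnv
  { envNotes      = []
  , envComments   = []
  , envDocXmlPath = "word/document.xml"
  }

main :: IO ()
main = print defaultEnv
```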
|