Use "> " instead of <verse> tag
Instead of writing my own.
These were never added when the tests were first created.
Output files checked in MS PowerPoint 2013 (Windows 10, VBox). No
corruption, and output as expected.
Make sure there are no empty slides in the pptx output. Because of the
way that slides were split, these could be accidentally produced by
comments after images.
When animations are added, there will be a way to add an empty slide
with either incremental lists or pauses.
Test outputs checked with MS PowerPoint (Office 2013, Windows 10,
VBox). Both files show the expected output and are not corrupted.
The name of the Lua script which is executed is made available in the
global Lua variable `PANDOC_SCRIPT_FILE`, both for Lua filters and
custom writers.
Closes: #4393
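For instance, a Lua filter could use it as follows (a minimal sketch; the `script_file` metadata key is made up for illustration):

    -- Record this script's own path in the document metadata.
    -- PANDOC_SCRIPT_FILE is the new global described above.
    function Meta (meta)
      meta.script_file = PANDOC_SCRIPT_FILE
      return meta
    end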
Verse marked up with "> " (as opposed to the <verse> tag) can now be
placed inside lists.
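For example (an illustrative Muse snippet; indentation per the Muse list syntax):

    - a list item
      > a verse line inside the item
      > another verse line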
The characters allowed before and after emphasis can be configured via
`#+pandoc-emphasis-pre` and `#+pandoc-emphasis-post`, respectively. This
makes it possible to change which strings are recognized as emphasized
text on a per-document or even per-paragraph basis. The allowed
characters must be given as a (Haskell) string:

    #+pandoc-emphasis-pre: "-\t ('\"{"
    #+pandoc-emphasis-post: "-\t\n .,:!?;'\")}["

If the argument cannot be read as a string, the default value is
restored.
Closes: #4378
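For example, to recognize emphasis only when delimited by square brackets (an illustrative snippet; the sample text is made up):

    #+pandoc-emphasis-pre: "["
    #+pandoc-emphasis-post: "]"
    [/emphasized/], but not /this/.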
Modify the PowerPoint tests to run all the tests with a
template (--reference-doc) as well. Because there are so many
interlocking pieces, bugs can pop up in weird places when using
templates, since a template changes how the writer builds its output
file.
For example, I recently discovered a bug in which speaker notes worked
fine and templating worked fine elsewhere, but templating with speaker
notes produced a file that would crash MS PowerPoint. That particular
bug was fixed, but this forces us to check for it with each new
change.
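Concretely, each test case is now rendered both with and without a reference doc, along the lines of (file names are illustrative):

    pandoc input.native -o plain.pptx
    pandoc input.native --reference-doc=template.pptx -o templated.pptx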
This is to make sure "i." starts a roman-numbered list,
instead of a list with the letter "i" (followed by "j", "k", ...).
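For example, markup like the following (illustrative) should yield a roman-numbered list:

    i. first item
    ii. second item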
This change is intended to preserve as much of the table content as
possible.
Closes #4360
This fixes bugs introduced in commit 4bfab8f04c105f111d8d4e1c3ed7f7b5c75dbd19.
Lists are now parsed in linear instead of exponential time.
The contents of block tags, such as <quote>, are parsed directly,
without being stored in a string and parsed with parseFromString.
Also fixed a bug: headers did not terminate lists.
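The no-reparsing idea, as a generic Parsec sketch (assumed names; not the actual pandoc code):

    import Text.Parsec
    import Text.Parsec.String (Parser)

    line :: Parser String
    line = many1 (noneOf "\n") <* newline

    -- before: collect the tag body as a String, then re-parse it
    quoteReparsed :: Parser [String]
    quoteReparsed = do
      _    <- string "<quote>" <* newline
      body <- manyTill anyChar (try (string "</quote>"))
      either (fail . show) return (parse (many line) "<quote>" body)

    -- after: parse the contents in place, until the closing tag
    quoteDirect :: Parser [String]
    quoteDirect =
      string "<quote>" *> newline *> manyTill line (try (string "</quote>"))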
Muse allows indentation to indicate quotation or alignment,
but only at the top level, not within a <quote> tag or a list.
This patch also simplifies the code by removing the museInQuote
and museInList fields from the state structure.
Headers and indented paragraphs are now only attempted at the
top level, instead of aborting the parse with guards.
Text::Amuse already explicitly requires it anyway.
Supporting block tags on the same line as their contents makes
it hard to combine closing-tag parsers with indentation parsers.
Being able to combine parsers is required for the no-reparsing
refactoring of the Muse reader.
Unlike the paragraph and <quote> tag parsers, the verse parser consumes
the trailing newline. For this reason, only three or more blank lines
can separate list items.
blockquote
Existing tests only checked this for paragraphs.
<verse> is a block tag and displayMath is an inline element.
Writing <verse> around displayMath could result in nested
<verse> tags.
A newline after whitespace now results in a softbreak
instead of a space.
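In native terms (illustrative), input like "foo" + space + newline + "bar" now parses as

    Para [Str "foo",SoftBreak,Str "bar"]

instead of

    Para [Str "foo",Space,Str "bar"]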
These are based on the reader tests, with some removed (where the
reader output was identical, based on different docx inputs). There
are still more to be added; in particular, tests for custom-styles
need to be added.
All golden docx files have been checked in MS Word
2013 (Windows). There is no corruption.
There is questionable output in the `tables` test: the three tables
seem to be joined. This will be addressed in a future commit, and
the golden docx file will be changed.
There is very little that is pptx-specific in these tests, so we
abstract out the basic testing function so it can be used for docx as
well. This should allow us to catch some errors in the docx writer
that slipped past the round-trip testing.
Fixes #2609.
This PR introduces the new-style section headings: `\section[my-header]{My Header}` -> `\section[title={My Header},reference={my-header}]`.
On top of this, the ConTeXt writer now supports the `--section-divs` option to write sections in the fenced style, with `\startsection` and `\stopsection`.
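With `--section-divs`, the output looks like this sketch:

    \startsection[title={My Header},reference={my-header}]
    ...
    \stopsection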
PowerPoint output checked in MS PowerPoint 2013 (Windows)
Tests added for:
- table of contents
- endnotes
- endnotes with table of contents
PowerPoint output checked in MS PowerPoint 2013 (Windows)
We had previously re-read the native file and converted it to
PowerPoint. But we have already done that in constructing the test
archive, so now we just convert the archive back to a bytestring and
write it to disk.
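The simplification amounts to something like this sketch (assuming the zip-archive API; the function name is illustrative):

    import qualified Data.ByteString.Lazy as BL
    import Codec.Archive.Zip (Archive, fromArchive)

    writePptx :: FilePath -> Archive -> IO ()
    writePptx path = BL.writeFile path . fromArchive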
This will allow us to rebuild the pptx files in the test dir more
easily if we make a change in the writer.
Previously we had tested certain properties of the output PowerPoint
slides. Corruption, though, comes as the result of a number of
interrelated issues in the output pptx archive. This is a new
approach, which compares the output of the PowerPoint writer with
files that we know (a) not to be corrupt and (b) to show the desired
output behavior (details below). This commit introduces three tests
using the new framework. More will follow.
The test procedure: given a native file and a pptx file, we generate a
pptx archive from the native file, and then test:
1. Whether the same files are in the two archives
2. Whether each of the contained xml files is the same. (We skip time
entries in `docProps/core.xml`, since these are derived from IO. We
just check to make sure that they're there in the same way in both
files.)
3. Whether each of the media files is the same.
Note that steps 2 and 3, though they compare multiple files, are one
test each, since the number of files depends on the input file (if
there is a failure, it will only report the first failed file
comparison in the test failure).
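A Haskell sketch of these checks, assuming the zip-archive package (illustrative, not the actual test code):

    import Codec.Archive.Zip (Archive, filesInArchive, findEntryByPath, fromEntry)
    import Data.List (sort)

    -- step 1: both archives contain exactly the same file names
    sameFileList :: Archive -> Archive -> Bool
    sameFileList a b = sort (filesInArchive a) == sort (filesInArchive b)

    -- steps 2 and 3: a given entry has identical bytes in both archives
    sameEntry :: FilePath -> Archive -> Archive -> Bool
    sameEntry path a b =
      (fromEntry <$> findEntryByPath path a) == (fromEntry <$> findEntryByPath path b)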