|
Some fields only have an instrText and no content. Pandoc didn't
understand these, causing other fields to be misinterpreted because it
seemed like a field was still open when it wasn't.
|
|
Fields delimited by fldChar elements can contain other fields. Before,
the nested fields would be ignored, except for the end, which would be
considered the end of the parent field.
To fix this issue, fields needed to be treated as containing ParParts
instead of Runs, since a Run can't represent sufficiently complex structures.
This also impacted Hyperlinks since they can originate from a field.
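A rough sketch of the shape this implies (the names here are illustrative,
not the reader's exact types): a field's body is a list of ParParts, so a
field can contain further fields, and a hyperlink can be built from a field.

```haskell
type URL        = String
type FieldInstr = String   -- the field's raw instrText

newtype Run = Run String
  deriving Show

data ParPart
  = PlainRun Run
  | Field FieldInstr [ParPart]        -- body is ParParts, so fields can nest
  | ExternalHyperLink URL [ParPart]   -- hyperlinks can originate from fields
  deriving Show
```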
|
|
When a paragraph has an indentation different from the parent (named)
style, it used to be considered a blockquote. But this only makes sense
when the paragraph has more indentation. So this commit adds a check
for the indentation of the parent style.
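A minimal sketch of the check, with assumed names (the real code reads these
values from the paragraph and parent style properties):

```haskell
-- indentation of the paragraph and of its (named) parent style;
-- Nothing means the value is not set.
isBlockQuote :: Maybe Double -> Maybe Double -> Bool
isBlockQuote paraIndent styleIndent =
  case (paraIndent, styleIndent) of
    (Just p, Just s)  -> p > s   -- must exceed the parent style's indentation
    (Just p, Nothing) -> p > 0   -- no style indentation: any positive indent
    _                 -> False
```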
|
|
|
|
The docx reader made a couple assumptions about how docx
containers were laid out that were not always true, with
the result that some images in documents did not get
found/extracted.
Closes #7511.
|
|
Closes #7374.
|
|
* Column spans
* Row spans
- The spec says that if the `val` attribute is omitted, its value
  should be assumed to be `continue`, and that its values are
  restricted to {`restart`, `continue`}. If the attribute has any other
  value, I think it's reasonable to default it to `continue` (see the
  sketch at the end of this message). This might cause problems if the
  spec is extended in the future by adding a third possible value, in
  which case this would probably give incorrect behaviour, and wouldn't
  error.
* Allow multiple header rows
* Include table description in simple caption
- The table description element is like alt text for a table (along
with the table caption element). It seems like we should include
this somewhere, but I’m not 100% sure how – I’m pairing it with the
simple caption for the moment. (Should it maybe go in the block
caption instead?)
* Detect table captions
- Check for caption paragraph style /and/ either the simple or
complex table field. This means the caption detection fails for
captions which don’t contain a field, as in an example doc I added
as a test. However, I think it’s better to be too conservative: a
missed table caption will still show up as a paragraph next to the
table, whereas if I incorrectly classify something else as a table
caption it could cause havoc by pairing it up with a table it’s
not at all related to, or dropping it entirely.
* Update tests and add new ones
Partially fixes: #6316
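A minimal sketch of the defaulting described above (names assumed; the
attribute in question is `w:vMerge`'s `val`):

```haskell
data VMerge = Restart | Continue
  deriving (Show, Eq)

readVMerge :: Maybe String -> VMerge
readVMerge (Just "restart") = Restart
readVMerge _                = Continue   -- absent or unrecognized: continue
```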
|
|
|
|
They represent images, in the same way as other images in VML format.
|
|
This gives a speedup of about 5-10%.
The reader is now approximately twice as fast as in the last
release.
|
|
..and add new definitions isomorphic to xml-light's, but with
Text instead of String. This allows us to keep most of the code in
existing readers that use xml-light, but avoid lots of unnecessary
allocation.
We also add versions of the functions from xml-light's
Text.XML.Light.Output and Text.XML.Light.Proc that operate
on our modified XML types, and functions that convert
xml-light types to our types (since some of our dependencies,
like texmath, use xml-light).
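Roughly, the new definitions mirror xml-light's but use Text throughout;
a simplified sketch (not the exact definitions):

```haskell
import Data.Text (Text)

data Content = Elem Element | Text CData | CRef Text
  deriving Show

data Element = Element
  { elName    :: QName
  , elAttribs :: [Attr]
  , elContent :: [Content]
  } deriving Show

data Attr = Attr { attrKey :: QName, attrVal :: Text }
  deriving Show

data QName = QName { qName :: Text, qURI :: Maybe Text, qPrefix :: Maybe Text }
  deriving Show

newtype CData = CData { cdData :: Text }
  deriving Show
```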
Update golden tests for docx and pptx.
OOXML test: Use `showContent` instead of `ppContent` in `displayDiff`.
Docx: Do a manual traversal to unwrap sdt and smartTag.
This is faster, and needed to pass the tests.
Benchmarks:
A = prior to 8ca191604dcd13af27c11d2da225da646ebce6fc (Feb 8)
B = as of 8ca191604dcd13af27c11d2da225da646ebce6fc (Feb 8)
C = this commit
| Reader | A | B | C |
| ------- | ----- | ------ | ----- |
| docbook | 18 ms | 12 ms | 10 ms |
| opml | 65 ms | 62 ms | 35 ms |
| jats | 15 ms | 11 ms | 9 ms |
| docx | 72 ms | 69 ms | 44 ms |
| odt | 78 ms | 41 ms | 28 ms |
| epub | 64 ms | 61 ms | 56 ms |
| fb2 | 14 ms | 5 ms | 4 ms |
|
|
This exports functions that use xml-conduit's parser to
produce an xml-light Element or [Content]. This allows
existing pandoc code to use a better parser without
much modification.
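A usage sketch; the entry point's module, name, and exact signature are
assumptions here, so it is passed in as a parameter rather than imported:

```haskell
import qualified Data.Text as T
import qualified Data.Text.Lazy as TL
import Text.XML.Light (Element)

-- The new entry point is assumed to look roughly like this:
-- lazy Text in, either an error message or an xml-light Element out.
type XMLParser = TL.Text -> Either T.Text Element

readPart :: XMLParser -> TL.Text -> Either String Element
readPart parseXMLElement raw =
  case parseXMLElement raw of
    Left msg -> Left ("XML parse error: " ++ T.unpack msg)
    Right el -> Right el
```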
The new parser is used in all places where xml-light's
parser was previously used. Benchmarks show a significant
performance improvement in parsing XML-based formats
(especially ODT and FB2).
Note that the xml-light types use String, so the
conversion from xml-conduit types involves a lot
of extra allocation. It would be desirable to
avoid that in the future by gradually switching
to using xml-conduit directly. This can be done
module by module.
The new parser also reports parse errors, which we surface
when possible.
A new constructor PandocXMLError has been added to
PandocError in T.P.Error [API change].
Closes #7091, which was the main stimulus.
These changes revealed the need for some updates to the tests:
the docbook-reader.docbook test lacked definitions for the
entities it used, so these have been added, and the docx golden
tests have been updated, because the new parser does not preserve
the order of attributes.
|
|
For security reasons, some legal firms delete the date from comments and
tracked changes.
* Make date optional (Maybe) in tracked changes and comments datatypes
* Add tests
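A minimal sketch of the datatype change, with assumed field names:

```haskell
import Data.Text (Text)

data TrackedChange = TrackedChange
  { changeAuthor :: Text
  , changeDate   :: Maybe Text   -- Nothing when the date has been stripped
  } deriving Show
```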
|
|
* Fix hlint suggestions, update hlint.yaml
Most suggestions were redundant brackets. Some required
LambdaCase.
The .hlint.yaml file had a small typo, and didn't ignore camelCase
suggestions in certain modules.
|
|
Fixes #6514
|
|
* Use implicit Prelude
The previous behavior was introduced as a fix for #4464. It seems that
this change alone did not fix the issue, and `stack ghci` and `cabal
repl` only work with GHC 8.4.1 or newer, as no custom Prelude is loaded
for these versions. Given this, it seems cleaner to revert to the
implicit Prelude.
* PandocMonad: remove outdated check for base version
Only base versions 4.9 and later are supported, the check for
`MIN_VERSION_base(4,8,0)` is therefore unnecessary.
* Always use custom prelude
Previously, the custom prelude was used only with older GHC versions, as
a workaround for problems with ghci. The ghci problems are resolved by
replacing package `base` with `base-noprelude`, allowing for consistent
use of the custom prelude across all GHC versions.
|
|
* Update copyright year
* Copyright: add notes for Lua and Jira modules
|
|
* Use concatMap instead of reimplementing it
* Replace an unnecessary multi-way if with a regular if
* Use sortOn instead of sortBy and comparing
* Use guards instead of lots of indents for if and else
* Remove redundant do blocks
* Extract common functions from both branches of maybe
Whenever both the Nothing and the Just branches of maybe apply the
same function, apply that function to the result of maybe instead.
* Use fmap instead of reimplementing it from maybe
* Use negative forms instead of negating the positive forms
* Use mapMaybe instead of mapping and then using catMaybes
* Use zipWith instead of mapping over the result of zip
* Use unwords instead of reimplementing it
* Use <$ instead of <$> and const
* Replace case of Bool with if and else
* Use find instead of listToMaybe and filter
* Use zipWithM instead of mapM and zip
* Inline lambda wrappers into the real functions
* We get zipWithM from Text.Pandoc.Writers.Shared
* Use maybe instead of fromMaybe and fmap
I'm not sure how this one slipped past me.
* Increase a bit of indentation
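Two of the rewrites above, illustrated on toy examples (the real call sites
differ):

```haskell
-- "Use fmap instead of reimplementing it from maybe":
bumpBefore, bumpAfter :: Maybe Int -> Maybe Int
bumpBefore mx = maybe Nothing (Just . (+ 1)) mx
bumpAfter     = fmap (+ 1)

-- "Use <$ instead of <$> and const":
labelBefore, labelAfter :: Functor f => f a -> f String
labelBefore xs = const "item" <$> xs
labelAfter  xs = "item" <$ xs
```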
|
|
PR #5884.
+ Use pandoc-types 1.20 and texmath 0.12.
+ Text is now used instead of String, with a few exceptions.
+ In the MediaBag module, some of the types using Strings
were switched to use FilePath instead (not Text).
+ In the Parsing module, new parsers `manyChar`, `many1Char`,
`manyTillChar`, `many1TillChar`, `many1Till`, `manyUntil`,
`manyUntilChar` have been added: these are like their
unsuffixed counterparts but pack some or all of their output
(see the sketch after this list).
+ `glob` in Text.Pandoc.Class still takes String since it seems
to be intended as an interface to Glob, which uses strings.
It seems to be used only once in the package, in the EPUB writer,
so that is not hard to change.
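A sketch of what the packing combinators presumably look like, specialized
to parsec's String-based Parser for brevity (the real ones are polymorphic
over the stream, and these definitions are assumptions):

```haskell
import Data.Text (Text)
import qualified Data.Text as T
import Text.Parsec
import Text.Parsec.String (Parser)

manyChar :: Parser Char -> Parser Text
manyChar = fmap T.pack . many

many1Char :: Parser Char -> Parser Text
many1Char = fmap T.pack . many1

manyTillChar :: Parser Char -> Parser a -> Parser Text
manyTillChar p end = T.pack <$> manyTill p end
```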
|
|
The left-to-right direction setting in docx is used in the spec only
for overriding an explicit right-to-left setting. We only process it
when it happens in a paragraph set with BiDi.
This is especially important for docs exported from Google Docs, which
explicitly (and unnecessarily) set "rtl=0" for every paragraph.
Closes: #5723
|
|
* [Docx Parser] Move style-parsing-specific code to a new module
* [Docx Writer] Re-use Readers.Docx.Parse.Styles for StyleMap
* [Docx Writer] Move Readers.Docx.StyleMap to Writers.Docx.StyleMap
It's never used outside of writer code, so it makes more sense to scope it under writers really.
|
|
Motivating issues: #5523, #5052, #5074
Style name comparisons are case-insensitive, since those are
case-insensitive in Word.
w:styleId will be used as the style name if w:name is missing (this
should only happen for malformed docx; it is kept as a fallback to
avoid failing altogether on such documents).
Block quote detection code moved from Docx.Parser to Readers.Docx.
Code styles, i.e. "Source Code" and "Verbatim Char" now honor style
inheritance
Docx Reader now honours "Compact" style (used in Pandoc-generated docx).
The side-effect is that "Compact" style no longer shows up in
docx+styles output. Styles inherited from "Compact" will still
show up.
Removed the obsolete list-item style from divsToKeep; it hadn't
really done anything for a while.
Add newtypes to differentiate between style names, ids, and
different style types (that is, paragraph and character styles).
Since docx style names can have spaces in them, and pandoc-markdown
classes can't, wherever a style name is used as a class name,
spaces are replaced with ASCII dashes `-` (see the sketch at the end
of this message).
Get rid of extraneous intermediate types carrying styleId information;
instead, styleId is saved with other style data.
Use RunStyle for inline style definitions only (lacking styleId and
styleName); for character styles use the CharStyle type (which is
basically RunStyle with styleId and styleName bolted onto it).
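A rough sketch of the newtype-plus-class-name idea (names assumed, not the
reader's exact definitions): comparisons fold case, and spaces become dashes
when a style name is used as a class.

```haskell
import Data.Text (Text)
import qualified Data.Text as T

newtype StyleId = StyleId Text
  deriving (Show, Eq)

newtype StyleName = StyleName Text
  deriving Show

-- style names compare case-insensitively, as they do in Word
instance Eq StyleName where
  StyleName a == StyleName b = T.toCaseFold a == T.toCaseFold b

-- when a style name becomes a pandoc class, spaces turn into ASCII dashes
toClassName :: StyleName -> Text
toClassName (StyleName name) = T.replace (T.pack " ") (T.pack "-") name
```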
|
|
Reduce code duplication, remove redundant brackets, use newtype instead of data where appropriate
|
|
Closes #5545.
|
|
The haddock module header contains essentially the
same information, so the boilerplate is redundant and
just one more thing to get out of sync.
|
|
We had previously walked the document to unwrap sdt/sdtContent and
smartTag tags in `word/document.xml`, but not in the
`word/{foot/end}note.xml` and `word/comments.xml`.
Closes #5302
|
|
Some paths in archives are absolute (have an opening slash) which, for
reasons unknown, produces a failure in the test suite on MS
Windows. This fixes that by removing the leading slash if it exists.
Closes #5277 (previously closed with 4cce0ef but reopened due to this bug).
|
|
This reverts commit 2142bbe572cea00b7bb5ad3e10a3afb26845a1f7.
|
|
Try fixing a parsing error on windows by insisting that the parser use
a Posix filepath library for splitting doc paths in a zipfile. (It
might default on Windows to using a backslash as a separator, while
it's always a forward-slash in zip archives.)
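A minimal sketch of the idea: entry paths inside a zip archive always use
forward slashes, so split them with the POSIX flavour of the filepath API
rather than the host's default (and, as in the fix above, drop any leading
slash):

```haskell
import qualified System.FilePath.Posix as Posix

-- split an archive entry path into components, regardless of host OS
archivePathPieces :: FilePath -> [FilePath]
archivePathPieces = Posix.splitDirectories . dropWhile (== '/')
```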
|
|
* clarify function name. We had previously used `getDocumentPath`,
but `Document` is an overdetermined term here. Use
`getDocumentXmlPath` to make clear what we're doing.
* Use field notation for setting ReaderEnv. As we've added (and
continue to add) fields, the assignment by position has gotten
harder to read.
* figure out document.xml path once at the beginning of parsing, and
add it to the environment, so we can avoid repeated lookups.
|
|
Getting the location used to depend on a hard-coded .rels file based
on "word/document.xml". We now dynamically detect that file based on
the document.xml path specified in "_rels/.rels".
|
|
The desktop Word program places the main document file in
"word/document.xml", but the online Word places it in
"word/document2.xml". This file path is actually stated in the root
"_rels/.rels" file, in the "Relationship" element with an
"http://../officedocument" type.
Closes #5277
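A rough sketch of that lookup using xml-light (the xml-light functions are
real; the function itself is an assumption, not the reader's exact code):

```haskell
import Data.Char (toLower)
import Data.List (isSuffixOf)
import Data.Maybe (listToMaybe)
import Text.XML.Light

-- given the parsed "_rels/.rels" element, return the main document part's path
getDocumentXmlPath :: Element -> Maybe FilePath
getDocumentXmlPath rels = do
  rel <- listToMaybe
    [ e
    | e <- filterElements ((== "Relationship") . qName . elName) rels
    , Just ty <- [findAttrBy ((== "Type") . qName) e]
    , "officedocument" `isSuffixOf` map toLower ty
    ]
  findAttrBy ((== "Target") . qName) rel
```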
|
|
For some reason, Word in Office 365 Online uses `document2.xml`
for the content, instead of `document.xml`. This prevented pandoc
from parsing the docx.
This quick fix has the parser check for both `document.xml`
and `document2.xml`.
Addresses #5277, but a more robust solution would be to
get the name of the main document dynamically (who knows
whether it might change again?).
|
|
Quite a few modules were missing copyright notices.
This commit adds copyright notices everywhere via haddock module
headers. The old license boilerplate comment is redundant with this and has
been removed.
Update copyright years to 2019.
Closes #4592.
|
|
There can be overrides for the definitions of certain levels in
numbering definitions. This implements that behavior.
Closes: #5134
|
|
|
|
It had previously been an alias for a tuple.
|
|
These are variants for "complex scripts" like Arabic and
are now treated just like b, i (bold, italic).
Closes #4947.
|
|
|
|
|
|
This seems to be necessary if we are to use our custom Prelude
with ghci.
Closes #4464.
|
|
|
|
Make unwrapSDT into a general `unwrap` function that can unwrap both
nested SDT tags and smartTags. This makes the SmartTags constructor in
the Docx type unnecessary, so we remove it.
Closes #4446
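A simplified sketch of the generalized unwrapping, over xml-light Contents
(not the reader's exact code): splice the contents of sdt/sdtContent and
smartTag wrappers into the surrounding element, recursing everywhere so
nested wrappers are handled too.

```haskell
import Text.XML.Light

unwrap :: Content -> [Content]
unwrap (Elem el)
  | qName (elName el) == "sdt"
  , Just content <- filterChildName ((== "sdtContent") . qName) el
  = concatMap unwrap (elContent content)
  | qName (elName el) == "smartTag"
  = concatMap unwrap (elContent el)
  | otherwise
  = [Elem (el { elContent = concatMap unwrap (elContent el) })]
unwrap c = [c]
```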
|
|
Previously we had only unwrapped one level of sdt tags. Now we recurse
if we find them.
Closes: #4415
|
|
|
|
We introduce a new module, Text.Pandoc.Readers.Docx.Fields which
contains a simple parsec parser. At the moment, only simple hyperlink
fields are accepted, but that can be extended in the future.
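A minimal sketch of such a parser (a simplified grammar with assumed names,
using parsec's String-based Parser):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

data FieldInfo = HyperlinkField String | UnknownField
  deriving Show

fieldInfo :: Parser FieldInfo
fieldInfo = try hyperlink <|> (UnknownField <$ many anyChar)

hyperlink :: Parser FieldInfo
hyperlink = do
  spaces
  _ <- string "HYPERLINK"
  spaces
  url <- between (char '"') (char '"') (many (noneOf "\""))
  return (HyperlinkField url)
```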
|
|
This will allow us to parse instrText inside fldChar tags.
|
|
|
|
This will tell us whether a paragraph break was inserted or
deleted. We add a generalized track-changes parsing function, and use
it in `elemToParPart` as well.
|
|
We're going to want to use it elsewhere as well, in upcoming tracking
of paragraph insertion/deletion.
|