Added TikiWiki reader, including tests and documentation.
It's probably not *complete*, but it works pretty well and handles all
the basics (and some not-so-basics).
Workaround for different behavior of Data.Unique in different ghc
versions.
I'm hoping this gives reproducible results on ghc 8.2.1.
Rewrote the LaTeX reader.
This rewrite is primarily motivated by the need to
get macros working properly. A side benefit is that the
reader is significantly faster (27s -> 19s in one
benchmark, and there is a lot of room for further
optimization).
We now tokenize the input text, then parse the token stream.
Macros modify the token stream, so they should now be effective
in any context, including math. Thus, we no longer need the clunky
macro-processing capabilities of texmath.
A custom state LaTeXState is used instead of ParserState.
This, plus the tokenization, will require some rewriting
of the exported functions rawLaTeXInline, inlineCommand,
rawLaTeXBlock.
* Added Text.Pandoc.Readers.LaTeX.Types (new exported module).
Exports Macro, Tok, TokType, Line, Column. [API change]
* Text.Pandoc.Parsing: adjusted type of `insertIncludedFile`
so it can be used with the token parser.
* Removed old texmath macro stuff from Parsing.
Use Macro from Text.Pandoc.Readers.LaTeX.Types instead.
* Removed texmath macro material from Markdown reader.
* Changed types for Text.Pandoc.Readers.LaTeX's
rawLaTeXInline and rawLaTeXBlock. (Both now return a String,
and they are polymorphic in state.)
* Added orgMacros field to OrgState. [API change]
* Removed readerApplyMacros from ReaderOptions.
Now we just check the `latex_macros` reader extension.
* Allow `\newcommand\foo{blah}` without braces.
Fixes #1390.
Fixes #2118.
Fixes #3236.
Fixes #3779.
Fixes #934.
Fixes #982.
Added a data directory parameter to runLuaFilter
in Text.Pandoc.Lua. Also to pushPandocModule.
This change allows users to override pandoc.lua with a file
in their local data directory, adding custom functions, etc.
@tarleb, if you think this is a bad idea, you can revert this.
But in general our data files are all overridable.
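For example, a hypothetical helper could be appended to a local copy of
pandoc.lua (purely illustrative; the name `M` for the module table and
the file's overall layout are assumptions about data/pandoc.lua):

    -- added near the end of a local data/pandoc.lua, before the
    -- module table is returned
    function M.str_upper(s)
      -- illustrative helper: build an upper-cased Str inline
      return M.Str(string.upper(s))
    end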
* HTML reader: Use the lang value of <html> to set the lang meta value.
* Fix for pre-AMP environments.
Do not parse three dashes as a horizontal rule, and allow whitespace
after a rule.
* New module Text.Pandoc.Readers.Vimwiki, exporting readVimwiki [API change].
* New input format `vimwiki`.
* New data file, `data/vimwiki.css`, for displaying the HTML produced by this reader and pandoc's HTML writer in the style of vimwiki's own HTML export.
This reverts commit 0ab26ac9ebb0196691ec064820eac4e640f0d52c.
Failed experiment.
* XML.toEntities: changed type to Text -> Text.
* Shared.tabFilter -- fixed so it strips out CRs as before.
* Modified writers to take Text.
* Updated tests, benchmarks, trypandoc.
[API change]
Closes #3731.
Readers: Renamed StringReader -> TextReader.
Updated tests.
API change.
Tags are appended to headlines by default, but will be omitted when the
`tags` export option is set to nil.
Closes: #3713
The Emacs default is to include tags in the headline when exporting.
Instead of producing empty spans that merely carry the tag name as an
attribute, tags are now rendered as small caps and wrapped in those
spans. Non-breaking spaces serve as separators for multiple tags.
Until now, org-ref cite keys could also include special characters at
the end. This caused problems when citations occur right before colons
or at the end of a sentence.
With this change, all non-alphanumeric characters at the end of a cite
key are ignored.
This also adds `,` to the list of special characters that are legal in
cite keys, to better mirror the behaviour of org-export.
Emacs parses org documents into a tree structure, which is then
post-processed during exporting. The reader is changed to do the same,
turning the document into a single tree of headlines starting at
level 0.
Fixes: #3695
Parsing of smart quotes and special characters can be enabled either
via the `smart` language extension or via the `'` and `-` export
options. Smart parsing is active if either the extension or the export
option is enabled.
Only smart parsing of special characters (like ellipses and en and em
dashes) is enabled by default, while smart quotes are disabled.
This means that all smart parsing features will be enabled by adding the
`smart` language extension. Fine-grained control is possible by leaving
the language extension disabled. In that case, smart parsing is
controlled via the aforementioned export OPTIONS only.
Previously, all smart parsing was disabled unless the language extension
was enabled.
It is required to trigger Muse table rendering.
Closes: #3401
The implicitly defined global filter (i.e. all element filtering
functions defined in the global lua environment) is used if no filter
is returned from a lua script. This makes it possible to define a lua
filter just by writing top-level functions, e.g.
function Emph(elem) return pandoc.Strong(elem.content) end
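A slightly fuller sketch of such a global filter (the file name and the
specific functions are illustrative; the constructors come from the
pandoc lua module):

    -- caps.lua: nothing is returned, so the top-level functions
    -- below form the implicit global filter.
    function Emph(elem)
      -- turn emphasis into strong emphasis
      return pandoc.Strong(elem.content)
    end

    function Str(elem)
      -- upper-case every string element
      return pandoc.Str(string.upper(elem.text))
    end

It could then be applied with `pandoc --lua-filter caps.lua`.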
The reader now correctly parses src block parameter lists even if
parameter arguments contain multiple words.
Closes: #3477
Source block parameter names are no longer prefixed with *rundoc*. This
was intended to simplify working with the rundoc project, a babel
runner. However, the rundoc project is unmaintained, and adding those
markers is not the reader's job anyway.
The language originally specified for a source element is now retained
in the `data-org-language` attribute, which is added only if it differs
from the translated language.
The line-numbering switch that can be given to source blocks (`-n` with
a start number as an optional parameter) is parsed and translated to a
class/key-value combination used by highlighting and other readers and
writers.
Closes: #3577
Closes: #3576
Allow functions named `SingleQuoted`, `DoubleQuoted`, `DisplayMath`,
and `InlineMath` to be used in filters.
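For instance (a sketch only; `pandoc.Quoted` and `pandoc.Code` are
constructors from the pandoc lua module, and the `content`/`text`
fields are the usual Quoted and Math attributes):

    -- turn double quotes into single quotes
    function DoubleQuoted(elem)
      return pandoc.Quoted("SingleQuote", elem.content)
    end

    -- render display math as inline code instead
    function DisplayMath(elem)
      return pandoc.Code(elem.text)
    end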
Pandoc elements are pushed onto and pulled from the lua stack via
custom instances.
Plain text readers are exposed to lua scripts via the `pandoc.reader`
submodule, which is further subdivided by format. Converting e.g. a
markdown string into a pandoc document is possible from within lua:
doc = pandoc.reader.markdown.read_doc("Hello, World!")
A `read_block` convenience function is provided for all formats; it
still parses the whole string but returns only the first block as the
result.
Custom reader options are not supported yet; default options are used
for all parsing operations.
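A brief sketch of the `read_block` helper (building on the read_doc
example above; error handling is omitted):

    -- parse two paragraphs, but keep only the first block
    local blk = pandoc.reader.markdown.read_block("First.\n\nSecond.")
    -- parse a complete document
    local doc = pandoc.reader.markdown.read_doc("# Hello\n\nWorld!")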
* New module Text.Pandoc.Writers.JATS, exporting writeJATS.
* New output format `jats`.
* Added tests.
* Revised manual.
* New module: Text.Pandoc.Writers.Ms.
* New template: default.ms.
* The writer uses texmath's new eqn writer to convert math
to eqn format, so a ms file produced with this writer
should be processed with `groff -ms -e` if it contains
math.
* Add `--lua-filter` option. This works like `--filter` but takes
pathnames of special lua filters and uses the lua interpreter baked
into pandoc, so that no external interpreter is needed. Note that lua
filters are all applied after regular filters, regardless of their
position on the command line. (See the sketch after this list.)
* Add Text.Pandoc.Lua, exporting `runLuaFilter`. Add `pandoc.lua` to
data files.
* Add private module Text.Pandoc.Lua.PandocModule to supply the default
lua module.
* Add Tests.Lua to tests.
* Add data/pandoc.lua, the lua module pandoc imports when processing
its lua filters.
* Document in MANUAL.txt.
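A minimal end-to-end sketch (the file name and its contents are
illustrative): a script that returns a list of filters, here a single
filter turning strong emphasis into small caps:

    -- smallcaps.lua
    return {
      {
        Strong = function (elem)
          return pandoc.SmallCaps(elem.content)
        end,
      }
    }

It would be applied with something like
`pandoc --lua-filter smallcaps.lua -f markdown -t html input.md`.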