Age | Commit message | Author | Files | Lines |
|
This rewrite is primarily motivated by the need to
get macros working properly. A side benefit is that the
reader is significantly faster (27s -> 19s in one
benchmark, and there is a lot of room for further
optimization).
We now tokenize the input text, then parse the token stream.
Macros modify the token stream, so they should now be effective
in any context, including math. Thus, we no longer need the clunky
macro-processing capabilities of texmath. (A minimal sketch of the
token-level approach appears at the end of this entry.)
A custom state LaTeXState is used instead of ParserState.
This, plus the tokenization, will require some rewriting
of the exported functions rawLaTeXInline, inlineCommand,
rawLaTeXBlock.
* Added Text.Pandoc.Readers.LaTeX.Types (new exported module).
Exports Macro, Tok, TokType, Line, Column. [API change]
* Text.Pandoc.Parsing: adjusted type of `insertIncludedFile`
so it can be used with token parser.
* Removed old texmath macro stuff from Parsing.
Use Macro from Text.Pandoc.Readers.LaTeX.Types instead.
* Removed texmath macro material from Markdown reader.
* Changed types for Text.Pandoc.Readers.LaTeX's
rawLaTeXInline and rawLaTeXBlock. (Both now return a String,
and they are polymorphic in state.)
* Added orgMacros field to OrgState. [API change]
* Removed readerApplyMacros from ReaderOptions.
Now we just check the `latex_macros` reader extension.
* Allow `\newcommand\foo{blah}` without braces.
Fixes #1390.
Fixes #2118.
Fixes #3236.
Fixes #3779.
Fixes #934.
Fixes #982.
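By way of illustration, here is a minimal, self-contained sketch of the
token-level macro expansion described above; the Tok and Macro types
below are illustrative stand-ins, not the actual types exported from
Text.Pandoc.Readers.LaTeX.Types:

    import qualified Data.Map as M

    -- Illustrative token type (the real Tok/TokType are richer).
    data Tok = Word String | CtrlSeq String | Sym Char
      deriving (Show, Eq)

    -- A zero-argument macro is just a replacement token sequence.
    type Macro = [Tok]

    -- Expand macros by rewriting the token stream.  Because this happens
    -- before parsing, the same expansion applies in text and math mode.
    -- (The real implementation also handles arguments and guards against
    -- infinitely recursive definitions.)
    expandMacros :: M.Map String Macro -> [Tok] -> [Tok]
    expandMacros _ [] = []
    expandMacros ms (t@(CtrlSeq name) : rest) =
      case M.lookup name ms of
        Just replacement -> expandMacros ms (replacement ++ rest)
        Nothing          -> t : expandMacros ms rest
    expandMacros ms (t : rest) = t : expandMacros ms rest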
|
|
Added Text.Pandoc.BCP47 (unexported, internal module), exporting
`getLang`, `Lang(..)`, and `parseBCP47`.
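For illustration only, a drastically simplified BCP47 parser might look
like the following; the Lang fields and the Maybe result here are
hypothetical and not necessarily the internal module's real API:

    -- Split a tag like "en-GB" into a primary language subtag plus
    -- any remaining subtags.
    data Lang = Lang
      { langLanguage :: String
      , langSubtags  :: [String]
      } deriving Show

    parseBCP47 :: String -> Maybe Lang
    parseBCP47 tag =
      case subtags tag of
        (l:rest) | not (null l) -> Just (Lang l rest)
        _                       -> Nothing
      where
        subtags s = case break (== '-') s of
          (a, [])     -> [a]
          (a, _:more) -> a : subtags more

    -- parseBCP47 "de-AT" == Just (Lang {langLanguage = "de", langSubtags = ["AT"]})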
|
|
* New module Text.Pandoc.Readers.Vimwiki, exporting readVimwiki [API change]. (A brief usage sketch follows this list.)
* New input format `vimwiki`.
* New data file, `data/vimwiki.css`, for displaying the HTML produced by this reader and pandoc's HTML writer in the style of vimwiki's own HTML export.
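A rough usage sketch, assuming readVimwiki follows the usual
ReaderOptions-plus-input shape of pandoc's other readers (the exact
input type may differ by version):

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Default (def)
    import Text.Pandoc.Class (runIOorExplode)
    import Text.Pandoc.Readers.Vimwiki (readVimwiki)

    main :: IO ()
    main = do
      -- Parse a small vimwiki snippet with default reader options.
      ast <- runIOorExplode $ readVimwiki def "= Heading =\n\nSome *bold* text."
      print ast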
|
|
GHC 8.2 is very likely to ship with process-1.6.0.0
and time-1.8.0.1.
Consult:
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Libraries/VersionHistory
|
|
It was required when we used hsb2hs, but it no longer seems
needed with file-embed.
|
|
The old code made some unwise assumptions about
how the svg file would look.
See #3580.
|
|
Support for the `#+INCLUDE:` file inclusion mechanism was added.
Recognized include types are *example*, *export*, *src*, and normal org
file inclusion. Advanced features like line numbers and level selection
are not implemented yet.
Closes: #3510
|
|
Supporting two completely different libraries for fetching
from URLs makes it difficult to trap errors, because the two
libraries throw different error types.
There's no clear reason not to build with these https-capable
libraries.
|
|
Writer helper functions were defined in the top-level Text.Pandoc
module. These functions are moved to the Writers submodule so that
they can be reused in other submodules.
|
|
Reader helper functions were defined in the top-level Text.Pandoc
module. These functions are moved to the Readers submodule so that
they can be reused in other submodules.
|
|
Allow functions named `SingleQuoted`, `DoubleQuoted`,
`DisplayMath`, and `InlineMath` to be used in filters.
|
|
The lua filter and custom lua writer systems defined very similar
StackValue instances for strings and tuples. These instance definitions
are extracted into a separate module to enable sharing.
|
|
Pushing values to the lua stack via custom functions is faster and more
flexible.
|
|
Lua module: add readers submodule
|
|
This reverts commit 1fa15c225b515e1fa1c6566f90f1be363a4d770f.
|
|
These are caught (and lead to exit) in pandoc.hs, but
other uses of Text.Pandoc.App may want to recover in another
way.
Added PandocAppError to PandocError (API change).
This is a stopgap: later we should have a separate constructor
for each type of error.
Also fixed uses of 'exit' in Shared.readDataFile, and
removed 'err' from Shared (API change).
Finally, removed the dependency on extensible-exceptions.
See #3548.
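For example, a caller of the library can now inspect the error value
and carry on instead of exiting (a sketch only; how the Either value is
obtained depends on which Text.Pandoc.App or runIO call the caller uses):

    import System.IO (hPutStrLn, stderr)
    import Text.Pandoc.Error (PandocError)

    -- Recover from a pandoc error instead of exiting.
    recover :: Either PandocError a -> IO (Maybe a)
    recover (Right x) = return (Just x)
    recover (Left e)  = do
      hPutStrLn stderr ("recovered from pandoc error: " ++ show e)
      return Nothing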
|
|
Plain text readers are exposed to lua scripts via the `pandoc.reader`
submodule, which is further subdivided by format. Converting e.g. a
markdown string into a pandoc document is possible from within lua:
    doc = pandoc.reader.markdown.read_doc("Hello, World!")
A `read_block` convenience function is provided for all formats;
it still parses the whole string but returns only the first
block as the result.
Custom reader options are not supported yet; default options are used
for all parsing operations.
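In Haskell terms, `read_block` behaves roughly like taking the head of
the parsed document's block list:

    import Text.Pandoc.Definition (Pandoc(..), Block)

    -- What read_block effectively returns: the first block of the
    -- fully parsed document, if any.
    firstBlock :: Pandoc -> Maybe Block
    firstBlock (Pandoc _ (b:_)) = Just b
    firstBlock _                = Nothing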
|
|
Also include a sample, `default.theme`, in `data/`.
|
|
This way people can do
    pandoc -s -t jats --filter pandoc-citeproc
and it will just work. If they want to specify a stylesheet,
they still can.
|
|
* New module Text.Pandoc.Writers.JATS exporting writeJATS. (A brief usage sketch follows this list.)
* New output format `jats`.
* Added tests.
* Revised manual.
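A rough usage sketch, assuming writeJATS takes WriterOptions and a
Pandoc like pandoc's other writers (the exact output type may differ
by version):

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Default (def)
    import Text.Pandoc.Builder (doc, para, text)
    import Text.Pandoc.Class (runIOorExplode)
    import Text.Pandoc.Writers.JATS (writeJATS)

    main :: IO ()
    main = do
      -- Render a one-paragraph document as JATS XML.
      jats <- runIOorExplode $ writeJATS def (doc (para (text "Hello, JATS!")))
      print jats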
|
|
This is copied from Martin Fenner's pandoc-jats project:
https://github.com/mfenner/pandoc-jats
|
|
Otherwise builds fail.
|
|
This reverts commit 10d91c147968d2e4d63b99b5b0342624827f416f.
|
|
I think template haskell is robust enough now across platforms
that this will work.
Motivation: file-embed gives us better dependency tracking: if a data
file changes, ghc/stack/cabal know to recompile the Data module.
This also removes hsb2hs as a build dependency.
|
|
The 0.5.0 release of hslua fixes problems with lua C modules on linux.
The signature of the `loadstring` function changed, so a compatibility
wrapper is introduced to allow both 0.4.* and 0.5.* versions to be used.
|
|
    pandoc -t ms -o output.pdf input.txt
|
|
* New module: Text.Pandoc.Writers.Ms.
* New template: default.ms.
* The writer uses texmath's new eqn writer to convert math
to eqn format, so a ms file produced with this writer
should be processed with `groff -ms -e` if it contains
math.
|
|
This was caught by our new .travis.yml, which builds from
an extracted sdist tarball instead of the repository.
|
|
* Add `--lua-filter` option. This works like `--filter` but takes pathnames of special lua filters and uses the lua interpreter baked into pandoc, so that no external interpreter is needed. Note that lua filters are all applied after regular filters, regardless of their position on the command line.
* Add Text.Pandoc.Lua, exporting `runLuaFilter`. Add `pandoc.lua` to data files.
* Add private module Text.Pandoc.Lua.PandocModule to supply the default lua module.
* Add Tests.Lua to tests.
* Add data/pandoc.lua, the lua module pandoc imports when processing its lua filters.
* Document in MANUAL.txt.
|
|
This contains a list of strings that will be recognized by pandoc's
Markdown parser as abbreviations. (A nonbreaking space will
be inserted after the period, preventing a sentence space in
formats like LaTeX.)
Users can override the default by putting a file named `abbreviations`
in their user data directory (`~/.pandoc` on *nix).
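A minimal sketch of the described behavior (not pandoc's actual
implementation): after a word that matches a known abbreviation, emit a
nonbreaking space instead of a normal one.

    import qualified Data.Set as Set

    -- Join words, using a nonbreaking space (U+00A0) after any word that
    -- is a recognized abbreviation, and a normal space otherwise.
    fixAbbreviations :: Set.Set String -> [String] -> String
    fixAbbreviations _ []  = ""
    fixAbbreviations _ [w] = w
    fixAbbreviations abbrs (w:ws)
      | w `Set.member` abbrs = w ++ "\160" ++ fixAbbreviations abbrs ws
      | otherwise            = w ++ " " ++ fixAbbreviations abbrs ws

    -- fixAbbreviations (Set.fromList ["Mr.", "Dr."]) ["Mr.", "Smith"]
    --   == "Mr.\160Smith"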
|
|
Closes #1905.
Removed stateChapters from ParserState.
Now we parse chapters as level 0 headers, and parts as level -1 headers.
After parsing, we check for the lowest header level, and if it's
less than 1 we bump everything up so that 1 is the lowest header level.
So `\part` will always produce a header; no command-line options
are needed.
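The bump-up step can be pictured as the following sketch over the
pandoc AST (a simplified reconstruction, not the reader's actual code):

    import Text.Pandoc.Definition
    import Text.Pandoc.Walk (query, walk)

    -- After parsing (\part = level -1, \chapter = level 0, ...), shift
    -- all header levels up so the smallest level in the document is 1.
    normalizeHeaderLevels :: Pandoc -> Pandoc
    normalizeHeaderLevels d
      | shift > 0 = walk bump d
      | otherwise = d
      where
        levels = query headerLevel d
        headerLevel (Header n _ _) = [n]
        headerLevel _              = []
        shift | null levels = 0
              | otherwise   = 1 - minimum levels
        bump (Header n attr ils) = Header (n + shift) attr ils
        bump b                   = b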
|