An Initial Sphinx-Doc Workflow


March 7th, 2017, by Matthew Setter

Do you use Sphinx-Doc and reStructuredText to manage your project's technical documentation? Do you find that it's a lot of work to ensure content validates and renders correctly? If so, this post walks through an initial workflow which seeks to make the process easier and more efficient.

As the documentation lead for ownCloud, I've spent a lot of time getting to know Sphinx-Doc and its accompanying file format, reStructuredText. Though, despite them being very capable, I'm not sold on them as a long-term workflow.

But they're what ownCloud has used for quite some time. And if 20 years of software development experience has taught me anything, migrating away from an incumbent technology (read: to Asciidoc) is easier said than done. For what it's worth, I recently trialled a migration to another toolset and found that it would take longer than I'd like to give to it.

Given that, before I push for a migration again, I decided to give Sphinx-Doc and reStructuredText the full benefit of the doubt, and see if I could make the most of them, and in so doing make the process much more efficient. Then, if I'm still convinced they need to be replaced with something else, I'll know that I came at this with an open mind, gave it a fair go, and made a fair assessment.

Identify the core frustrations

To that end, I assessed the issues which most frustrated me about the two technologies. I found that I didn't have a great issue with either Sphinx-Doc or reStructuredText, aside from reStructuredText's peculiar approach to section headers.
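For the unfamiliar: reStructuredText marks section headers by underlining the title with a row of punctuation characters. No character is tied to a fixed level; the first style the parser meets becomes level one, the next new style level two, and so on, and the underline must be at least as long as the title. A minimal sketch:

```rst
Chapter Title
=============

Section Title
-------------
```

It's consistency within a single document that matters, which is also what makes the convention easy to get wrong across a large documentation set.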

What I found was that my main issue is the effort required to preview and lint the reStructuredText source files. Here's the thing: other formats, such as Markdown or Asciidoc (Asciidoc particularly), are far easier to write, because you can write and preview almost at the same time.

This is largely because of tooling, such as the Asciidoc preview extensions for Google Chrome and Mozilla Firefox. Once you've installed the one for your browser of choice, you can write and then reload the browser to quickly check that the content renders correctly and that there are no markup errors.

Have a look at the two Firefox screenshots above. The one on the left shows the raw Asciidoc. The one on the right shows the rendered Asciidoc. Using this add-on, while working on my next book, I've found it relatively painless to know how the content will look, and if it renders correctly.

Extra bonus: if you install the live reload add-on in Firefox, you just have to make a change to the file. Firefox will then reload the page, re-rendering the updated content straight afterwards.

No such luck (at least that I've found) with reStructuredText and the Sphinx-Doc extensions. I did find a repository on GitHub, restructuredtext-lint by Todd Wolfson, that offers a reStructuredText linter. But it only minimally supports some of the Sphinx-Doc extensions.

As a result, it can only give you a partial assessment of whether the document is valid markup, and you still have to peruse the output to ensure that nothing's been missed. On top of that, I've not found any previewer either.

Given that, at least up until this point, I had to run the entire build process to ensure that the document rendered correctly. This can be quite a time-consuming process.

To clarify further, what I'd been doing was watching the output when building the PDF version of the documentation, either after I'd made a change or when I was assessing a PR. If there were no notable errors or warnings in the output, I'd have a quick peruse of the generated PDF document. If it looked OK, I'd assume everything was OK.

There are downsides to this approach. The first is that generating a PDF can be quite an expensive operation, given the requirements of the file format and the sheer size of the ownCloud documentation.

Secondly, if there was an error in the reStructuredText source, the build might fail part-way through, for reasons that weren't especially clear. I'd then have to work through the log output to figure out what had happened.

It was at this point that I thought that there had to be a better way. It was also at this point that I appreciated that I didn't need to build the PDF version of the output to review its quality.

To step back a bit, and to stop sounding like I'm knocking Sphinx-Doc and reStructuredText: they're not a bad set of tools.

Seriously!

They're actually a pretty good setup, one that does a hell of a lot, and a pretty good combination if you're writing technical documentation, which I am. When set up correctly, you can produce your documentation in a wide range of formats from the same source. This includes:

  • PDF
  • ePub
  • HTML
  • JSON
  • text
  • man pages
  • TeX
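Each of those formats corresponds to a Sphinx builder. With a standard Sphinx project you'd invoke them along these lines (the source and build paths here are illustrative, not ownCloud's):

```shell
sphinx-build -b html  source/ build/html   # HTML
sphinx-build -b epub  source/ build/epub   # ePub
sphinx-build -b json  source/ build/json   # JSON
sphinx-build -b text  source/ build/text   # plain text
sphinx-build -b man   source/ build/man    # man pages
sphinx-build -b latex source/ build/latex  # TeX sources, from which a PDF can be built
```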

But to me, the result hasn't always been worth the overhead.

There Had to Be a Better Way

On appreciating that there were formats other than PDF, specifically HTML, I realised that, while not perfect, there were quicker ways to do what I was doing. Here's what I did to improve my Sphinx-Doc/reStructuredText workflow.

1. Use Editor Add-ons

Firstly, I installed the appropriate reStructuredText plugins for my editor of choice (MacVim).

For what it's worth, Atom has excellent support for reStructuredText as well, in the form of two plugins.

With that, I now had a decent level of linting and formatting support built-in. You can see an example of that in the screenshot below.

You can see that it's highlighted the section headers and the admonition (as well as the two spelling errors). Now I can do some self-correction as I go along. After that, before I make a commit, I do a quick build of the HTML version of the documentation and preview the section of the docs that I've just changed.

2. Build the HTML, not PDF, Version of the Docs

Next up, I need to be able to regenerate the documentation whenever a change is made. I work on macOS, so I could install all of Sphinx-Doc's dependencies and build it there.

However, I prefer to work as closely as possible to the environment in which the code and docs are deployed. For that reason, if you have a look at the ownCloud docs, you'll see that a Vagrant/Ansible virtual machine setup is available. This simulates, as closely as possible, the deployed environment.

Within that virtual machine, to (re)build one of the manuals in HTML format, I was calling make html-all in that manual's root directory. If you're not familiar with it, make is a build automation tool originally developed by Stuart Feldman back in April 1976.

It’s a precursor to other build automation tools, such as Ant and Phing . It supports the ability to create what are referred to as targets, or named sets of build instructions, for performing jobs as part of a software build process. These can cover tasks such as compiling software , running unit tests , and so on.
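As a sketch, a Makefile target is just a name followed by the commands that build it; the target name and command below are illustrative, not ownCloud's:

```makefile
# Calling "make html" runs the indented recipe below.
# Note: recipe lines must be indented with a tab, not spaces.
html:
	sphinx-build -b html source build/html
```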

I believe Sphinx-Doc comes with a Makefile, but the one available with the ownCloud documentation is highly customised to the needs of the project.

Getting back to the build process: I'm sure it doesn't take too much imagination to see that SSH'ing in to the virtual machine, moving to the required manual's directory, and calling the make command would soon become tedious.

So, it's for that reason that I wrote the following shell script, one that can be called, on the local machine, from the root of the documentation, to make the process simpler.

#!/bin/bash

MINPARAMS=1

if [ $# -lt "$MINPARAMS" ]; then
  echo "This script needs at least $MINPARAMS command-line argument(s)"
  exit 1
fi

WHICH_MANUAL="$1"
AVAILABLE_MANUALS=( admin developer user )

# Echoes "y" if the last argument matches one of the preceding arguments.
function contains() {
    local n=$#
    local value=${!n}
    for ((i=1; i < n; i++)); do
        if [ "${!i}" == "${value}" ]; then
            echo "y"
            return 0
        fi
    done
    echo "n"
    return 1
}

if [ "$(contains "${AVAILABLE_MANUALS[@]}" "$WHICH_MANUAL")" == "y" ]; then
    echo "Rebuilding the $WHICH_MANUAL manual"
    echo
    vagrant ssh -c "cd /opt/documentation/${WHICH_MANUAL}_manual && make html-org"
    echo
    echo "Finished rebuilding the $WHICH_MANUAL manual"
else
    echo "Unknown manual '$WHICH_MANUAL'; expected one of: ${AVAILABLE_MANUALS[*]}"
    exit 1
fi

If you're not familiar with shell scripting, that's fine. The script regenerates the documentation for one of the three manuals, based on the manual name provided as its first argument.
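Assuming the script is saved as, say, build-docs.sh (a name I've made up for illustration) in the documentation root and made executable, usage looks like this:

```shell
chmod +x build-docs.sh

# Rebuild one manual at a time; the argument must be
# admin, developer, or user.
./build-docs.sh admin
```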

3. Preview the Generated HTML

After regenerating the documentation, assuming all went well, I now just have to preview it, to make sure there are no errors. To do that, the files need to be served by a web server. There's no content that needs any form of dynamic processing, so any web server will do.

When I began writing this article, I was using PHP's built-in web server to serve the documentation from the host machine, as that seemed the simplest thing to do. It was also the quickest solution.

If that's the approach that you'd like to take, then start the built-in web server by running the following command from the root directory of your Sphinx-Doc documentation:

php -S localhost:8000 -t ./ &
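As an aside, if PHP isn't installed, Python's standard library provides an equivalent one-liner; this sketch assumes Python 3.7 or later for the --directory flag:

```shell
# Serve the current directory over HTTP on port 8000, in the background,
# just like PHP's built-in server above.
python3 -m http.server 8000 --directory ./ &
```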

Now, the documentation can be viewed in any browser by connecting to http://localhost:8000. However, as I mentioned, ownCloud has three separate manuals: one for admins, one for developers, and one for users.

Given that, starting three separate instances on three separate ports, or constantly stopping and starting the one web server, would quickly become tedious, just like the manual effort of regenerating the documentation.

For that reason, I altered the Vagrant/Ansible virtual machine configuration to include the NGINX web server, and configured it to support three virtual hosts, one for each of the three manuals.
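As a sketch, one of those three server blocks might look like the following; the server name and paths are illustrative, not ownCloud's actual configuration:

```nginx
server {
    listen 80;
    server_name docs.admin.local;

    # Serve the manual's generated HTML directly; no dynamic processing needed.
    root /opt/documentation/admin_manual/_build/html;
    index index.html;
}
```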

After that, I added the server names to my local /etc/hosts file. With those changes made, I can now connect to them by hostname from the local machine. No more needing to start and stop web server instances as part of the documentation process.
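The /etc/hosts entries are just the virtual machine's address mapped to each server name; the IP address and hostnames below are made up for illustration:

```
192.168.33.10  docs.admin.local
192.168.33.10  docs.developer.local
192.168.33.10  docs.user.local
```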

Some Final Thoughts

If you're still with me at this point, it all might seem like quite an ordeal to use Sphinx-Doc. You might well be asking yourself why you'd bother. Well, let's summarise the process.

  • First: make sure your editor or IDE has reStructuredText add-ons installed and enabled
  • Second: generate an HTML copy of the docs using a shell script when a change is made
  • Third: preview the regenerated documentation, checking it for errors

When you look at it that way, it’s not that involved. Sure, there’s a bit of work to get it started. But once done, it’s rather a trivial process.

While I'm not sold on either Sphinx-Doc or reStructuredText, I have to give credit where credit's due. It is a good toolchain, one worth using, with a broad range of functionality available, specifically targeted at technical writing.

I just wish that it were easier to lint and review on the fly.

In Conclusion

And that's where I'm up to at this point. It's not perfect. But it's a good start nonetheless.

Thinking about it now, having reflected on what I've written, this is still a lot of work to maintain the docs. I'd very much like it to be simpler.

Given I'm not as experienced with Sphinx-Doc and reStructuredText as others, perhaps I'm missing a trick and there are ways to do all of this a lot quicker than I'm aware of. Perhaps not. But, effort is always required in anything that we do — if it's of any value.

On top of that, excellent docs are just as important as excellent code, so some effort and sacrifice are necessary to produce them. I'm hoping that the future changes I make to the VM will make my life easier. I'm not sure yet if they will, but time will tell.

If you're a technical writer who's using Sphinx-Doc and reStructuredText, I hope that this has made your daily life that much simpler. I'd love to hear your thoughts if you're in a similar position, and what you do to make your life easier.

CC Image Courtesy of Sacha Chua on Flickr







