I was having a chat with Sarah O’Keefe of Scriptorium yesterday, and our conversation turned to one of our favorite topics: enterprise content and how DITA fits into it. All “enterprise scale” organizations have a series of (virtual) tubes that process and deliver their content. Sometimes these tubes are simple and short, and they just go between people’s desks, but we all know that heavily manual systems don’t scale.
So, when organizations want to scale their content production, they typically introduce automation and process. However, scaling doesn’t necessarily mean faster for content production, and in some cases, it may actually mean slower. This is because the goal isn’t time from hands on keys to words in eyeballs, it’s throughput. Faster is important in technical content, especially at the subprocess level, but it’s not everything.
For example, if you’re deploying administration and configuration content and your product is complex, your content strategy will reflect that. You’re definitely going to have reference materials for specific configurations or administration functions, but you’ll probably also want to provide guides and learning materials. And, if you’re providing learning materials, you’ll probably provide versions for in-person instructor-led training (ILT) and e-learning courses. Now from this single topic, you have (at least) four outputs:
- Documentation (reference)
- Guides and how-tos
- Classroom and ILT materials
- E-learning courses
This is a lot of throughput from a single source of content. In doing this, you might need to slow down a bit to ensure everything fits together.
But what if your goal is just to post content quickly?
This is where alternate technologies have sprung up in recent years, namely Markdown. Developers can whip out some Markdown for their API and publish it to an internal Wiki or even a docs site in minutes. This is great for certain reference materials; language isn’t super important, classification happens somewhat naturally due to the association with a software module (API endpoint, for example), and the room for confusing the reader is fairly limited.
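To make that concrete, here’s the kind of quick endpoint reference a developer might draft and push to a wiki in a few minutes. The endpoint, parameters, and responses are hypothetical, just to show the shape of this sort of lightweight Markdown reference:

```markdown
## GET /v1/widgets/{id}

Returns a single widget by ID.

| Parameter | Type   | Description       |
|-----------|--------|-------------------|
| id        | string | Widget identifier |

**Responses:** `200 OK` with a JSON widget object; `404 Not Found` if no widget matches.
```

Notice that the classification work is essentially free: the page lives next to the module it documents, so readers find it by association rather than by metadata.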
This, of course, won’t work for the more robust technical content I described earlier. That content must be reviewed for correctness, terminology, structure, and metadata. All of these attributes must be in line to support the expected output from the Enterprise Content Pipe. That’s an aspirational capitalization. The Enterprise Content Pipe isn’t a proper thing right now (or not that I’ve heard anyway), but I think it should be. We need a name for the thing that produces content with the attributes I just described.
The thing about having a well-defined Enterprise Content Pipe (or more than one) is that you can specify exactly what can be expected to come out the end of it. This is huge for developers and interdepartmental collaborators. If the Learning and Development group knows they can expect certain semantics and metadata from techpubs because the content is coming through the Enterprise Content Pipe, they can take that information and leverage it into new deliverables. Likewise, if another department wants to put some content into the pipe, they know the requirements and it can all flow together smoothly. And, perhaps even cooler, if teams want to reuse each other’s output or delivery systems, they simply need to conform to the rules of the pipe.
This is nothing truly new. People already provide content-as-a-service inside and outside their organizations, and that’s essentially what I’m talking about here. The difference is that an Enterprise Content Pipe is more of a unified vision, whereas most CaaS systems are independent.
Back to my original point. Quality is important for content coming out of the Enterprise Content Pipe. We work continuously to improve efficiency and reduce overhead, but in the end, quality is #1 and it takes time. This is Techpubs Speed.