Solving problems with metrics on a technical writing team
I was promoted from technical writer to technical writing manager at an agile software company, and within the first few weeks of my new role, the VP of Services came to me and asked for metrics.
“Metrics about what?” I asked.
“Anything I can share that shows the value your team adds.”
Great. Thanks for the specificity.
Prior to that conversation, our team followed a very simple process. Each writer would identify what needed to be worked on for any given week, work on it, and publish it. We would attend sprint reviews, meet with SMEs, write drafts, get feedback, and publish. It was all pretty standard. We shared our projects and progress in a weekly team meeting. As long as we were busy, and could tell our boss what we completed in a week, all was well.
But while our team had a reputation for delivering high quality content, it was also notoriously “out of date.”
Great content, but it’s out of date
The product we wrote for was marketed as a Platform as a Service (PaaS). For our modest team, that meant five people created content for a platform that boasted over 40 separate interconnected products. The company released updates and new features every two weeks. Each technical writer on the team was assigned a number of products. Writers monitored the development of their assigned products, attended sprint planning and reviews, and wrote or updated the content as needed. Despite all that diligence, feedback from internal teams was that no one knew who the technical writers were, and the content delivered with the products was out of date.
"No one knew who the technical writers were, and the content delivered with the products was out of date."
That negative feedback from internal teams was discouraging, though not a surprise. The writers complained of having too much to do and being unable to prioritize their work effectively.
We didn’t measure anything. Writers took responsibility for their own work, and apparently we inefficiently produced out-of-date content to support our products.
When it came time to provide “metrics,” I had nothing to measure but the negative feedback from internal teams.
Instead of just providing metrics, I set out to solve these problems we were having, and I’d decide what to measure along the way.
These were our problems:
Content was out of date - Due to the rapid release cycle and the ratio of writers to products, it was difficult to maintain all the content adequately.
Writers felt overwhelmed - Since everyone managed their own assignments and reported progress, it was difficult for managers to truly know the writers’ workloads, and writers couldn’t keep up.
Weak relationships with stakeholders - Because writers felt they had so much work to do, they sacrificed stakeholder meetings for time writing content that would just end up out of date anyway.
To solve these problems, the first thing I wanted to do was get a bird’s-eye view of all the content our team had to manage and get an idea of how much new content we needed to regularly produce. The solution was rather simple: I created a content inventory.
Take stock: The content inventory
Our content inventory was a shared Google spreadsheet that tracked useful information about the content our team managed. Everyone on the team had access to the content inventory, and each writer was responsible for maintaining their assigned content. The most important information tracked included the following:
Title
Product
Topic
Writer
Created date
Last updated date
At the very least, this shared spreadsheet gave the entire team a look at who owned what and when they last worked on it. More importantly, we could filter by writer, product, or date and gain insight into the body of work our team managed.
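To make the idea concrete, here’s a minimal Python sketch of the same kind of slicing we did in the spreadsheet. The rows, writer names, and products are invented for illustration; only the column names mirror the inventory.

```python
from datetime import date

# A few invented rows mirroring the inventory's columns; the writers
# and products here are hypothetical.
inventory = [
    {"title": "Getting Started", "product": "Product A", "topic": "Onboarding",
     "writer": "Ana", "created": date(2017, 1, 10), "last_updated": date(2018, 3, 2)},
    {"title": "API Reference", "product": "Product B", "topic": "Reference",
     "writer": "Ben", "created": date(2016, 6, 5), "last_updated": date(2017, 9, 14)},
]

# Filter by writer, just as we filtered the spreadsheet.
anas_content = [row for row in inventory if row["writer"] == "Ana"]

# Sort oldest-first to surface the stalest content.
for row in sorted(inventory, key=lambda r: r["last_updated"]):
    print(row["title"], row["product"], row["last_updated"])
```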
The content inventory gave us a holistic view of our team’s work, but it still didn’t provide much in terms of decision making. So we decided to prioritize everything.
Content priority: Not all products are created equal
As I mentioned above, each writer was assigned a large group of products they were responsible for managing. To the writers, it felt like everything had to be reviewed and updated every two weeks for each release cycle. For example, if each writer was in charge of eight products that released updates and new features every two weeks, the writer would have to connect with each product’s stakeholders, troubleshoot and test functionality within the products, and potentially review what could amount to hundreds of pages of content, including screenshots and other assets that might need to be swapped out. You get a sense of why content was out of date. Writers would have to choose what to sacrifice to keep up on other aspects of their work.
That method wasn’t sustainable.
As it turned out, not every product was created equal. I met with product managers to learn more about their products and their plans for them. From what I learned in these meetings, I created a system to prioritize products. I identified three factors that affected how our team defined a product’s priority.
Business need: Whether or not our company was actively promoting the product. Some products were marketed more heavily than others, which naturally increased their usage.
Development priority: We were able to pull reports on the number of feature-tagged tickets per product per release cycle. We took an average of these tickets over time to identify which products were the most actively worked on. After verifying this with development, it became a factor in how we prioritized our work.
Last updated date: If a piece of content had a last updated date that was unreasonably old, say six months to a year, it would be bumped up in priority.
For each product assigned to a writer, we would assess these prioritization factors and assign a priority to each piece of content within that product. If all three factors were true for a piece of content, it would be assigned the highest priority. If something wasn’t being marketed but we were putting a lot of development resources into it, then it might get a medium priority.
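A rough sketch of that mapping in Python might look like the following. Treating any two true factors as Medium is an assumption for illustration; only the all-three High case and one Medium example were spelled out above.

```python
def assign_priority(marketed: bool, high_dev_activity: bool, stale: bool) -> str:
    """Map the three prioritization factors to a priority label.

    The two-factor -> Medium rule is an assumption of this sketch,
    not our exact rule.
    """
    score = sum([marketed, high_dev_activity, stale])
    if score == 3:
        return "High"
    if score == 2:
        return "Medium"
    return "Low"

# Example: not marketed, but heavy development and stale content.
print(assign_priority(marketed=False, high_dev_activity=True, stale=True))  # Medium
```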
In the content inventory, we added a Priority column and marked all content with its appropriate priority. We also added a Next Audit Date column that, using a formula, automatically set a future audit date. The formula was something like:
If Priority is High, set Next Audit Date to X days after Last Modified Date.
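In code, the logic was roughly this. The day counts below stand in for the unspecified “X days” and are purely illustrative.

```python
from datetime import date, timedelta

# Illustrative intervals per priority; the real "X days" values
# aren't specified here.
AUDIT_INTERVAL_DAYS = {"High": 14, "Medium": 45, "Low": 90}

def next_audit_date(priority: str, last_updated: date) -> date:
    """Spreadsheet-formula equivalent: the next audit falls a fixed
    number of days after the last update, based on priority."""
    return last_updated + timedelta(days=AUDIT_INTERVAL_DAYS[priority])

print(next_audit_date("High", date(2018, 3, 2)))  # 2018-03-16
```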
The content inventory now had the following columns:
Title
Product
Topic
Writer
Created date
Last updated date
Priority
Next audit date
With this system in place, each writer could filter a view by their assigned content and future audit date. This gave writers a clear list of prioritized work to review each week. Writers still had to meet with stakeholders and attend sprint planning and review meetings, and they would identify work there and plan accordingly. But the content inventory provided focus on the upkeep. Writers didn’t have to spend energy doing all that work to discover what to review; the content inventory did it for them. And when they wrote new content, they simply added a line item with its relevant tracking information, so new content was logged along with updates.
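A sketch of that weekly view, assuming inventory rows extended with the writer and next-audit-date fields like the hypothetical ones above:

```python
from datetime import date, timedelta

def weekly_review_list(inventory, writer, today=None):
    """Return a writer's content whose next audit date falls on or
    before the end of the current week."""
    today = today or date.today()
    end_of_week = today + timedelta(days=(6 - today.weekday()))
    due = [row for row in inventory
           if row["writer"] == writer and row["next_audit"] <= end_of_week]
    # Soonest audit date first; the sort order is an assumption.
    return sorted(due, key=lambda r: r["next_audit"])
```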
Because of this very simple system, we benefited from the following:
The most important content was rarely out of date again
Writers had more time to focus on maintaining strong collaborative relationships with stakeholders
Writers felt more in control of their workload, and overall were very productive because they could track their work and see their progress
With those problems resolved, it was time to decide what to measure and report on. With a content inventory, we were logging everything that was important to us as a technical writing team. The spreadsheet allowed us to create simple dashboards that we could use to measure many things. We added more columns with information we cared about, like page counts and URLs. We could see who owned which products and what that equated to in total pages managed. We also added a feedback sheet where we logged the results of content reviews based on defined quality standards. As a manager, I was able to check in on each writer’s progress and help where needed, or reassign tasks to balance workloads.
Content Currency
Since our team had previously been known for producing out-of-date content, a metric that became very important to us was something we called Content Currency. It worked like this: if a piece of content’s Next Audit Date was in the past, the content was flagged as “not current” and would appear on a currency dashboard. Writers were expected to keep the currency metric for their assigned products above 85%. We started sharing this information internally so anyone could have a look and see the status of content for products they were involved in.
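A minimal sketch of the currency calculation, assuming rows carry a next-audit-date field as in the earlier sketches; the 85% target comes from our process, everything else here is illustrative.

```python
from datetime import date

def content_currency(rows, today=None):
    """Percentage of rows whose next audit date is not yet in the past."""
    today = today or date.today()
    if not rows:
        return 100.0
    current = sum(1 for row in rows if row["next_audit"] >= today)
    return 100.0 * current / len(rows)

# Writers were expected to keep this above 85% for their products.
def below_target(rows, threshold=85.0):
    return content_currency(rows) < threshold
```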
In the end, a simple content inventory combined with a prioritization system and a made-up metric called Content Currency helped our team of technical writers work more efficiently, build confidence, strengthen collaborative relationships, and share meaningful metrics with internal teams and leaders.