
The “in vitro” strategy for mitigating product debt

source link: https://uxdesign.cc/the-in-vitro-strategy-for-mitigating-product-debt-a38425c1b06e


Test tubes in a rack (photo by Bill Oxford on Unsplash)

In my reflections on working on Autodesk Shotgun, I noted that our biggest internal struggle was mitigating the product debt that had built up over a number of years, and that the team was divided between throwing everything out and starting over, and trying to fix it in incremental steps. Ultimately, we went with something different, something I called our “In Vitro Strategy”.

As described in Visualizing the systems behind our designs, our situation was that we had accumulated too much product debt:

Behaviour Over Time Graph for Product Debt (by , used with permission): as time goes on, the cost to build rises if product debt is not addressed, while customer value falls. We were squarely in the red area.

This meant it was disproportionately expensive to make any sort of change, which in turn made us much more cautious, making small changes slowly. From customers’ perspective, we weren’t delivering any significant new value month to month (hugely problematic for a SaaS product). Eventually enough internal and external pressure would build up that we had to do something, and fast. We did the only thing we could, which was to take on more technical debt, and so the cycle continued.

During this time, we tried to address the product debt head on. A full refactor was out of the question; there was too much code to go through, and not enough documentation or institutional knowledge left at that point to know how all of it was supposed to work. The front-end of the web product had never had any test coverage whatsoever, meaning things could (and did) break without warning, even when making changes to seemingly unrelated parts of the application; our mitigation was lengthy manual QA processes and a lot of risk aversion. Adding test coverage seemed like a solution that would let us make changes with greater confidence and so move forward faster. Estimates came back that maybe we could get this done in a couple of quarters: a big investment, but one that would hopefully pay for itself over the long term.
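For illustration only (this is not Shotgun’s actual code, and the helper is hypothetical), the kind of test being written here is often a “characterization” test: pin down what a legacy function currently does, so that later changes can be checked against the recorded behaviour.

```typescript
// Hypothetical legacy helper plus a characterization test sketch.
// The goal is not to bless this behaviour as correct, but to lock it
// in so refactors can proceed with confidence.

// Legacy helper as-is: formats a duration assuming 8-hour workdays.
export function formatDuration(hours: number): string {
  const days = Math.floor(hours / 8); // legacy assumption: 8h per day
  const rest = hours % 8;
  return days > 0 ? `${days}d ${rest}h` : `${rest}h`;
}
```

A test then simply asserts the current outputs (e.g. `formatDuration(10)` yields `"1d 2h"`), creating a safety net one small helper at a time — which is exactly why covering a large legacy codebase this way is so slow.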

Unfortunately, that proved to be rather optimistic. After 3 months of the engineering team working to add test coverage across the product, we’d managed just under 20%. Although that was better than the single-digit coverage we had before, 20% confidence is not very confident. We were expecting to be closer to 50% in that time. Even worse, the developers had tackled the arguably “easier” parts of the application first. They were also getting burned out from the monotony of writing tests. There was no chance we were going to make a difference within a year, let alone another quarter.

We tried creating completely new user experiences (for example, a new way for users to create and browse tasks) that solved a number of problems customers faced every day. The designs were innovative and early user testing yielded positive feedback, but development on these projects ultimately ground to a halt trying to fit them into the legacy application. It was too challenging trying to marry up the different mental models and data models of the new and the old.

Another possibility was a complete rewrite of the application from the ground up. The problem was that, for all the ways the product needed to improve, a lot of it was just fine. A rewrite would give us the opportunity to replace things that weren’t good, but it would also require us to recreate things that were working just fine, or at least well enough for the time being. Our permissions system, for example, was a problem area for our support team, engineers, designers and customers, but as desirable as replacing it was, doing so would be a necessarily long and complicated development that no one had the stomach for. We would have to commit to building a new product while simultaneously urging patience from customers who had already waited a long time.

This is a challenge that is familiar to a number of product lines. There are lots of opinions on how to solve it, from “never rewrite” to “never sunset”, and many others.

One writer has a fantastic piece on many of these approaches, but by far the standout line for me was his takeaway at the end:

Once you’ve learned enough that there’s a certain distance between the current version of your product and the best version of that product you can imagine, then the right approach is not to replace your software with a new version, but to build something new next to it without throwing away what you have.

In our case, the “something new” was not a new product, but rather a new “experience”. We had the foundation of a strategy, one called “in vitro”, after the experimentation process where something is grown in isolation:

  1. Identify an underserved customer need
  2. Formulate a solution as a new, self-contained experience
  3. Build up the new experience in a “test tube” environment
  4. Provide the means to connect to existing customer data
  5. Test with a sample of customers
  6. “Transplant” into the core product
  7. Ship it!

With this approach, our existing product would continue to function as it currently does (effectively we’d cease feature development inside the legacy product and just provide bug fixes), which would free us to focus entirely on addressing a single customer need, and only that need.

Flow diagram showing how an underserved need could be solved independently from our legacy product
Example flow for the creation of a new experience

Rather than trying to make incremental improvements to inferior experiences, we could decouple and create something more delightful, focused on user experience and a more holistic future vision of the product. Rather than reinventing the wheel and committing to rebuilding everything the product already did well (or adequately), we would focus on a smaller slice and replace redundant parts as needed. This would also give us greater latitude for user testing, as those tests could be better contained, outside the context of the larger, more complicated product. And it would allow us to take bigger risks, because any new experience we built could be safely killed off at any point before launching it to everyone.

Before we could properly evaluate whether this strategy would be effective, we had to build a new development environment that was self-contained. This development environment would have 100% test coverage as standard, take advantage of modern coding practices, and be able to connect to existing customer data in some way. This last part was crucial, as we wanted to make sure we could test our solutions against real-world data sets (and that ultimately customers could try the solutions themselves) before they were launched.

To avoid recreating existing business logic from the legacy application, our approach was to refactor the legacy app when needed, extracting logic into an independent library that would then be consumed by both the legacy application and the new experience. (As a byproduct, this would gradually encapsulate the most critical business logic and force us to increase the resilience of those parts.)
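The extraction pattern can be sketched as follows. The module, types and rules are hypothetical stand-ins, not Shotgun’s real permissions code; the point is that logic once buried in the legacy app moves into one small, fully tested library that both codebases import.

```typescript
// shared/permissions.ts — hypothetical business logic extracted from the
// legacy app. Both the legacy application and the new "in vitro"
// experience import this module, so the rules live (and are tested) in
// exactly one place.

export type Role = "admin" | "artist" | "viewer";

export interface User {
  id: string;
  role: Role;
}

// Pure, easily testable rule: only admins and artists may edit a task.
export function canEditTask(user: User): boolean {
  return user.role === "admin" || user.role === "artist";
}

// Everyone with access to the site can read tasks.
export function canViewTask(_user: User): boolean {
  return true;
}
```

Both the legacy app and the new experience would then `import { canEditTask } from "shared/permissions"`, so a fix in one place benefits both.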

The final piece of the puzzle would be the “transplantation” phase: taking the fully developed new experience and putting it inside the legacy app. Details of exactly what this would entail were deferred until the specific solution was clearer, but some options were to replace a particular page or set of pages, to embed the new experience within an iframe, or even to leave it as a separate self-contained experience and just introduce a link to it from the legacy app. Subsequently, we would remove outdated or redundant functionality from the legacy app, which would (in theory at least) be easier than completely replacing it.
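One possible shape of the iframe option, sketched with entirely hypothetical routes and parameter names: the legacy app points an iframe at the new experience, passing just enough context in the query string for it to load the right data.

```typescript
// Hypothetical sketch of the iframe "transplant": the legacy app embeds
// the new experience at a known route, handing over the context it needs.

export interface TransplantContext {
  siteUrl: string;    // e.g. "https://studio.example.com" (hypothetical)
  entityType: string; // which kind of legacy page is being replaced
  entityId: number;
}

// Build the URL the legacy app would point its iframe at.
export function embedUrl(ctx: TransplantContext): string {
  const params = new URLSearchParams({
    entity_type: ctx.entityType,
    entity_id: String(ctx.entityId),
  });
  return `${ctx.siteUrl}/new-experience?${params.toString()}`;
}
```

The legacy page then renders `<iframe src={embedUrl(...)}>`, which is what makes this option low-risk: the embedded experience can be swapped out or removed without touching the surrounding legacy code.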

Creating the development environment went fairly quickly; the team of 2 or 3 fantastic engineers (and a UX architect) who worked on it relished the idea of creating something new and modern, and took about 3 months to get something working. For our first project using the environment, we found it useful to defer building in favour of experimenting first in a hi-fi prototyping environment (we built experiments in Storybook using mock data), and only once the solution had proved robust enough did we commit to building it in the new development environment.
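The mock data that fed those Storybook experiments might look something like this (the shapes and names are hypothetical, not Shotgun’s real task model): stories render against deterministic fixtures instead of real customer data.

```typescript
// Hypothetical mock-data factory for driving hi-fi prototypes in
// Storybook before any backend work exists.

export interface Task {
  id: number;
  name: string;
  status: "wtg" | "ip" | "fin"; // waiting, in progress, final
  assignee: string | null;
}

// Generate a deterministic list of tasks, cycling through statuses
// and leaving every other task unassigned.
export function mockTasks(count: number): Task[] {
  const statuses: Task["status"][] = ["wtg", "ip", "fin"];
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: `Task ${i + 1}`,
    status: statuses[i % statuses.length],
    assignee: i % 2 === 0 ? `artist-${i}` : null,
  }));
}
```

A Storybook story for a hypothetical task-browsing component would then simply render it with `mockTasks(20)` as its data, letting designers iterate on edge cases (empty lists, unassigned tasks) without any server.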

And it was fast! Compared to what we’d been used to, where a single subpar experience would take around a quarter to create, we were now seeing engineers create fully test-covered solutions in less than a day, much as you’d expect from any modern software company. Our process improved in other ways too: designers could work more interactively with developers, providing feedback and suggestions and seeing them tried out in real time. It was refreshing, and something I’ll be trying again next time the situation warrants it.

