Following our advice on how to convince your CFO to invest in modernising your systems, this article describes some techniques for re-engineering away from a legacy system towards a modular, test-driven, reactive architecture running on the JVM (the Java platform).
Measure once (ideally twice!)
Before anything else, you need to understand what you want from the re-platform, and should attempt to attach some measurable and meaningful value to it.
First, consider how legacy problems manifest themselves. Do features take an eternity to be developed? Or is the hold up with releases? Perhaps your system poses operational challenges in monitoring and support? Or maybe your Software Development Life Cycle process needs refinement?
This is really where a holistic view of the system can be useful. Using a technique like value stream mapping (either formally or in an ad hoc fashion), it should be possible to track the flow of the development process and analyse it for problems. From experience, it can be useful to look at:
- How many tools are used at each stage of a process
- How long the stage takes to complete
- Whether any stage blocks another from progressing
- What the entry and exit criteria are for a stage
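The checklist above can be captured in a simple model and queried for problem spots. The sketch below is illustrative only: the stage names, tool counts, and durations are invented for the example, not taken from a real project.

```scala
// Hypothetical model of a value stream map; all names and figures
// here are illustrative placeholders.
case class Stage(
  name: String,
  tools: Int,            // how many tools are used at this stage
  durationHours: Double, // how long the stage takes to complete
  blocksNext: Boolean    // does this stage block the next from progressing?
)

val pipeline = List(
  Stage("Development", tools = 3, durationHours = 16.0, blocksNext = false),
  Stage("Code review", tools = 1, durationHours = 8.0,  blocksNext = true),
  Stage("Manual QA",   tools = 2, durationHours = 40.0, blocksNext = true),
  Stage("Release",     tools = 4, durationHours = 4.0,  blocksNext = true)
)

// The slowest blocking stage is a strong candidate for closer analysis
val bottleneck = pipeline.filter(_.blocksNext).maxBy(_.durationHours)
```

Even an informal model like this makes the conversation concrete: in the invented figures above, manual QA dominates the cycle time and blocks downstream stages.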
Fears for tiers
Let’s assume that your analysis suggests that the problem lies with the technology stack that handles the business logic, storage, and processing. Note: this is typically considered to be the heavy-lifting service tier.
Proceeding by assumption again, let’s also suppose that this tier was written to run on the Java platform (the JVM), and that, having measured metrics on the code and reviewed the supporting automated test suite, you consider the stack reasonably complex and brittle. Or, at least, that the test suite garners only a low level of confidence in verifying the system’s resilience to change. This makes a good case for tackling the service tier directly, through either refactoring or replacement.
Buffet style architecture
Two useful patterns that could be employed to remedy this situation are creating service facades and an implementation of the Strangler pattern that I’ve come to label the read, write, executioner technique.
Assuming the underlying legacy implementation is spaghetti code, I consider these approaches as putting a buffer of sauce onto the spaghetti, versus migrating from the main course to a series of delicious (and healthier!) side dishes.
Service facades typically try to decouple the implementation of the service from the interface. As such, they often expose an API that is resilient to change and internally perform some level of attribute mapping to translate client request attributes to those parameters expected by the systems they act as a proxy to.
Once this barrier is in place, refactoring and replacing those legacy core logic services becomes much easier, especially if the facade is developed in a test-driven fashion. One challenge with this approach is that it can take considerable analysis and development time to determine all the clients and execution paths to the underlying service and redirect them through the facade.
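To make the facade idea concrete, here is a minimal sketch. The legacy service, its loosely-typed parameters, and the field names are all hypothetical, invented for illustration; the point is the attribute mapping between the stable client API and the brittle internals.

```scala
// Hypothetical legacy service with stringly-typed, positional parameters
// that we cannot yet change.
object LegacyCustomerService {
  def fetch(id: String, includeOrders: Int): Map[String, String] =
    Map(
      "custId"     -> id,
      "orderCount" -> (if (includeOrders == 1) "3" else "0")
    )
}

// The stable, strongly-typed API we expose to clients
final case class CustomerRequest(customerId: Long, withOrders: Boolean)
final case class Customer(id: Long, orderCount: Int)

object CustomerFacade {
  // Translate client request attributes into the shape the legacy system
  // expects, then map its raw response back into our client-facing model
  def find(req: CustomerRequest): Customer = {
    val raw = LegacyCustomerService.fetch(
      id = req.customerId.toString,
      includeOrders = if (req.withOrders) 1 else 0
    )
    Customer(raw("custId").toLong, raw("orderCount").toInt)
  }
}
```

Because clients only ever see `CustomerRequest` and `Customer`, the legacy call behind `find` can later be refactored or replaced without any client-visible change.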
An alternative approach is to build alongside the legacy application, with a new application that reaches out to the borders of the old one. For instance, the new application talks to the database or external APIs, but otherwise acts as a replacement for the legacy application.
A common use case for this is when the legacy application needs to be modified or extended in some way. To reduce the surface area of change, one route is to move all read access to the underlying data over to the new service. This has the added benefit that it can enable rewriting the persistence store (if necessary), as the data can be extracted, transformed and loaded into a read-only replica data store.
Once the read paths have been diverted, it is then a case of migrating the write pathways to the new service. Finally, revoking the legacy application’s access to the database schema and smoke testing should allow that piece of software to be deprecated and removed.
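The phased migration can be sketched as a router that diverts one pathway at a time. This is a toy illustration under stated assumptions: the `OrderStore` interface, in-memory stores, and flag names are invented here, standing in for the real legacy and replacement tiers.

```scala
import scala.collection.mutable

// A hypothetical interface shared by the legacy and replacement services
trait OrderStore {
  def read(id: Long): Option[String]
  def write(id: Long, payload: String): Unit
}

// Toy in-memory store standing in for a real persistence tier
class InMemoryStore extends OrderStore {
  private val data = mutable.Map.empty[Long, String]
  def read(id: Long): Option[String] = data.get(id)
  def write(id: Long, payload: String): Unit = data(id) = payload
}

// Strangles the legacy system one pathway at a time:
// phase 1 diverts reads, phase 2 diverts writes.
class StranglerRouter(
  legacy: OrderStore,
  modern: OrderStore,
  readsMigrated: Boolean,
  writesMigrated: Boolean
) extends OrderStore {
  def read(id: Long): Option[String] =
    if (readsMigrated) modern.read(id) else legacy.read(id)
  def write(id: Long, payload: String): Unit =
    if (writesMigrated) modern.write(id, payload) else legacy.write(id, payload)
}

val legacyStore = new InMemoryStore
val modernStore = new InMemoryStore
// Data previously extracted, transformed and loaded into the new read store
modernStore.write(1L, "replicated order")

// Phase 1: reads go to the new service, writes still hit legacy
val router = new StranglerRouter(legacyStore, modernStore,
  readsMigrated = true, writesMigrated = false)
```

Flipping `writesMigrated` to `true` completes phase 2, after which the legacy store receives no traffic and can be retired.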
Ingredients for success
From experience in previous re-platforming projects, I know it can be extremely tempting to swing the technology pendulum towards emergent technologies. That is fine, with the caveat that any emerging technology should provide tangible and enumerable benefits, and should have been tested and endorsed by enterprise users of your size. Given this backdrop, deployment targets such as Docker containers and unikernel applications are becoming increasingly popular as build artefacts, and there is a growing catalogue of associated support tools for this ecosystem.
In terms of language choice, if developing on the JVM, Scala is becoming an increasingly popular choice. Some of the factors that are driving developers towards Scala include:
- The ability for Scala to capitalise on existing JVM investment - both in terms of access to Java-based libraries, infrastructure and tooling.
- The additional verification and up-front safety support in the Scala language due to its rich type system. Specifically, the static type system and path-dependent types eradicate a whole class of errors and anomalies that can occur in languages such as Java. The type system also enables a rich ‘follow the types’ style of programming guidance, for a more efficient and guided developer experience.
- First-class support for functional programming allows Scala programs to be composed of smaller, reusable, pieces that aim to be less influenced by the context and scheduling of their execution.
- The support and promotion of a more consistent object model (as opposed to Java) also helps to remove a number of common programming errors and issues.
- The ecosystem of libraries around Scala and the functional domain that are designed with reactive goals in mind (typically manifested in the ability of Scala-based applications to be elastic in nature, scaling horizontally up or down based on client demand).
- Implicit types, type enrichment, and view and context bounds allow for easier (and often transparent) support of mapping from client interfaces to legacy structures.
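That last point is worth a small illustration. The sketch below uses Scala 2's implicit class syntax for type enrichment; the legacy structure, field names, and conversion are hypothetical, invented to show how a legacy shape can be mapped to a client-facing one without modifying the legacy code.

```scala
// Hypothetical legacy structure: terse field names, balance held in pence
final case class LegacyAccount(acctNo: String, balPence: Long)

// The shape our new client interface wants to work with
final case class Account(number: String, balancePounds: BigDecimal)

object LegacyEnrichment {
  // Enrich the legacy type with a conversion method, without touching
  // its definition; callers just import LegacyEnrichment._
  implicit class RichLegacyAccount(private val l: LegacyAccount) extends AnyVal {
    def toAccount: Account =
      Account(l.acctNo, BigDecimal(l.balPence) / 100)
  }
}
```

With the enrichment in scope, `LegacyAccount("A-1", 1050L).toAccount` reads as if the legacy type always had the mapping, keeping translation logic in one place at the boundary.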
As ever, the devil is in the detail when trying to evolve a platform architecture while having to support a legacy system, and this is where experience and collaborative working usually come to the fore.
However deep a legacy problem you find yourself in, there are always ways to architect a system out of a corner, and Scala presents a reasoned case as an escape tool of choice. To learn more about Scala, visit inviqa.com/scala
About the author
Kingsley Davies is the Head of our Scala practice and has been developing software on the JVM since 2000. He has brought his Scala design and delivery skills to the likes of Betfair, Visa Europe, Barclays, and Net-a-Porter, and is a member of the organising committee for Scala eXchange.