
Working with data in motion at clothing brand Boden

By Gary Flood

July 27, 2021


Founded in 1991 by former stockbroker Johnnie Boden, Boden is a British clothing retailer selling primarily online, by mail order and via catalogue in several countries, with websites for the United Kingdom, the United States, Germany, and Australia. As a retailer with more than 1.5 million customers operating in a competitive and constantly shifting environment, the company regards the ability to respond rapidly to events while delivering consistently high levels of customer experience (CX) as critical.

A number of years back, Boden, which has around 800 employees, realized the legacy IT it had carried over from its original catalogue-driven sales approach was struggling to keep up with the company's growth. Alex Ives, Head of Business and Enterprise Architecture at the company, looks after the data team, which stretches horizontally across three internal areas: customer, product and price, and stock and order. He also has responsibility for digital and browsers. Ives says:

We had a self-built e-commerce platform - indeed everything was built by us. But the batch, legacy monoliths we relied on for years after boden.com was launched in 1999 were no longer sufficient to enable the real-time responsiveness needed in retail today. Ultimately, the systems in place for catalogue-driven sales were struggling to keep up with our growth and new omnichannel, digital-first approach.

‘A completely new IT architecture'

Identified as crucial for any proposed new set of core business systems was the modernization of essential processes, such as order management, along with facilitating the final shift from catalogue to online sales. Leadership also wanted to move away from running reports overnight, and instead see what was happening on the web instantly.

Most Boden apps have been replaced now, with Adobe as a CMS, Bloomreach for merchandising and Riversand driving product information. But at the heart of the new way of working is a completely new IT architecture based on microservices and event streaming. Four years or so ago, this was identified, Ives told us, as needing to be based on Apache Kafka. Used to handle real-time data feeds, Kafka is one of the most successful open source projects globally and over 70% of the Fortune 500 are estimated to be using it currently. As part of Boden's digital transformation, Ives and his team have found particular value in something called ‘data in motion' from an outfit called Confluent.

Kafka was originally developed by Confluent's founders when they worked at LinkedIn a decade ago, and the supplier now offers software to help organizations manage the platform, especially for ingesting real-time data from multiple sources (apps, databases, SaaS layers and cloud providers) and streaming it constantly across the organization.
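To make the idea concrete, here is a minimal sketch of what feeding one such real-time event into Kafka looks like in Java with the standard client library; the topic name, key and payload are hypothetical illustrations rather than anything Boden has published:

```java
// Minimal sketch: publish one real-time web event to Kafka.
// Topic and field names here are hypothetical, not Boden's actual schema.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class PageViewProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by customer ID so all events for one customer land on the same
            // partition, preserving per-customer ordering for downstream sessionization.
            String customerId = "cust-42";
            String event = "{\"type\":\"page_view\",\"url\":\"/dresses\",\"ts\":"
                           + System.currentTimeMillis() + "}";
            producer.send(new ProducerRecord<>("web-activity", customerId, event));
        }
    }
}
```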

Ives explains how this can be used to help customers: 

We use the streaming technology to effectively create a session out of the activity that says, if you have an action that occurs within 30 minutes of your previous action, that is all part of one session - one visit to our site, effectively. This then allows us to really understand our customer behaviour, optimize the experience for them, and give them back a more relevant product and an easier journey through the Boden site.
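Kafka's stream-processing library, Kafka Streams, models this pattern directly as session windows with an inactivity gap. Here is a minimal sketch in Java of the 30-minute sessionization Ives describes; the topic names are hypothetical:

```java
// Minimal Kafka Streams sketch of 30-minute-inactivity-gap sessionization.
// Topic names ("web-activity", "customer-sessions") are hypothetical.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.SessionWindows;
import java.time.Duration;
import java.util.Properties;

public class Sessionizer {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Events are keyed by customer ID; any two events less than 30 minutes
        // apart are merged into the same session window.
        builder.stream("web-activity", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(30)))
               .count()  // events per session; real jobs could aggregate richer session state
               .toStream((windowedKey, count) ->
                         windowedKey.key() + "@" + windowedKey.window().start())
               .to("customer-sessions", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sessionizer");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```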

A continuous flow of data

As for what's the next step for Apache Kafka and this new digital way of working at Boden, Ives says:

The key role of this software is to support our enterprise architecture technology direction - to help us really leverage data in the future and drive a truly data-driven organization. So where we want to get to ultimately is that the main microservices that represent the key data constructs of the business - like the product domain and imagery domain, stock, price, that sort of thing - will be responsible for producing the business events that define every single change that occurs within our data domain.

They will then get evented out in real time into Kafka, where it then becomes democratized, so any use case in the business can consume that single view of that domain data, and multiple people will also be able to consume it all at the same time.
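That democratization falls out of Kafka's consumer-group model: each use case reads the same topic independently, with its own offsets, so consumers never contend with one another. A minimal sketch, with hypothetical topic and group names:

```java
// Sketch of "democratized" consumption: two independent use cases read the
// same domain-event topic in full, each via its own consumer group.
// Topic and group names are hypothetical.
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class DomainEventReader {
    static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId); // each group tracks its own offsets, so each gets the full stream
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> c = new KafkaConsumer<>(props);
        c.subscribe(List.of("stock-events"));
        return c;
    }

    public static void main(String[] args) {
        // Both consume the same events concurrently: one drives the website's
        // stock display, the other feeds warehouse replenishment.
        var website = consumerFor("website-stock-view");
        var replenishment = consumerFor("warehouse-replenishment");
        website.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println("web: " + r.value()));
        replenishment.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println("ops: " + r.value()));
    }
}
```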

If the team can get there, that will be quite the contrast to how Boden was working previously. As Ives notes, retail inventory management was traditionally done overnight, by batch; now there will be more of a real-time view of inventory that spans stores and distribution centres, with full visibility into the status of every order.

Even better, feedback loops of appropriate recommendations to consumers are also a very powerful use of such speeded-up data: Boden can now track the real-time status of inventory and suggest alternative products if something is out of stock, as well as provide an accurate and constantly updated delivery time to the customer.

A continuous flow of data is also starting to help the company streamline back-end operations, so that rather than restocking based on possibly out-of-date overnight data, a real-time flow can drive a dynamic warehouse operation to minimise the risk of over- or under-stocking.

Ives also believes that by using the software he has chosen to run on top of Kafka, he has a way of getting all his data connected quickly and easily via a fully managed service. As a result, he can now benefit from modernized infrastructure, real-time analytics, enhanced CX, improved efficiency and cost reduction, and even increased clickthrough rates and revenue.

A data-rich future

As to where all this goes next, Ives explains: 

What we will do is push everything that runs through any Kafka topic as a business event directly into our cloud data warehouse. Then we push it all into a single-column table that effectively becomes our semi-structured data lake - we have one table per topic. And what we're doing here is collecting all of that event data to create this really rich transaction log of everything that has ever occurred, and as use cases for AI and learning develop and improve, that is a strategic position that will allow us to really utilise that technology in the future.

I think one of the frustrating things has always been when you find a use case that you want to analyse and dig into for good reasons, typical systems, what they do is they aggregate the data to the level at which they need to serve it for the performance of the application and then update it, and the history has gone. But by having all of this event-driven microservice layer, all of the history is captured in the events and persisted forever, so this really opens up the flexibility and the future potential of our data infrastructure.
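To illustrate the 'one table per topic' pattern Ives describes, here is a minimal Java sketch that appends each event's raw payload to a single-column warehouse table. The topic, table and JDBC URL are hypothetical placeholders, and in practice a managed sink connector would more likely do this job:

```java
// Minimal sketch of loading a Kafka topic into a single-column warehouse
// table, one table per topic. Topic name, table name and the JDBC URL are
// hypothetical; a managed sink connector would normally handle this.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class TopicToWarehouse {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "warehouse-loader");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection db = DriverManager.getConnection("jdbc:warehouse://...")) {
            consumer.subscribe(List.of("product-events"));
            // The event payload lands unparsed in one column, forming an
            // append-only log of every change that has ever occurred.
            PreparedStatement insert =
                db.prepareStatement("INSERT INTO product_events (raw_event) VALUES (?)");
            while (true) {
                for (ConsumerRecord<String, String> rec :
                         consumer.poll(Duration.ofSeconds(1))) {
                    insert.setString(1, rec.value());
                    insert.executeUpdate();
                }
            }
        }
    }
}
```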

