
What’s In for ‘24 – Real Time Analytics, GenAI (Of Course), and the Revival of Microservices


New year, new tech. Here are some insights into the new black, the resurgence of unlikely services, and a look at GenAI from a unique angle. Welcome 2024!

The revival of microservices to address challenges of high latency and low throughput

Leading off with our first trend, the resurgence of microservices. True, they never really went away, but they are now back in full force for a different reason – enabling high performance – as organizations continue to decompose monolithic applications into smaller services and reduce risk. While monolithic architecture usually offers excellent throughput and consistency, these configurations suffer from highly interdependent components and limited scalability. In addition, adopting new technologies is a challenge in monolithic systems, since the entire stack may need to be rewritten.

Especially in cloud computing, microservices can improve throughput, agility, fault isolation and scalability, and open easier paths to innovation, continuing the paradigm shift in software architecture of deconstructing systems so that they can be developed and deployed more easily and quickly. However, as Martin Fowler has pointed out, going to any extreme is risky. Breaking down a monolith into too many tiny services throughout the ecosystem can backfire by introducing complexity and data consistency challenges, which ironically can add latency. When a system handles thousands of API requests per minute from one service to another, while relaying the same data to multiple microservices at the same time for real time analytics, the result is a highly complex architecture with a huge load on the system that degrades performance.

When SLAs can no longer be met, it’s time to find the balance between too few and too many microservices, enabling the highest performance and agility while delivering data services in the form of APIs that meet business objectives. This balancing act is likely to continue for the foreseeable future.
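As a back-of-the-envelope illustration of the latency point above, consider a toy model in which each service hop adds a fixed network and serialization cost while the total business logic stays constant. The numbers here are invented for illustration, not measurements:

```python
# Toy model of per-request latency: each service hop adds network and
# serialization overhead, so a call chained through many tiny services
# accumulates latency that a more consolidated data service avoids.
HOP_OVERHEAD_MS = 5.0   # assumed cost per hop (illustrative, not measured)
WORK_MS = 20.0          # assumed total business-logic time, held constant

def chained_latency(num_hops: int) -> float:
    """Latency when the same work is split across sequential service hops."""
    return num_hops * HOP_OVERHEAD_MS + WORK_MS

for n in (1, 5, 20, 50):
    print(f"{n:>2} hops -> {chained_latency(n):6.1f} ms per request")
# The business logic is constant, yet 50 hops is over 10x the 1-hop latency.
```

The work done is identical in every case; only the number of hops changes, and that overhead alone is the kind of drag that pushes SLAs out of reach.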

For a full perspective on these trends, listen to the webinar recording of Data Trends That Will Shape Your Business Strategy In 2024.

Empowering employees and streamlining processes through data democratization and data sharing  

For our second trend, empowering the users in the organization via data democratization, we’re linking the microservices we’ve just discussed with the trend towards increased data sharing. Data democratization gives everyone, at every level of the organization, access to the datasets required to make informed decisions with minimal friction. The adoption of microservices and data services breaks down data silos and enables data democratization; without them, these initiatives would drown in complexity. Users cannot consume data that is spread across multiple underlying systems unless it is broken down into smaller components.

While it is relatively easy to democratize data on a per-project basis, providing data solutions that cross the organization is a much greater challenge. A major pain point in implementing data democratization is how to break down data silos from multiple cloud and on-premises systems, to enable the consumption of data in a non-linear fashion (not necessarily application to data store). This democratization includes staff who may not have the technical skills to retrieve data with code; they need simpler methods of accessing data. Add security, role segregation, data quality and privacy into the mix, and the challenge only grows.

To enable data democratization at an organizational scale, it is beneficial to integrate tools such as data catalogs, and metadata and service repositories, that help users access the required information. Understanding and automating metadata is critical in this process.
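As a minimal sketch of what such a catalog offers, consider a toy version in which users discover datasets by business terms rather than by knowing the underlying systems. The dataset names, systems, owners and tags below are invented for illustration:

```python
# Toy data catalog: maps business-friendly terms to dataset metadata
# so users can discover data without knowing the underlying systems.
catalog = [
    {"name": "customer_orders", "system": "warehouse", "owner": "sales-ops",
     "tags": ["orders", "revenue", "customer"]},
    {"name": "support_tickets", "system": "crm", "owner": "support",
     "tags": ["customer", "tickets"]},
]

def search_catalog(term: str) -> list[dict]:
    """Find datasets whose name or tags mention the given business term."""
    term = term.lower()
    return [d for d in catalog if term in d["name"] or term in d["tags"]]

for dataset in search_catalog("customer"):
    print(f'{dataset["name"]} ({dataset["system"]}) -- owner: {dataset["owner"]}')
```

The point of the design is that the user searches by a business term (“customer”) and gets back where the data lives and who owns it, without writing a line of SQL against any source system.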

Real time is the new black

Real time is subjective – for one organization it could mean milliseconds, for another, minutes – but almost all organizations now demand the timely delivery of data for real time analytics and reporting. This demand is in sync with our increasingly fast-paced world; batch processes and overnight reports just don’t cut it anymore. However, data lakes and many orchestration tools were not designed to handle real time demands, and cannot meet the requirements of real time operations.

Organizations need to be responsive to events, and these responses need to be based on current data, not stale figures, with real time analytics for up-to-the-minute decisioning. To keep this pace, organizations have adapted the way they architect their systems. One example is the manufacturing industry, which still relies on many legacy systems yet requires fast responses to events at its edge sites or shops, and must make decisions based on multiple sources of data, from legacy systems to newer types such as IoT. Many of these organizations have chosen to use an operational data hub to unite disparate sources and create a holistic view of their operational data in an ultra-quick time frame, enabling real-time decisioning.
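The core pattern of such a hub can be sketched in a few lines: events from disparate sources update a single merged view that queries read from directly. This is a minimal sketch with invented source and field names; a real hub adds persistence, change data capture and streaming, but the shape of the idea is the same:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class OperationalDataHub:
    """Toy hub: merges events from disparate sources into one live view."""
    view: dict[str, dict[str, Any]] = field(default_factory=dict)

    def ingest(self, source: str, entity_id: str, data: dict[str, Any]) -> None:
        # Each source contributes fields to the same entity record,
        # so queries see one holistic, current picture.
        record = self.view.setdefault(entity_id, {})
        record.update({f"{source}.{k}": v for k, v in data.items()})

    def current_state(self, entity_id: str) -> dict[str, Any]:
        # Reads hit the merged view directly -- no overnight batch.
        return self.view.get(entity_id, {})

hub = OperationalDataHub()
hub.ingest("erp", "machine-42", {"status": "running"})
hub.ingest("iot", "machine-42", {"temperature_c": 78.4})
print(hub.current_state("machine-42"))
# {'erp.status': 'running', 'iot.temperature_c': 78.4}
```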

Crossing the GenAI chasm

Many enterprises would like to implement AI within their organizations but are not sure how to get there. The value proposition of GenAI is enormous – we can convert almost any material, including pictures and audio, into language, and then use this data across the board. Just ask a question in natural language and get the response in kind. This trend also relates to data democratization as discussed above, offering a very accessible way to extend data sharing efforts throughout the enterprise.

While some organizations have used large, generic, often expensive LLMs, they are not always sure how to make these models relevant across the enterprise. Some businesses have created customized LLMs, using their internal data to build smaller, specialized models that are continuously retrained for specific use cases. However, this approach can be limiting – since enterprise data changes continuously, keeping local LLMs up to date can be very resource intensive and expensive. Alternative approaches point to Retrieval Augmented Generation (RAG), which lays the groundwork for making the meaning of enterprise data significant to LLMs without having to retrain them, allowing the models to generate more accurate responses based on the relevant context of enterprise data.

Not all is smooth sailing – ethical considerations abound as more organizations onboard AI, using diverse sources of data both to train their models and in the data products they create. As a step in this direction, the EU has reached a provisional agreement on the so-called Artificial Intelligence Act. More legislation is likely to follow as the capabilities of AI increase and new scenarios and considerations arise.

At GigaSpaces, based on our vast experience with real time data and performance, we’re looking into how organizations can leverage their own data and make it meaningful for GenAI models, to ensure that LLMs can generate trustworthy responses based on relevant sources.
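To make the RAG pattern concrete, here is a minimal, self-contained sketch. The character-frequency `embed` function is a deliberately crude stand-in for a real embedding model, and the documents are invented; a production system would use a proper embedding model, a vector store, and an LLM call on the final prompt:

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical stand-in for a real embedding model."""
    # Fake embedding via letter frequencies, just to keep the sketch
    # self-contained; real systems call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Enterprise documents are indexed once -- no model retraining involved.
documents = [
    "Refund requests are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise accounts.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k indexed documents most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question: str) -> str:
    """Augment the user question with retrieved enterprise context."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```

The key property is visible even in this toy version: enterprise knowledge is indexed, not trained into the model, so keeping the LLM current becomes a re-index rather than a retraining run.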

Going Beyond Entity 360

The shift towards personalizing the user experience and improving engagement has been present for some time, but onboarding real time data to achieve this will become more evident this year. According to IDC research, only 12% of enterprises share customer data across lines of business and use it to improve the customer journey, indicating that most organizations still need to take action to reach true 360 visibility. And it’s not just Customer 360 – it’s also Claims 360 and Loyalty 360; Customer Data Platforms (CDPs) now cover all verticals, creating ‘Entity 360’. Many businesses have teamed up with other companies to extend their loyalty programs, both to enhance customer retention and to open new revenue streams – for example, Delta and Starbucks, or Star Alliance, which unites a number of airlines.

These joint programs mean that IT teams need to unite a number of data sources and provide accurate statuses to their customers based on data from all systems – on the fly. The customer becomes the owner of their data and expects accurate information on mobile and on any other channel. User expectations are a continuous upward spiral – as the experience improves, expectations rise. Expect ongoing demands in this space.
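As a minimal sketch of that on-the-fly consolidation, imagine a customer view assembled at request time from several partner systems. The partner systems, field names and figures here are all invented for illustration:

```python
# Toy Entity 360 lookup: assemble one customer view at request time
# from several partner systems, rather than from a stale batch copy.
airline_miles = {"cust-1": 52_000}   # stand-in for a partner airline system
coffee_points = {"cust-1": 310}      # stand-in for a partner retail system
hotel_nights = {"cust-1": 12}        # stand-in for a partner hotel system

def entity_360(customer_id: str) -> dict:
    """Merge partner loyalty data into one up-to-date customer view."""
    return {
        "customer": customer_id,
        "airline_miles": airline_miles.get(customer_id, 0),
        "coffee_points": coffee_points.get(customer_id, 0),
        "hotel_nights": hotel_nights.get(customer_id, 0),
    }

print(entity_360("cust-1"))
```

Assembling the view at request time, rather than copying partner data into yet another silo overnight, is what keeps the status accurate on every channel.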

Last Words

We’ve discussed a number of directions that promise to shape the data technology sphere. Many are interrelated, so the combined influence of GenAI, real time analytics, data democratization and Entity 360 will have a greater impact than each trend on its own. We’re looking forward to many more influential and prominent advances in data technology this year.

