
Spring 2024 roadmap for Semantic Kernel

source link: https://devblogs.microsoft.com/semantic-kernel/spring-2024-roadmap-for-semantic-kernel/


Matthew Bolanos

February 12th, 2024

Now that it’s February, we wanted to share what we have planned for Semantic Kernel from now until Microsoft Build. Most of our immediate investments fall into one of three buckets: V1.0 parity across all our languages, additional connectors, and last but not least, agents. If you want a deep dive into our plans, watch our Spring 2024 roadmap video! Otherwise, read the quick summary below.

V1.0 Parity across Python and Java

With the V1.0 release of our .NET library, we committed to not introducing any more breaking changes to non-experimental features. This has given customers additional confidence to build production AI applications on top of Semantic Kernel. By March of this year, we plan on releasing either Beta or Release Candidate versions of both our Python and Java libraries. By Microsoft Build, we will finish parity and launch V1.0 for Python and Java.

As part of V1.0, Python and Java will get many of the improvements that came to the .NET version and made it much easier and more powerful to use. This includes automatic function calling, events, YAML prompt files, and Handlebars templates. With the YAML prompt files, you’ll be able to create prompt and agent assets in Python and then share them with .NET and Java developers.
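
To make this concrete, here is a minimal sketch of what such a shared YAML prompt asset might look like. The function name, variables, and settings are illustrative, not taken from the post, and follow the general shape of Semantic Kernel's YAML prompt format with a Handlebars template:

```yaml
# GenerateStory.yaml - an illustrative prompt asset that could be
# authored once and loaded by the Python, .NET, or Java SDKs.
name: GenerateStory
description: Generates a short story about a given topic.
template_format: handlebars
template: |
  Tell a story about {{topic}} that is {{length}} sentences long.
input_variables:
  - name: topic
    description: The topic of the story.
    is_required: true
  - name: length
    description: The number of sentences in the story.
    is_required: true
execution_settings:
  default:
    temperature: 0.6
```

Because the asset is plain YAML rather than code, the same file can be checked into a repository and consumed by any of the three language SDKs.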

If you’re interested in learning more, check out our full backlog on GitHub for Python and Java.

More connectors!

Since Semantic Kernel was first introduced, many new models have been released. We plan on working with the community to introduce connectors for the most popular models and their deployment types. These include Gemini, Llama 2, Phi-2, Mistral, and Claude deployed on Hugging Face, Azure AI, Google AI, Bedrock, and locally.

We’ve also gotten great feedback on our experimental memory connectors. Over the next few months, we’ll be updating the abstractions for our connectors so that they are less opinionated. This will make them easier to use and allow us to support even more scenarios with them.

Lastly, we know that multi-modal experiences are the next frontier for AI applications. We’ll make it easier to support these experiences by providing additional connectors to models that support audio, images, video, documents, and more!

First-class agent support

Finally, we want to ensure that Semantic Kernel customers are able to develop autonomous agents that can complete tasks on behalf of users. We already have an experimental implementation that uses the OpenAI Assistants API (check out John Maeda’s SK basics samples), but as part of our final push, we want to fully abstract our agent interface to support agents built with any model.

To achieve this, we’re leveraging the research provided by the AutoGen team to create an abstraction that can support any number of experiences, including those where agents work together as a team.

Feedback is always welcome!

As an open-source project, everything we do (including planning) is done out in the open. We do this so that you as a community can give us feedback every step of the way. If you have recommendations for any of the features we have planned for Spring 2024 (or even recommendations for things that aren’t on our radar), let us know by filing an issue on GitHub, starting a discussion on GitHub, or starting a conversation on Discord.

