Microsoft’s Semantic Kernel, an SDK for AI programming, will provide an abstraction layer for OpenAI’s just-announced Assistants API, according to Principal Program Manager Matthew Bolaños in a post yesterday – but some developers are finding Semantic Kernel confusing and difficult to use.
The open source SDK, which can be used with OpenAI and other platforms such as Hugging Face, can orchestrate and combine AI models and plugins. It supports C#, Python and Java, with TypeScript coming soon. According to Microsoft, Semantic Kernel is “at the center of the copilot stack” and is aimed at enterprise developers integrating AI into existing apps – likely to be a strategic area as organizations try to figure out how the explosion of AI resources might be put to real-world use. Semantic Kernel is open source on GitHub.
OpenAI introduced its Assistants API at its DevDay event yesterday, along with a preview of a GPT-4 Turbo model with a 128K context window, enough for prompts of hundreds of pages of text. The new model also offers reproducible outputs via a “seed” parameter, which makes the model return a consistent response “most of the time,” according to the team.
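As a rough illustration of how the “seed” parameter slots into a request, the sketch below builds a Chat Completions payload by hand; the field names follow the API as announced at DevDay, and the seed value and message text are arbitrary examples.

```python
# Hedged sketch of a Chat Completions request payload using the new
# "seed" parameter announced at DevDay. The payload can be sent with the
# official openai Python client or plain HTTPS; values here are examples.
payload = {
    "model": "gpt-4-1106-preview",  # the GPT-4 Turbo preview with 128K context
    "seed": 42,           # same seed + same inputs -> consistent output "most of the time"
    "temperature": 0,     # reproducibility also depends on low sampling temperature
    "messages": [
        {"role": "user", "content": "Summarize this document in one sentence."},
    ],
}
```

Responses to seeded requests also include a `system_fingerprint` field, which OpenAI suggests logging so that backend changes affecting determinism can be detected.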
The Assistants API, as its name suggests, is an API in beta for developing assistants, where an assistant is a “purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks.” Examples mentioned are coding assistants, vacation planners, and a smart visual canvas.
The Assistants API itself supports infinitely long threads, so the developer no longer needs to manage thread state but can simply add messages to an existing thread. It also includes a Python code interpreter that can generate graphs and charts, the ability to invoke custom functions, and support for additional knowledge beyond the OpenAI models’ training data, such as company data or product information. Samples using the API with Python, Node.js or Curl (REST API) are documented here.
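The “no thread state to manage” pattern can be sketched as the pair of REST calls a turn of conversation reduces to: append a message to the existing thread, then start a run. The endpoint paths below follow the beta REST shape as announced (the API is in beta and may change), and the thread and assistant IDs are hypothetical placeholders.

```python
# Hedged sketch of one conversational turn against the Assistants API.
# Paths follow the announced beta REST shape; IDs are hypothetical.
BETA_HEADER = {"OpenAI-Beta": "assistants=v1"}  # required while the API is in beta

def build_turn(thread_id: str, user_message: str, assistant_id: str):
    """Return the (method, path, body) calls for one turn. The developer
    never reconstructs conversation history: the thread holds it."""
    return [
        ("POST", f"/v1/threads/{thread_id}/messages",
         {"role": "user", "content": user_message}),
        ("POST", f"/v1/threads/{thread_id}/runs",
         {"assistant_id": assistant_id}),
    ]

calls = build_turn("thread_abc123", "Plot last quarter’s sales.", "asst_xyz")
```

The thread itself is created once (`POST /v1/threads`); every subsequent turn is just the two calls above, with the API handling truncation as the thread grows.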
According to Bolaños, the Assistants API was the missing piece in Microsoft’s plans for Semantic Kernel, which will, he claims, make it easy to extend OpenAI assistants and allow flexibility over choice of models, the ability to create complex multi-step plans, simplified function calling, and improved visibility and monitoring of token usage, important for tracking costs.
That sounds good, but as Bolaños acknowledged earlier this month, developers have run into challenges with Semantic Kernel, or simply not used it, preferring to code directly against OpenAI or other APIs. Semantic Kernel has become “increasingly more complex,” he said, despite still being in preview, which has “caused confusion for new and existing users alike.”
A C# developer echoed these concerns in a discussion on GitHub. He built sample applications three ways: in Python, in C# with Semantic Kernel, and in C# with minimal library support. He was able to complete the first and last, but said “I was not able to finish my demo applications with Semantic Kernel as I was struggling as fighting with the API and the interfaces all the time.”
A key concern is that Semantic Kernel, with its multi-language support, does not feel familiar to a .NET developer, even with the C# SDK.
Developers trying the Assistants API should also note early comments from developers about costs, since large-scale testing could result in bill shock. “The cost for my simple assistant was roughly $0.90 (GPT4-1106) while only asking less than 10 questions,” said one developer, and another: “I waited over 5 minutes for an answer … looked at usage: $1.26!”
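Those bills are plausible once retrieval stuffs large documents into the context. The back-of-envelope estimator below uses the per-1K-token GPT-4 Turbo prices announced at DevDay ($0.01 input, $0.03 output); the 8,000-input/300-output token figures per question are illustrative assumptions, not measured values.

```python
# Rough cost estimator for GPT-4 Turbo (gpt-4-1106-preview), using the
# per-1K-token prices announced at DevDay. Token counts are assumptions.
INPUT_PRICE_PER_1K = 0.01   # dollars per 1K prompt tokens
OUTPUT_PRICE_PER_1K = 0.03  # dollars per 1K completion tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single model call at GPT-4 Turbo launch pricing."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# 10 questions, each sending ~8,000 tokens of retrieved context and
# getting ~300 tokens back, lands close to the $0.90 figure quoted above.
total = 10 * estimate_cost(8_000, 300)  # ≈ $0.89
```

Since the Assistants API manages thread truncation itself, the developer has limited direct control over how many input tokens each run consumes, which is one reason Bolaños’s promised token-usage visibility in Semantic Kernel matters for tracking costs.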