
Languages and Frameworks

102. Kotlin Kover
Assess
Kotlin Kover is a code coverage toolset designed specifically for Kotlin, supporting Kotlin JVM, Multiplatform and Android projects. The significance of code coverage lies in its ability to spotlight untested segments, which reinforces software reliability. As Kover evolves, it stands out for its ability to produce comprehensive HTML and XML reports, coupled with unmatched precision tailored to Kotlin. For teams deeply rooted in Kotlin, we advise you to assess Kover to leverage its potential in enhancing code quality.

103. LangChain
Assess
LangChain is a framework for building applications with large language models (LLMs). To build practical LLM products, you need to combine them with user- or domain-specific data that wasn’t part of the training. LangChain fills this niche with features like prompt management, chaining, agents and document loaders. The benefit of components like prompt templates and document loaders is that they can speed up your time to market (a brief sketch follows this section). Although it’s a popular choice for implementing Retrieval-Augmented Generation applications and the ReAct prompting pattern, LangChain has been criticized for being hard to use and overcomplicated. When choosing a tech stack for your LLM application, you may want to keep looking for similar frameworks — like Semantic Kernel — in this fast-evolving space.

104. LlamaIndex
Assess
LlamaIndex is a data framework designed to facilitate the integration of private or domain-specific data with large language models (LLMs). It offers tools for ingesting data from diverse sources — including APIs, databases and PDFs — and structures this data into a format that LLMs can easily consume. Through various types of “engines,” LlamaIndex enables natural language interactions with this structured data, making it accessible for applications ranging from query-based retrieval to conversational interfaces (see the sketch after this section). Similar to LangChain, LlamaIndex’s goal is to accelerate development with LLMs, but it takes more of a data framework approach.

105. promptfoo
Assess
promptfoo enables test-driven prompt engineering. When integrating LLMs into applications, tuning prompts to produce optimal, consistent outputs can be time-consuming. You can use promptfoo both as a CLI and a library to systematically test prompts against predefined test cases. Test cases, along with assertions, can be set up in a simple YAML config file (an example sketch follows this section). This config includes the prompts being tested, the model provider, the assertions and the variable values that will be substituted into the prompts. promptfoo supports many assertions, including checking for equality, JSON structure, similarity, custom functions or even using an LLM to grade the model outputs. If you’re looking to automate feedback on prompt and model quality, do assess promptfoo.
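To make the shape of LangChain’s components concrete, here is a minimal sketch of a prompt template chained to a chat model, assuming the LangChain Python package and an OpenAI API key; the prompt text, inputs and model settings are illustrative, not a recommended setup.

```python
# Minimal LangChain sketch: prompt management plus chaining.
# Assumes `pip install langchain openai` and OPENAI_API_KEY in the environment;
# prompt text, inputs and model settings are illustrative only.
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Prompt management: a reusable template with named variables.
prompt = PromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

# Chaining: bind the template to an LLM so it can be invoked as one unit.
chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)

print(chain.run(
    context="Refunds are accepted within 30 days of purchase.",
    question="How long do customers have to request a refund?",
))
```

Document loaders and agents compose in the same style; note that newer LangChain releases favour the LCEL “runnable” syntax over LLMChain, so check the current documentation before settling on an API.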
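As an illustration of LlamaIndex’s ingest-and-query flow, here is a minimal sketch assuming the llama-index Python package, an OpenAI API key and a local data/ directory of documents; releases from 0.10 onwards move these imports under llama_index.core.

```python
# Minimal LlamaIndex sketch: ingest local files, index them, query in natural language.
# Assumes `pip install llama-index`, OPENAI_API_KEY in the environment and a ./data
# folder of documents; the folder name and question are illustrative only.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Ingest: load PDFs, text files and other documents from a directory.
documents = SimpleDirectoryReader("data").load_data()

# Structure: build a vector index the LLM can retrieve from.
index = VectorStoreIndex.from_documents(documents)

# Engine: ask natural-language questions over the indexed data.
query_engine = index.as_query_engine()
print(query_engine.query("What does the contract say about termination notice?"))
```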
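The promptfoo config file described above might look roughly like the following; the prompt, provider id and assertion values are illustrative rather than a definitive setup.

```yaml
# Illustrative promptfoo config (conventionally promptfooconfig.yaml).
prompts:
  - "Summarise the following support ticket in one sentence: {{ticket}}"

providers:
  - openai:gpt-3.5-turbo

tests:
  - vars:
      ticket: "My invoice shows a different total than my order confirmation."
    assert:
      # Simple string check on the model output.
      - type: icontains
        value: invoice
      # Use an LLM to grade the output against a rubric.
      - type: llm-rubric
        value: The summary is a single sentence describing a billing discrepancy.
```

Running `promptfoo eval` against such a file evaluates every prompt and test case combination and reports which assertions pass.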
