A large number of LLMs

Large language models (LLMs) form the basis for many modern breakthroughs in AI. Much current experimentation involves prompting chat-like user interfaces such as ChatGPT or Bard. The core competing ecosystems (OpenAI's ChatGPT, Google's Bard, Meta's LLaMA and Amazon's Bedrock, among others) featured heavily in our discussions. More broadly, LLMs are tools that can solve a variety of problems, ranging from content generation (text, images and videos) to code generation, summarization and translation, to name a few. With natural language serving as a powerful abstraction layer, these models present a universally appealing tool set and are therefore being used by many information workers. Our discussions covered various facets of LLMs, including self-hosting, which allows customization and greater control than cloud-hosted LLMs. Given the growing complexity of LLMs, we also considered the ability to quantize them and run them on small form factors, especially on edge devices and in constrained environments. We touch upon ReAct prompting, which holds promise for improved performance, along with LLM-powered autonomous agents that can be used to build dynamic applications that go beyond question-and-answer interactions (a minimal sketch of the ReAct pattern appears at the end of this section). We also mention several vector databases (including Pinecone) that are seeing a resurgence thanks to LLMs. The underlying capabilities of LLMs, including specialized and self-hosted models, continue their explosive growth.

Remote delivery workarounds mature

Even though remote software development teams have leveraged technology to overcome geographic constraints for years, the pandemic's impact fueled innovation in this area, solidifying fully remote or hybrid work as an enduring trend. For this Radar, we discussed how remote software development practices and tools have matured and how teams keep pushing boundaries, focusing on effective collaboration in an environment that is more distributed and dynamic than ever. Some teams come up with innovative solutions using new collaborative tools; others adapt and improve existing in-person practices for activities such as real-time pair programming or mob programming, distributed workshops (e.g., remote Event Storming) and both asynchronous and synchronous communication. Although remote work offers numerous benefits (including a more diverse talent pool), the value of face-to-face interaction is clear: teams shouldn't let critical feedback loops lapse and need to be aware of the trade-offs they incur when transitioning to remote settings.
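For readers unfamiliar with the ReAct pattern mentioned in the LLM theme above, the following Python sketch shows the basic loop of interleaving model-generated thoughts, tool-invoking actions and observations. It is a minimal illustration under stated assumptions: fake_llm, the search tool, the prompt format and the "Action: tool[input]" convention are stand-ins invented for this example, not the API of any particular model provider or agent framework.

```python
# Minimal, illustrative ReAct loop. The model call is faked with canned
# responses so the sketch runs end to end without any external service.
import re

def fake_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; returns scripted replies.
    if "Observation:" not in prompt:
        return "Thought: I should look up the population.\nAction: search[population of France]"
    return "Thought: I now have enough information.\nAnswer: About 68 million people."

def search(query: str) -> str:
    # Toy tool; a real agent would call a search API or a database here.
    return "France has a population of roughly 68 million (2023 estimate)."

TOOLS = {"search": search}

def react(question: str, llm=fake_llm, max_steps: int = 5) -> str:
    # Interleave Thought/Action lines from the model with tool Observations.
    prompt = (
        "Answer the question by interleaving Thought, Action and Observation lines.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = llm(prompt)
        prompt += reply + "\n"
        if "Answer:" in reply:
            return reply.split("Answer:", 1)[1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*)\]", reply)
        if match:
            tool, arg = match.group(1), match.group(2)
            prompt += f"Observation: {TOOLS[tool](arg)}\n"
    return "No answer within the step budget."

if __name__ == "__main__":
    print(react("What is the population of France?"))
```

In a real agent, fake_llm would be replaced by a call to a hosted or self-hosted model and the tool registry would hold real integrations; the essential idea is only the loop of reasoning, acting and observing.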