
Techniques

However, there are challenges and pitfalls to consider. First, LLMs can be confidently wrong, so it's essential to build mechanisms into your process to ensure the accuracy of results. Second, third-party LLMs may retain and re-share your data, posing a risk to proprietary and confidential information. Organizations should carefully review the terms of use and trustworthiness of providers or consider training and running LLMs on infrastructure they control. As with any new technology, businesses must tread carefully, understanding the implications and risks associated with LLM adoption.

13. Intelligent guided accessibility tests
Assess

It can be daunting to make a web application compliant with assistive technologies when you don't use them yourself and don't yet know much about directives like the Web Content Accessibility Guidelines (WCAG). Intelligent guided accessibility tests are a category of tools that help you test whether you've done the right thing without needing to be an accessibility expert. These tools are browser extensions that scan your website, summarize how assistive technology would interpret it and then ask you a set of questions to confirm whether the structure and elements you created are as intended. We've used axe DevTools, Accessibility Insights for Web or the ARC Toolkit on some of our projects. (A minimal sketch of the automated-scan portion of such a check appears at the end of this section.)

14. Logseq as team knowledge base
Assess

Team knowledge management is a familiar concept, with teams using tools such as wikis to store information and onboard new team members. Some of our teams now prefer to use Logseq as a team knowledge base. An open-source knowledge-management system, Logseq is powered by a graph database, helps users organize thoughts, notes and ideas and can be adapted for team use with Git-based storage. Logseq allows teams to build a democratic and accessible knowledge base, providing each member with a personalized learning journey and facilitating efficient onboarding. However, as with any knowledge-management tool, teams will need to apply good curation and management of their knowledge base to avoid information overload or disorganization. While similar functionality is available in tools like Obsidian, the key difference lies in Logseq's focus on consumption, with paragraph-based linking enabling team members to quickly find the relevant context without having to read an entire article.

15. Prompt engineering
Assess

Prompt engineering refers to the process of designing and refining prompts for generative AI models to obtain high-quality responses from the model. This involves carefully crafting prompts that are specific, clear and relevant to the desired task or application in order to elicit useful outputs from the model. Prompt engineering aims to enhance large language model (LLM) capabilities in tasks like question answering and arithmetic reasoning or in domain-specific contexts. For software creation, you might use prompt engineering to get an LLM to write a story, an API or a test suite based on a brief conversation with a stakeholder or some notes. Developing effective prompting techniques is becoming a valuable skill in working with AI systems. There is debate over whether prompt engineering is an art or a science, and potential security risks, such as "prompt injection attacks," should be considered.
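To make the prompt engineering idea concrete, here is a minimal TypeScript sketch of how a prompt for the test-suite scenario above might be structured: it pins down a role, context, task, explicit constraints and one worked example so the model's output is easier to verify. The names (buildPrompt, callLlm) and the discount-rule scenario are illustrative assumptions, not part of the Radar text or any particular LLM client's API.

    // Sketch of a structured prompt; callLlm is a hypothetical stand-in
    // for whatever LLM client your team actually uses.
    interface PromptParts {
      role: string;          // who the model should act as
      context: string;       // notes from the stakeholder conversation
      task: string;          // what to produce
      constraints: string[]; // explicit, checkable requirements
      example: string;       // one worked example to anchor the output format
    }

    function buildPrompt(p: PromptParts): string {
      return [
        `You are ${p.role}.`,
        `Context:\n${p.context}`,
        `Task: ${p.task}`,
        `Requirements:\n${p.constraints.map((c) => `- ${c}`).join('\n')}`,
        `Example of the expected output:\n${p.example}`,
      ].join('\n\n');
    }

    const prompt = buildPrompt({
      role: 'a senior TypeScript developer writing Jest tests',
      context: 'Checkout applies a 10% discount for orders over 100 EUR.',
      task: 'Write a Jest test suite for the discount rule.',
      constraints: [
        'Cover the boundary at exactly 100 EUR',
        'Use describe/it blocks and no external libraries',
        'Return only a single code block, no explanation',
      ],
      example: "it('applies no discount below the threshold', () => { ... });",
    });

    // const answer = await callLlm(prompt); // hypothetical client call
    console.log(prompt);

Iterating on the constraints and the worked example, and checking the model's answer against them, is where most of the refinement effort tends to go.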
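The intelligent guided accessibility tools mentioned above are browser extensions, but the automated-scan step they start from can also run in a test suite. Below is a minimal sketch, assuming a Playwright project with the @axe-core/playwright package (axe-core is the engine behind axe DevTools); the URL is a placeholder. The "incomplete" results are the checks the engine cannot decide on its own, which is exactly the kind of question a guided tool would ask a human to answer.

    import { test, expect } from '@playwright/test';
    import AxeBuilder from '@axe-core/playwright';

    test('page has no detectable WCAG A/AA violations', async ({ page }) => {
      await page.goto('https://example.com'); // placeholder URL

      // Run an automated axe-core scan limited to WCAG 2 A and AA rules.
      const results = await new AxeBuilder({ page })
        .withTags(['wcag2a', 'wcag2aa'])
        .analyze();

      // Fail the test on anything axe can determine automatically.
      expect(results.violations).toEqual([]);

      // Checks axe could not decide automatically still need human review,
      // which is where guided tools take over.
      console.log(`${results.incomplete.length} checks need manual review`);
    });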
