
Techniques

10. Accessibility-aware component test design
Assess

Web component testing is one of the many places in the software delivery process where accessibility requirements can be considered early on. Testing framework plugins like chai-a11y-axe provide assertions in their API to check for the basics. But in addition to using what testing frameworks have to offer, accessibility-aware component test design further helps to provide all the semantic elements needed by screen readers and other assistive technologies.

First, instead of using test ids or classes to find and select the elements you want to validate, identify elements by ARIA roles or other semantic attributes used by assistive technologies. Some testing libraries, like Testing Library, even recommend this in their documentation. Second, don’t test only for click interactions; also consider users who cannot use a mouse or see the screen, and add tests for keyboard and other interactions.

11. AI-aided test-first development
Assess

Like many in the software industry, we’ve been exploring the rapidly evolving AI tools that can support us in writing code. We see many people feed ChatGPT an implementation and then ask it to generate tests for that implementation. However, because we’re big believers in TDD, and we don’t always want to feed an external model our potentially sensitive implementation code, one of our experiments in this space is a technique we call AI-aided test-first development. In this approach, we get ChatGPT to generate tests for us, and a developer then implements the functionality. Specifically, we first describe the tech stack and the design patterns we’re using in a prompt “fragment” that is reusable across multiple use cases. Then we describe the specific feature we want to implement, including the acceptance criteria.
Based on all that, we ask ChatGPT to generate an implementation plan for that feature in our architectural style and tech stack. Once we sanity-check that implementation plan, we ask it to generate tests for our acceptance criteria.

This approach has worked surprisingly well for us: it required the team to come up with a concise description of their architectural style, and it helped junior developers and new team members code features aligned with the team’s existing style. The main drawback of the approach is that even though we don’t give the model our source code, we still feed it potentially sensitive information such as our tech stack and feature descriptions. Teams should work with their legal advisors to avoid any intellectual property issues, at least until a “for business” version of these AI tools becomes available.

12. Domain-specific LLMs
Assess

We’ve featured large language models (LLMs) like BERT and ERNIE in the Radar before; domain-specific LLMs, however, are an emerging trend. Fine-tuning general-purpose LLMs with domain-specific data can tailor them for various tasks, including information retrieval, customer support augmentation and content creation. This practice has shown promising results in industries like legal and finance, as demonstrated by OpenNyAI for legal document analysis. With more organizations experimenting with LLMs and new models like GPT-4 being released, we can expect more domain-specific use cases in the near future.

© Thoughtworks, Inc. All Rights Reserved.

Immersive Experience — Vol 28 | Thoughtworks Technology Radar