
wide misuse. For example, tools that facilitate sharing code between Lambdas or orchestrate complex interactions might solve a common simple problem but then risk recreating terrible architecture antipatterns with new building blocks. If you need a tool to manage code sharing and independent deployment across a collection of serverless functions, then perhaps it’s time to rethink the suitability of the approach. Like all technology solutions, serverless has suitable applications, but many of its features involve trade-offs that become more acute as the solution evolves.

Engineering rigor meets analytics and AI

We’ve long viewed “building in quality” as a vital aspect of developing reliable analytics and machine learning models. Test-driven transformations, data sanity tests and data model testing strengthen the data pipelines that power analytical systems. Model validation and quality assurance are crucial in tackling biases and ensuring ethical ML systems with equitable outcomes. By integrating these practices, businesses become better positioned to leverage AI and machine learning and to forge responsible, data-driven solutions that cater to a diverse user base. The corresponding tooling ecosystem has continued to grow and mature. For example, Soda Core, a data quality tool, allows data to be validated as it arrives in the system and automates monitoring checks for anomalies. Deepchecks brings continuous integration and model validation together, an important step in incorporating good engineering practices in analytics settings. Giskard enables quality assurance for AI models, allowing designers to detect bias and other negative facets of models, which aligns with our encouragement to tread ethical waters carefully when developing solutions with AI. We view these maturing tools as further evidence of the mainstreaming of analytics and machine learning and its integration with good engineering practices.

To declare or program?
A seemingly perpetual discussion that happens at every Radar gathering gained particular prominence this time: for a given task, should you write a declarative specification using JSON, YAML or something domain-specific like HCL, or should you write code in a general-purpose programming language? For example, we discussed the differences between Terraform Cloud Operator and Crossplane, whether or not to use the AWS CDK, and using Dagger to program a deployment pipeline, among other cases. Declarative specifications, while often easier to read and write, offer limited abstractions, which leads to repetitive code. Proper programming languages can use abstractions to avoid duplication, but these abstractions can make the code considerably harder to follow, especially when the abstractions are layered after years of changes. In our experience, there’s no universal answer to the question posed above. Teams should consider both approaches, and when a solution proves difficult to implement cleanly in one language type, they should reevaluate the other type. It can even make sense to split concerns and implement them with different languages.

© Thoughtworks, Inc. All Rights Reserved.
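The trade-off above can be illustrated with a minimal sketch. The queue names and settings below are invented for illustration and stand in for any infrastructure resource: the declarative form spells out each item in full and repeats shared settings, while the programmatic form factors out the duplication behind a small abstraction.

```python
import json

# Declarative spec (JSON here, for a stdlib-only example): easy to scan,
# but the shared retention setting is repeated for every queue.
declarative_spec = """
{
  "queues": [
    {"name": "orders-dev", "retention_days": 7},
    {"name": "orders-staging", "retention_days": 7},
    {"name": "orders-prod", "retention_days": 14}
  ]
}
"""

# Programmatic equivalent: a helper removes the duplication, at the cost
# of one extra level of indirection for the reader.
def queue(env, retention_days=7):
    return {"name": f"orders-{env}", "retention_days": retention_days}

programmatic_spec = {
    "queues": [queue("dev"), queue("staging"), queue("prod", retention_days=14)]
}

# Both routes produce the same configuration.
assert json.loads(declarative_spec) == programmatic_spec
```

With three queues the declarative version is arguably clearer; as the resource count and the number of shared settings grow, the balance tips toward the programmatic version, which is the dynamic the text describes.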

Immersive Experience — Vol 28 | Thoughtworks Technology Radar - Page 7