A Business Analyst’s response to AI when capturing Requirements

Business Analysis Conference Europe 2023

During the panel discussion at the BA Conference Europe 2023 on Artificial Intelligence (AI) and Generative AI (GenAI) Ethics and Cybersecurity, the audience was keen to know how AI and GenAI might impact the way they work. AI alters the landscape for any user of digital technology, so it follows that Business Analysts (BAs) must respond to this challenge (and opportunity) by adapting some of the methods they use when capturing requirements for systems. We will face a new vernacular of AI-related terminology to translate for our clients, and we must help them stay secure and resilient in an increasingly complex and dynamic landscape, enabling them to embrace the opportunity alongside the challenge.

AI Literacy for Functional and Non-Functional Requirements

AI refers to the ability of machines to imitate and perform tasks that have historically required human intelligence. It’s a branch of computer science that focuses on creating intelligent systems capable of learning, reasoning, and making decisions. AI enables machines to analyse and interpret data, recognise patterns, solve problems, and even interact with humans.

BAs, together with other professionals at every level, need to build their AI literacy so they can understand the capabilities of AI and how these can affect requirements. As BAs, we capture functional requirements to help our clients understand what their systems or operations should do, and non-functional requirements to describe how they should do it. Functional requirements open multiple opportunities for the use of AI, appropriate to the use case of the system or operational process under review. As BAs, we should be aware of key terminology and concepts within AI and their implications, and we should maintain that awareness as this new technology evolves rapidly.

Maintaining Ethical AI

New adaptations in functional requirements could include advising clients to review the ethical implications of AI, ensuring they are alert to new ways in which information can accumulate and breach guidelines such as data protection legislation. If using AI algorithms as the basis for decision-making, they should also be prepared to identify and mitigate potential bias.

At my company, AtkinsRéalis, our BAs work closely with our special AI Strategy Group (AISG) which is embedded within our Data Intelligence workstream. Our AISG has defined a repeatable methodology for assessing AI ethical risk, a four-phase approach of discovery, design, validation and refinement, and recommendation, that ensures transparency, visibility and public empowerment in AI research and development (R&D) processes. This approach encourages the use of user feedback and research findings to shape the development of selected models. We blend research, industry best practice, and practical knowledge to strengthen models and mitigate perceived risks.

Our BAs also support the development of a Data and AI Ethics Framework, a forward-thinking solution that keeps track of ethical initiatives and informs strategy and policy development, as well as day-to-day operations and AI solution development. This innovative framework consists of an Ethical Landscape Assessment, an Ethical Solution Delivery Assessment and an Ethical Heatmap. The Ethical Landscape Assessment identifies a unique baseline level of ethical salience for an organisation, capturing its ethical priorities and setting a clear ethical benchmark.

Evolving Regulation and Best Practice

BAs also need to stay abreast of evolving regulatory requirements, industry standards and guidelines related to AI, through research, training and participation in relevant professional events. Regulation is likely to remain somewhat reactive to the fast pace of change in AI development. For example, although AI-generated images and videos of celebrities, known as ‘deepfakes’, had been hitting the news headlines for several years, the first US federal law addressing them was not put in place until 2019. Since then, regulation has progressed in Europe and the US, and the EU AI Act may become the template for the first comprehensive global legislation, addressing AI considerations from CV-screening tools to social scoring mechanisms.

Aside from ethically targeted regulation, cybersecurity is likely to be another fast-developing area of focus. BAs should ensure their non-functional requirements preserve cybersecurity and counter AI-enabled cyberattacks, covering intrusion detection and response, the deployment of AI-enhanced firewalls, the integration of AI-powered threat intelligence feeds, and the organisation’s incident response plan for AI-related security incidents. AI can also offer cybersecurity benefits, however: it can enhance an organisation’s security posture through tasks such as anomaly detection, predictive analytics and automated incident response.

Advocating for AI

Despite these cautions, BAs may find themselves taking on a role as AI advocates, highlighting where systems or operations can now run rather than walk thanks to the additional computational power AI offers. We are adept at considering functional and non-functional requirements with the ‘art of the possible’ in mind, and that possibility horizon is now wider than ever. BAs’ ability to bridge the gap between business operations professionals and technical experts will be key to harnessing AI’s strengths while guarding against its risks. As BAs, we should take the opportunity to skill up and lead our clients to their best outcomes.

User and employee training

AI, and in particular GenAI, are emerging technologies, so it is necessary to raise awareness amongst users and employees about the implications of AI and GenAI for cybersecurity and the role they play in protecting sensitive data and systems. Through education and training, they can avoid falling into the traps of AI-generated forged documentation, synthetic audio and voice cloning, and deepfake images and videos. When capturing requirements, BAs should ensure user and employee AI and GenAI cybersecurity training is on the list – it is key to overcoming the challenges and maximising the benefits of AI to the organisation.

~~~~~~

About the Author:

Dr Kitty Hung, Principal Consultant at AtkinsRéalis, is a Fellow of the BCS – the Chartered Institute of IT, and a member of IIBA. With over 24 years’ experience, Kitty is a proficient business and systems analyst across fields including policing, defence, and emerging technologies. She holds the BCS International Diploma in Business Analysis. Kitty excels at analysing complex problems and translating them into effective requirements and solutions. She identifies customers’ pressing technological pain points, especially in cyber security, data analytics, and GDPR compliance. Kitty provides valued advice on improving efficiency, costs, performance and competitiveness. She is a passionate mentor and community volunteer.