AI Engineer Melisa Bardhi joined host John Gilroy of the Federal Tech Podcast to share how Excella builds artificial intelligence (AI) and generative AI (GenAI) applications responsibly for federal clients. Melisa and John discussed the transformative potential of GenAI tools, their unique capabilities, and techniques for implementing them responsibly.
Addressing Bias, Fairness, and Data Privacy
GenAI has the potential to be revolutionary for the federal sector, provided applications and services are built using robust safety standards and rigorously evaluated. Melisa highlighted why responsible development and deployment of GenAI tools are crucial:
“These models are trained on massive datasets, and as a result they can reflect existing societal biases. If those biases aren’t addressed early on, at the input source as the models are trained, they can be amplified in the output the models produce. These models also often learn from datasets scraped from the web, which can contain personal information. Another component of that is ensuring data security early on to prevent the misuse of personal information.”
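To illustrate the kind of input-side safeguard Melisa describes, the sketch below shows one simple way to scrub obvious personal information from text before it is used for training or retrieval. The patterns and the scrub_pii helper are hypothetical examples, not Excella’s tooling; a production pipeline would rely on vetted PII-detection services and human review rather than a few regexes.

```python
import re

# Hypothetical patterns for common PII; illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace likely PII with placeholder tokens before the text
    is used for model training or retrieval."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.gov or 555-123-4567."
    print(scrub_pii(sample))
    # Contact Jane at [EMAIL REMOVED] or [PHONE REMOVED].
```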
Building In Explainability and Accountability
Melisa explained that the underlying technology in applications that leverage Large Language Models (LLMs) to produce AI-generated content often operates as a black box: because of its complexity, it can be difficult to understand how it produced a particular output. Given this complexity, transparency and explainability are essential to building trust in GenAI tools. Achieving them requires a combination of technical approaches and organizational practices. Technical approaches include using interpretability techniques, maintaining detailed logs, and publishing model cards. Organizational practices, such as establishing governance policies and ensuring clear ownership, can also help bolster accountability.
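To make that combination concrete, here is a minimal sketch, assuming a simple ModelCard dataclass and a log_interaction helper (both illustrative, not a standard API), of how structured audit logging and a lightweight model card might be paired to support explainability:

```python
import json
import logging
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("genai-audit")

@dataclass
class ModelCard:
    """Lightweight model card capturing facts reviewers need to see."""
    model_name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)

def log_interaction(card: ModelCard, prompt: str, response: str) -> None:
    """Write a structured audit record for each generation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": card.model_name,
        "prompt": prompt,
        "response": response,
    }
    logger.info(json.dumps(record))

card = ModelCard(
    model_name="example-llm-v1",
    intended_use="Drafting summaries for analyst review",
    training_data_summary="Public web corpus; see data sheet",
    known_limitations=["May reflect societal biases", "Not for legal advice"],
)
log_interaction(card, "Summarize the policy memo.", "Draft summary...")
print(json.dumps(asdict(card), indent=2))
```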
When building AI services intended to generate insights to support decision-making, Excella prioritizes solutions that incorporate humans in the loop so that humans remain accountable as the primary decision-makers. Ultimately, AI should empower people to make more informed decisions, rather than replace the decision-makers themselves.
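A minimal sketch of that human-in-the-loop pattern, with a hypothetical generate_draft stub standing in for a model call, might look like this: the AI proposes, and a named reviewer explicitly approves, edits, or rejects before anything is acted on.

```python
def generate_draft(prompt: str) -> str:
    # Stand-in for a call to a GenAI model; assumed for this sketch.
    return f"AI-drafted recommendation based on: {prompt}"

def human_review(draft: str, reviewer: str) -> dict:
    """The human reviewer remains the accountable decision-maker:
    nothing is finalized without an explicit approval."""
    print(f"Draft for {reviewer} to review:\n{draft}")
    decision = input("Approve, edit, or reject? ").strip().lower()
    return {"reviewer": reviewer, "decision": decision, "draft": draft}

if __name__ == "__main__":
    draft = generate_draft("Prioritize maintenance requests for Q3")
    outcome = human_review(draft, reviewer="program.analyst")
    if outcome["decision"] != "approve":
        print("Recommendation held for human revision; AI output not used as-is.")
```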
Learn more about Explainable AI.
Encouraging AI Learning and Adoption
Education and hands-on learning are essential for leveraging GenAI effectively and safely. Melisa advises federal workers and organizations to define their objectives clearly and engage in safe, practical learning experiences.
“Given the novelty of this technology, the most important step and takeaway to get started is to engage in hands-on learning in a safe environment. At Excella, GenAI is a horizontal and cross-functional initiative, which means that we leverage it safely throughout the organization, both internally to boost productivity, as well as externally in the solutions that we build for clients.”
Melisa recommended that federal agencies and organizations support internal training and development when it comes to GenAI. With the increasing demand for these solutions, there is a wealth of knowledge and learning opportunities available at our fingertips, including workshops, certifications, conferences, and online courses.
Excella, for example, recently organized a month-long, asynchronous GenAI Hackathon, inviting all employees to participate. The company-backed Hackathon provided a space for Excellians across all disciplines to learn more about GenAI and develop prototype solutions that solve real-world problems. If they haven’t already, agencies should adopt this mindset, work closely with their team members, and ensure they have all of the tools and education they need to implement GenAI safely and effectively within their organization.
RAG, DORA, and FedRAMP: What Are They and How Are They Used to Create GenAI Solutions?
Melisa shared a variety of concepts that help Excella build high-quality GenAI solutions. One of them is Retrieval-Augmented Generation (RAG), an architecture that underpins many GenAI applications designed to generate relevant, clear, and easy-to-understand responses:
“RAG enhances the capabilities of AI models by incorporating information retrieval mechanisms. That’s why you start off with the retrieval component, retrieving the relevant information, augmenting it to customize a specific, relevant answer, and then generating that answer.”
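The toy pipeline below mirrors that retrieve, augment, generate flow. The keyword-overlap retriever and the generate stub are simplifications assumed for this sketch; a production RAG system would typically use vector embeddings, a document store, and a real LLM endpoint.

```python
DOCUMENTS = [
    "FedRAMP authorizes cloud services for use by federal agencies.",
    "DORA research links frequent deployments to higher-performing teams.",
    "Model cards document a model's intended use and known limitations.",
]

def retrieve(question: str, docs: list, top_k: int = 2) -> list:
    """Retrieval step: score documents by keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def augment(question: str, context: list) -> str:
    """Augmentation step: ground the prompt in the retrieved context."""
    return "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; assumed for this sketch.
    return f"[LLM response to prompt of {len(prompt)} characters]"

question = "What does FedRAMP do for federal agencies?"
answer = generate(augment(question, retrieve(question, DOCUMENTS)))
print(answer)
```

Even in this stripped-down form, the grounding step is visible: the model is asked to answer from retrieved context rather than from its training data alone.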
Melisa explained that Excella’s technologists prioritize applying capabilities from Google’s DevOps Research & Assessment (DORA) program, which studies what drives performance at technology organizations. Melisa noted that one of the key DORA capabilities that Excellians value is “frequent deployments,” which allows Excella and its clients to remain agile and deliver solutions, including GenAI solutions, quickly, while adapting them as needed in lockstep with client requirements. When the safety of GenAI tools is paramount, having an agile and adaptable deployment pipeline ensures that any changes or fixes are swiftly identified and implemented.
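As a small illustration of how the “frequent deployments” capability can be tracked, the snippet below computes deployment frequency from a hypothetical release log; the dates and the deployments_per_week helper are made up for the example.

```python
from datetime import date

# Hypothetical deployment dates pulled from a pipeline's release log.
deployments = [
    date(2024, 5, 1), date(2024, 5, 3), date(2024, 5, 8),
    date(2024, 5, 10), date(2024, 5, 17), date(2024, 5, 24),
]

def deployments_per_week(dates: list) -> float:
    """Deployment frequency: releases per week over the observed window."""
    span_days = (max(dates) - min(dates)).days or 1
    return len(dates) / (span_days / 7)

freq = deployments_per_week(deployments)
print(f"{freq:.1f} deployments per week")  # ~1.8 for the sample dates
```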
Melisa also emphasized that federal clients often work with very sensitive, secured, or classified data sources. For this reason, Excella prioritizes protecting that information, and the safety of the United States and its citizens, by leveraging tools and cloud services that meet the highest federal security standards, such as FedRAMP authorization. Excella also routinely conducts tool evaluations in-house to ensure that the cloud service providers and tools its technologists use are well suited and tailored to each solution. That doesn’t just mean selecting the tool at the beginning – it means regular evaluation throughout, analyzing for accuracy, fairness, robustness, and potential drift once a tool is deployed in a production environment.
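One way to make that ongoing evaluation concrete is a lightweight check that compares production behavior against a baseline. The accuracy threshold and the crude length-based drift signal below are illustrative assumptions, not a prescribed standard; real monitoring would use richer metrics and labeled evaluation sets.

```python
from statistics import mean

def accuracy(predictions: list, labels: list) -> float:
    """Share of responses judged correct against a labeled evaluation set."""
    return mean(1.0 if p == l else 0.0 for p, l in zip(predictions, labels))

def drift_score(baseline: list, current: list) -> float:
    """Crude drift signal: relative shift in average response length
    between a baseline sample and recent production outputs."""
    return abs(mean(map(len, current)) - mean(map(len, baseline))) / mean(map(len, baseline))

# Hypothetical evaluation data for the sketch.
preds, labels = ["approve", "deny", "approve"], ["approve", "deny", "deny"]
baseline_outputs = ["Short grounded answer."] * 50
recent_outputs = ["A much longer, possibly off-topic answer than before."] * 50

if accuracy(preds, labels) < 0.9 or drift_score(baseline_outputs, recent_outputs) > 0.2:
    print("Flag for review: accuracy or drift threshold exceeded.")
```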
Conclusion
Ensuring fairness, transparency, and security from the outset is crucial for GenAI technology’s positive impact, especially within the federal sector. Excella’s structured approach to GenAI serves as a model for organizations looking to adopt this powerful technology responsibly and effectively.