Building a News Fact-Checker AI Agent
Key Takeaways:
- Learn how to create a simple AI agent for fact-checking news articles.
- Discover how to combine LLMs, RAG, and API calls to build powerful AI applications.
- Gain insights into testing and evaluating the quality of AI tools.
Description
Building AI agents that can fact-check news articles is a powerful way to combat misinformation and provide users with reliable information. Combining large language models (LLMs), Retrieval-Augmented Generation (RAG), and API integrations allows AI engineers to create robust applications that validate news content against credible sources. Mastering these techniques equips you with the skills to build intelligent, trustworthy AI systems.
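As a rough illustration of this pattern (not the session's actual code), the sketch below grounds an LLM verdict in retrieved evidence. The `retrieve_evidence` helper, the prompt wording, and the model name are placeholder assumptions standing in for whatever search/news API and model you choose.

```python
# A minimal sketch of the LLM + RAG + API pattern described above.
# Assumptions: an OpenAI-compatible chat model and a hypothetical
# retrieve_evidence() helper standing in for a real search or news API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def retrieve_evidence(claim: str) -> list[str]:
    """Placeholder: query a search/news API and return source snippets."""
    return ["<snippet from a credible source>", "<another snippet>"]


def fact_check(claim: str) -> str:
    """Ask the LLM for a verdict grounded only in the retrieved evidence."""
    evidence = "\n".join(retrieve_evidence(claim))
    prompt = (
        "You are a fact-checking assistant. Using only the evidence below, "
        "label the claim as SUPPORTED, REFUTED, or NOT ENOUGH EVIDENCE, "
        "and say which snippet you relied on.\n\n"
        f"Claim: {claim}\n\nEvidence:\n{evidence}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(fact_check("The Eiffel Tower was completed in 1889."))
```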
In this hands-on code-along session, Jon Bennion, Manager of Applied Machine Learning at The Objective AI, will guide you through building a news fact-checker AI agent from scratch. You’ll learn how to create a simple AI agent, combine LLMs with RAG and API calls, and test and evaluate your application’s performance. The session will also cover best practices for ensuring the quality and accuracy of your AI agent, providing you with practical experience and valuable insights. This webinar is perfect for AI engineers looking to build impactful AI tools with real-world applications.
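To give a flavor of the testing and evaluation step, here is a minimal, hypothetical sketch (not the session's material) that scores an agent against a tiny hand-labelled claim set. In practice you would use a much larger dataset and an evaluation framework such as DeepEval; the `EVAL_SET` examples and the `dummy_agent` stub below are illustrative assumptions.

```python
# A minimal sketch of evaluating a fact-checker against a small labelled set.
# Assumption: your agent is a function that takes a claim string and returns
# a verdict string starting with SUPPORTED / REFUTED / NOT ENOUGH EVIDENCE.
from typing import Callable

# Tiny hand-labelled evaluation set (illustrative examples only).
EVAL_SET = [
    ("The Eiffel Tower was completed in 1889.", "SUPPORTED"),
    ("The Great Wall of China is visible from the Moon with the naked eye.", "REFUTED"),
]


def evaluate(agent: Callable[[str], str]) -> float:
    """Return the fraction of claims the agent labels correctly."""
    correct = 0
    for claim, expected in EVAL_SET:
        verdict = agent(claim)
        if verdict.strip().upper().startswith(expected):
            correct += 1
    return correct / len(EVAL_SET)


if __name__ == "__main__":
    # Replace this stub with your real fact-checking agent.
    def dummy_agent(claim: str) -> str:
        return "SUPPORTED"

    print(f"Accuracy: {evaluate(dummy_agent):.0%}")
```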
Presenter Bio

Jon is a contract AI developer for multinational companies and an expert in rapid prototyping and deployment of AI and ML tools. Previously, he worked on red-teaming LLMs as Program Director for Applied Science, Test and Evaluation at HumaneIntelligence. Before that, Jon worked as a Data Scientist at Amazon, Google, Facebook, Electronic Arts, and Disney. He is a contributor to LangChain and DeepEval, and is the instructor for the 'Developing LLM Applications with LangChain' DataCamp course.