Leading AI Automation at Scale
April 2026
Nate Amidon, Form 100 Consulting
Summary
A practical session for business leaders, managers, and anyone responsible for driving AI automation in their organization, featuring Nate Amidon, president of Form 100 Consulting and former US Air Force C-17 pilot.
AI automation is no longer optional at most organizations. CEOs across manufacturing, services, and consumer goods are pushing for it from the top, but the pressure lands on the middle layer: directors, VPs, senior managers, and program managers who are expected to deliver without additional resources or clear direction. This session addresses how to get unstuck.
Amidon's core argument is simple: the technology is not the hard part. Tools capable of meaningful AI automation already exist inside most enterprise environments — Microsoft 365 Copilot agents, ChatGPT custom GPTs, Gemini Gems — and none of them require an engineering background to build. What most organizations lack is a process for implementing automation in a way that creates real value, builds team confidence, and holds up over time.
Amidon walks through a four-principle framework developed through hands-on client work across industries: start with a human-first approach, define value before building, implement incrementally, and design for sustainability. Together, these principles shift the focus from chasing the most sophisticated automation to building the organizational muscle to improve continuously. Watch the full webinar for the complete presentation, including a real client case study and audience Q&A on AI adoption and change management.
Key Takeaways
- You don't need to be an engineer to implement AI automation. Lightweight tools like Microsoft 365 Copilot agents already exist in most enterprise environments and take minutes to build and share across a team.
- How you implement AI automation matters more than what you automate. Building a repeatable process positions your organization to capture new capabilities as they emerge, rather than constantly starting over.
- Every automation should meet a clear value definition: save time, improve quality, or both. It should not create bottlenecks elsewhere in the workflow. Automating a step in isolation without mapping the full process often moves the problem rather than fixing it.
- Start with one high-value, low-complexity step and ship that before moving on. Incremental rollout builds team confidence, tightens feedback loops, and avoids building something too complex to maintain.
- Automations are software products. They need an owner, documentation, and regular review. An outdated agent feeding stale information can cause more harm than no automation at all.
Deep Dives
The Lightweight AI Automation Tools Already in Your Organization
One of the most common barriers to AI automation is the assumption that it requires engineering expertise. Amidon's opening point is direct: capable tools already exist inside most enterprise environments, and most teams don't know they're there.
For organizations running Microsoft 365 (roughly 70% of Fortune 500 companies), the starting point is the Copilot agent builder inside Teams. Open Copilot, go to the agents panel, and click create. From there, you write instructions describing what the agent should do, load relevant documents or websites as source material, and decide whether the agent draws on the internet or stays limited to your uploaded content. The process takes minutes.
Amidon categorizes the agents he builds for clients into two types. An information agent acts as a searchable chatbot over a defined body of documentation — useful when teams need fast, consistent answers from a large internal knowledge base. A process agent is trained on a specific workflow step and allowed to use broader sources to assist with that task. "You can build an agent yourself, and then you can share it out with other people," Amidon said. That sharing function is what makes these tools worth deploying at scale. One well-built agent standardizes how an entire team approaches a recurring task.
For teams not on Microsoft, the same concept applies: ChatGPT's custom GPTs and Gemini Gems work the same way. In each case, the underlying principle is pre-loading a model with context and process knowledge, then distributing the result to people who don't hold that expertise themselves.
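The pre-loading principle can be sketched in a few lines. This is an illustrative model of what a Copilot agent, custom GPT, or Gem does under the hood, not the actual implementation of any of these products; the function name and message format here are assumptions for illustration (the message shape mirrors the common chat-API convention of system and user roles).

```python
# Illustrative sketch of the "pre-loaded context" principle behind
# Copilot agents, custom GPTs, and Gems: an agent is a fixed set of
# instructions plus reference material, bundled once by the builder so
# that end users only supply their question.

def build_agent_prompt(instructions, documents, user_question):
    """Assemble the message list an agent would send to a chat model."""
    context = "\n\n".join(
        f"--- {title} ---\n{text}" for title, text in documents.items()
    )
    return [
        {"role": "system",
         "content": f"{instructions}\n\nReference material:\n{context}"},
        {"role": "user", "content": user_question},
    ]

# The builder writes the instructions and loads the documents once...
messages = build_agent_prompt(
    instructions="Answer onboarding questions using only the reference material.",
    documents={"PTO policy": "Employees accrue 1.5 days per month."},
    user_question="How fast does PTO accrue?",
)
# ...and everyone the agent is shared with reuses the same context.
```

Sharing the agent is what standardizes the task: every user gets the same instructions and the same source material, regardless of their own expertise.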
Amidon is clear about what these tools are not: a set-it-and-forget-it solution. They don't connect to back-end systems automatically. They're still language models, and they will make mistakes. They require monitoring and maintenance. But for teams who assumed AI automation required a major technical lift, these are the tools that lower the barrier enough to start. Watch the full session for a live walkthrough of agent creation inside Microsoft 365 Copilot.
Why How You Automate Matters More Than What You Automate
"I believe that how you implement is more important than what you implement," Amidon said. "Because the capabilities are advancing so quickly, how you do this culturally is going to matter."
The reasoning is direct. AI tools are improving fast enough that what is difficult to automate today may be simple in twelve months. Organizations that chase the most sophisticated available automation will find themselves constantly behind. Organizations that build a structured, repeatable process for identifying, testing, and deploying automations will be ready to absorb new capabilities the moment they arrive.
This is what Amidon means by building organizational muscle. The goal of early automation work is not just the automation itself. It is the team's growing confidence with the process of automating: how to spot a good candidate, how to define value, how to test before full rollout, how to document and maintain. Every small automation done well teaches the team something they'll use on the next one.
Amidon drew a parallel to agile software development, which he has applied at Boeing, Alaska Airlines, and Microsoft. The same principles — incremental delivery, tight feedback loops, continuous improvement — apply to building an AI automation program. You iterate, you learn, you get better.
The four principles he outlines formalize this into a framework: start with a human-first AI approach, require that every automation meet a clear value definition, implement one step at a time, and build for sustainability from the start. Each principle targets a specific failure mode: team resistance, misdirected effort, over-complex builds that never ship, and automations that degrade without anyone noticing. The case study later in the session makes this concrete. The product management team Amidon's firm worked with left with four well-scoped agents, thirty hours of weekly time savings, and a team that knows how to think about AI adoption when the next capability arrives.
Defining Value Before You Build: The AI Automation Strategy Mistake Most Teams Make
One of the most underappreciated steps in any AI automation strategy is the one that happens before anything is built: defining what value means.
Most organizations default to vague targets. "We'll improve efficiency by 15% this year," as Amidon described it. The problem is: 15% of what? Without a clear definition, automation work drifts toward whatever seems impressive rather than whatever solves a real problem.
Amidon's test has two parts. An automation is worth building if it saves people time or improves quality. And it must not create bottlenecks elsewhere. That last condition is the one most teams miss. "You automate your step, and you think, this is great. It's hands-off, I can do this so fast. And then you basically just push a bow wave of AI output to the next person down the value stream." One team's efficiency gain becomes the next team's pile-up.
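Amidon's two-part test is simple enough to write down as a checklist. The encoding below is our own illustration of the test he states qualitatively; the parameter names are assumptions, not his terminology.

```python
# A minimal sketch of the two-part value test: an automation is worth
# building only if it delivers value (saves time, improves quality, or
# both) AND does not push a "bow wave" of output onto the next step in
# the value stream.

def passes_value_test(saves_time, improves_quality,
                      creates_downstream_bottleneck):
    """Return True only if the automation delivers value without
    creating a bottleneck elsewhere in the workflow."""
    delivers_value = saves_time or improves_quality
    return delivers_value and not creates_downstream_bottleneck

# Fast for your step, pile-up for the next team: fails the test.
assert passes_value_test(saves_time=True, improves_quality=False,
                         creates_downstream_bottleneck=True) is False
```

The second condition is the one most teams skip, which is why mapping the full workflow has to come before the build.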
Amidon connected this to manufacturing theory, specifically Eliyahu Goldratt's book "The Goal," recommended to him by a client in manufacturing. Written in the early 1980s about factory automation, it showed how optimizing individual steps in a production line often made overall output worse. The parallels to AI automation are direct. Speed up one step in a business process without mapping the full workflow, and you move the bottleneck rather than remove it. Lean Six Sigma and theory of constraints thinking, Amidon argued, will be highly valuable skills over the next few years.
The recommendation: map the process before you touch it. Make the full workflow visible. Identify where the real constraints are. Then ask whether the thing you are about to automate needs to exist at all. Amidon cited the same point Elon Musk has made: "Don't optimize something that shouldn't exist." Map it, simplify it, then automate it.
For teams ready to start, Amidon recommends using an agent canvas — a lightweight document that captures the problem statement, success criteria, business value, and scope. Treating automation projects like software products, with a defined purpose and defined boundaries, reduces the risk of building something that creates more work than it saves.
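One way to make the agent canvas concrete is a structured record that must be filled in before an automation ships. The field names below are our interpretation of the canvas Amidon describes (problem statement, success criteria, business value, and scope), not an official template from his firm.

```python
# A sketch of the "agent canvas" as a structured record. Filling it in
# completely acts as the quality gate Amidon describes: no sharing an
# automation across the organization until this thinking is done.
from dataclasses import dataclass, field

@dataclass
class AgentCanvas:
    name: str
    problem_statement: str   # what pain this automation removes
    success_criteria: str    # how we'll know it worked
    business_value: str      # time saved, quality improved, or both
    in_scope: list = field(default_factory=list)
    out_of_scope: list = field(default_factory=list)

    def is_complete(self):
        """Quality gate: every core field must be filled before
        the automation is distributed to the team."""
        return all([self.problem_statement, self.success_criteria,
                    self.business_value, self.in_scope])
```

Because the canvas survives after the original builder moves on, it doubles as the maintenance reference Amidon returns to in his sustainment framework.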
Incremental AI Implementation: The Case Against Jumping to Jarvis
When organizations decide to pursue AI automation, the instinct is to go big. Amidon hears this from clients regularly: they arrive with complex, multi-system processes and want them fully automated. "You want Jarvis," he told them, referencing Iron Man's AI. "That's not where we're at."
The case for incremental AI implementation goes beyond managing technical complexity. It covers change management, feedback quality, and organizational readiness — all of which take time to develop.
The method is straightforward: map the process, find one step that is both high value and easy to automate, and do only that one step. Once the team has used it, gathered feedback, and gotten comfortable, move to the next. "The process of doing it incrementally is more important than how much value you get from that one step," Amidon said.
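The selection step can be sketched as a simple scoring pass over the mapped process. Amidon does not prescribe a numeric scale; the 1-5 scores and the value-to-complexity ratio below are assumptions for illustration of "high value, easy to automate."

```python
# A sketch of the step-selection heuristic: map the process, score
# each step, and automate only the single best candidate first.

def pick_first_step(steps):
    """Return the step with the best value-to-complexity ratio.
    `steps` maps step name -> (value_score, complexity_score),
    each scored 1 (low) to 5 (high)."""
    return max(steps, key=lambda s: steps[s][0] / steps[s][1])

process = {
    "draft status report": (5, 1),    # high value, easy: good first pick
    "reconcile two systems": (5, 5),  # high value, but a Jarvis-sized build
    "format meeting notes": (2, 1),   # easy, but low payoff
}
first = pick_first_step(process)  # → "draft status report"
```

Only after that one step has been used, reviewed, and absorbed by the team does the next candidate get picked.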
A tighter feedback loop is one of the real benefits. How people use an automation in practice rarely matches how it was designed. Amidon's case study makes this plain: the first user they gave the agent to used it in an entirely different way than expected. They updated it, then rolled it out. That kind of discovery is only possible when you test with one person before scaling to a team.
For pilot testing, Amidon recommends starting with your most competent team member in that area. They are the busiest, the most motivated to save time, and the best positioned to spot unintended consequences. They also carry credibility. When the person the team respects adopts and endorses the tool, broader AI adoption follows with less friction.
The expected outcome of incremental implementation is 30% automation of a given process with current tools, not 100%. That is not a failure. "While you're waiting, you're still getting the value of 30% automation." When the tools improve — when connecting agents to back-end systems becomes easier — you already know what to automate next, because you've done the process mapping and your team has the confidence to move fast.
Sustainability: Why AI Automations Fail After They Ship
Most discussions of AI automation focus on building. Far fewer address what happens afterward: whether the automation continues to work, stays accurate, and remains relevant as business needs shift. Amidon considers this the most underaddressed challenge in the field.
The problem is predictable. An automation built on documentation that gets updated, processes that evolve, or team members who move on will degrade over time. Outdated agents don't just become useless. They become harmful. "They not only could become not valuable, they could become negative value," Amidon said. "They can make things worse." An agent feeding outdated information with the confidence of a current one is a failure that is easy to miss until it causes real damage.
Amidon's sustainment framework has four components. The first is the agent canvas: a document that captures what the automation is for, how it was built, and why the instructions are written the way they are. This serves as both a quality gate — teams should not distribute automations across an organization without doing this level of thinking — and as a maintenance reference when the original builder moves on.
The second is ownership tracking. Every automation needs a named owner responsible for updates. If that person leaves, someone else needs to know the automation exists and where the documentation lives.
The third is a user guide, built with the help of AI. It does not need to be long — just enough to explain the agent's purpose, its scope, and how to use it correctly.
The fourth is a regular validation check, run quarterly or every six months. Verify the automation still works as intended, that the source material is current, and that the use case has not shifted.
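The ownership and validation components can be combined into a lightweight registry. This is a sketch of one way to operationalize Amidon's checklist, with illustrative names and a quarterly interval taken from his suggested cadence.

```python
# A sketch of the sustainment checklist as a registry: every automation
# gets a named owner and a last-validated date, and a periodic sweep
# flags anything overdue for review before it drifts into the
# "negative value" territory Amidon warns about.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # quarterly cadence

automations = [
    {"name": "onboarding-info-agent", "owner": "j.doe",
     "last_validated": date(2026, 1, 10)},
    {"name": "status-report-agent", "owner": "a.khan",
     "last_validated": date(2025, 6, 1)},
]

def overdue_for_review(registry, today):
    """Return the names of automations past the review interval,
    i.e. candidates for stale source material or drifted use cases."""
    return [a["name"] for a in registry
            if today - a["last_validated"] > REVIEW_INTERVAL]

stale = overdue_for_review(automations, today=date(2026, 4, 1))
# "status-report-agent" has gone roughly ten months without a check.
```

The named owner field matters as much as the date: if the owner leaves, the registry is how the next person discovers the automation exists at all.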
Treat automations like software products. They get built, deployed, maintained, and eventually retired. Organizations that treat them as permanent infrastructure will find themselves relying on things that no longer do what they think they do. Build that lifecycle in from day one, and the sustainability problem stays manageable. Watch the full webinar to see how Amidon's firm applied this framework with a real product management team.
