When AI Becomes Both a Bridge and a Backbone: An Engagement Manager Perspective
- Patty Greer
- Jul 8
- 5 min read

As an Engagement Manager at Accelerated Focus — a role that blends project management, business analysis, agile coaching, and product strategy — I led a high-impact project in the State & Local government sector. The initiative aimed to modernize document processes and embed AI in both historical data migration and forward-looking business operations. The challenge: migrate over 20,000 PDF documents — ranging from 1978 to 2024 — into a new digital system and embed AI into a live application that would manage those documents in real time. This wasn’t a simple data migration. It was a transformation of how the agency accesses, understands, and interacts with its information.
Two Goals, One AI Engine
The project had two tightly coupled objectives:
Build a business application to manage document-related workflows from start to finish.
Enrich and migrate 20,000 legacy documents into that system with searchable, structured metadata.
Manual review wasn’t feasible at migration scale; it would have added months of effort. The documents came in every format imaginable: typewritten, scanned, handwritten, or born-digital. So we used LLMs (Large Language Models) and indexed data to help the Product Owner identify and extract eight metadata fields (including “date” and “document number”), transforming static files into actionable, searchable records.
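The extraction step can be sketched in miniature. What follows is a minimal, hypothetical example, assuming an LLM that returns JSON; the field names and helper functions are illustrative, not the project's actual schema:

```python
import json

# Hypothetical metadata schema for LLM-based extraction. These eight
# field names are illustrative stand-ins, not the project's real ones.
FIELDS = ["document_date", "document_number", "title", "author",
          "department", "document_type", "page_count", "subject"]

def build_extraction_prompt(document_text: str) -> str:
    """Ask the model for exactly the expected fields as one JSON object,
    with null for anything it cannot find (discourages invented values)."""
    field_list = ", ".join(FIELDS)
    return (
        "Extract the following metadata fields from the document below: "
        f"{field_list}. Respond with a single JSON object using exactly "
        "those keys; use null for any field you cannot find.\n\n"
        f"Document:\n{document_text}"
    )

def parse_extraction(raw_response: str) -> dict:
    """Normalize the model's JSON reply against the expected schema."""
    data = json.loads(raw_response)
    # Keep only known keys and fill missing ones with None so downstream
    # migration code always sees a complete, predictable record.
    return {field: data.get(field) for field in FIELDS}
```

Pinning the schema in the prompt and normalizing every reply to a fixed set of keys keeps the migration pipeline predictable even when the model omits a field or adds extras.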
But the work didn’t stop there. The same AI model also powered real-time similarity search and filtering in the new application. This dual-use setup (migration and runtime) meant the models had to be accurate, scalable, and user-friendly in both contexts.
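To give a rough sense of the dual use: embeddings computed once during migration can be reused at runtime for similarity search. A toy sketch with plain cosine similarity (a real deployment would use a vector index; the names here are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_k_similar(query_vec, index, k=3):
    """Rank documents against a query vector.

    `index` is a list of (doc_id, embedding) pairs built at migration
    time; the same vectors serve runtime search and filtering.
    """
    scored = [(doc_id, cosine_similarity(query_vec, vec))
              for doc_id, vec in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

Precomputing the vectors once is what makes the "one AI engine, two goals" setup economical: migration pays the embedding cost, and runtime search just reads it back.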
Lessons Learned
1. Scope Is a Living Thing
We entered the project with a defined objective: to extract eight properties from 20,000 PDFs and make them searchable. But as soon as we began processing files from different eras, the plan had to adapt. Layouts varied dramatically, text was inconsistently formatted, and scanning quality created ambiguity.
Takeaway: In AI projects, requirements shift as your understanding of the data matures. Flexibility isn’t a luxury; it’s a necessity. Scope evolves with data understanding, so our planning had to account for continuous discovery and refinement, not just in sprint retros but often mid-sprint. For the tech leads reading this, that translates to iteratively grounding the models with the right quality data.
2. Data is Not Just Fuel — It’s the Roadmap
We started with a sample of 10 documents to guide the model prompt and property selection. While this helped define the initial fields, it wasn’t enough to represent the full dataset. Later, as we scaled up, we encountered handwritten notes, multi-date documents, and unexpected content structures.

Takeaway: Sample broadly and early. What you see in 10 files may not prepare you for what’s in 20,000. Assess the client’s data as early as possible; in this case, the government data sampling started small. If you must start small, recognize that you will eventually need to grow the dataset.
3. EMs Must Bridge Architecture and Execution
Our team was lean: an architect and two developers. With no data scientists or data analysts, I often played the role of translator, ensuring architectural designs mapped to operational reality. For instance, while the architect focused on scalability, the developers were working out how to parse handwritten dates in unpredictable places. Regularly navigating these messy edge cases became a crucial part of my role.
Takeaway: On AI projects, Engagement Managers at Accelerated Focus are the alignment agents, balancing vision with implementation constraints; the risks and impact of edge cases need to be accounted for during estimation.
4. Usability Begins with the UI

Extracting data is one thing. Making it intuitive for users is another. We worked closely with the Product Owner and UI team to define how fields like “document date” should be displayed. That collaboration shaped how prompts were tuned and how post-processing logic worked.
Takeaway: Early mockups revealed the need for a single, consistently labeled “document date,” even when multiple dates existed in a file. Align early and bring usability planning upstream: mockups served not just as design tools but as working prototypes to confirm how users would interact with each extracted property.
5. Accuracy Is a Spectrum, Not a Threshold
This was our first AI project with this client, so building trust and confidence ran throughout the AI-driven property extraction process. There were no historical benchmarks, just a dataset of 20,000 PDFs spanning decades, with varying levels of structure, clarity, and consistency. That meant accuracy was a moving target; we began with no idea what level of extraction accuracy we could achieve. For example, while “date” extraction performed reasonably well, “document number” varied wildly due to inconsistent labeling across departments and decades.
Takeaway: In AI-driven extraction work, assume you’ll be learning in real time. Don’t overcommit to arbitrary accuracy goals. Instead, focus on understanding which fields matter most to end users and improve functional reliability, not perfection. Focus energy on what matters most to the user experience.
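One practical way to treat accuracy as a spectrum is to score each field separately against a small hand-labeled sample, so a weak field like document number surfaces on its own instead of hiding inside one aggregate number. A minimal sketch (the function and field names are illustrative):

```python
def per_field_accuracy(predictions, ground_truth, fields):
    """Compare extracted records to hand-labeled ones, field by field.

    Returns a dict of accuracy per field, so uneven performance
    (e.g., dates extracting well while document numbers struggle)
    is visible and can be prioritized by user impact.
    """
    totals = {f: 0 for f in fields}
    correct = {f: 0 for f in fields}
    for pred, truth in zip(predictions, ground_truth):
        for f in fields:
            totals[f] += 1
            if pred.get(f) == truth.get(f):
                correct[f] += 1
    return {f: (correct[f] / totals[f]) if totals[f] else 0.0
            for f in fields}
```

Tracking per-field scores over time also gives the client a concrete, honest picture of progress instead of a single number that overpromises.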
6. AI Indexing Isn't Magic - It Needs Guardrails
The AI index does not manage complexity out of the box. Some documents were structured; others were scanned with annotations, stamps, or even handwritten notes. In one case, a critical date was scribbled in the margin, and the index simply couldn’t interpret it. Instead of being a helpful identifier, it became a source of noise. We responded by creating fallback logic and human-in-the-loop workflows for low-confidence outputs.

Takeaway: For future projects, plan from the start to create tiered document categories (e.g., high-confidence structured docs vs. unpredictable layouts), tune models by document type, and allow for human-in-the-loop review where confidence is lower.
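The tiering and human-in-the-loop routing described above could take a shape like this sketch; the 0.8 threshold, category names, and function names are assumptions, not the project's actual values:

```python
# Assumed review cutoff for this sketch, not the project's real value.
REVIEW_THRESHOLD = 0.8

def tier_for(doc_kind):
    """Map a document category to an expected-confidence tier
    (the category names are hypothetical examples)."""
    structured = {"born_digital", "typewritten"}
    return "high_confidence" if doc_kind in structured else "unpredictable"

def route_record(confidences, threshold=REVIEW_THRESHOLD):
    """Send a record to human review if any extracted field scored
    below the threshold; otherwise let it migrate automatically."""
    low = sorted(f for f, score in confidences.items() if score < threshold)
    if low:
        return "human_review", low
    return "auto_migrate", []
```

The point of the guardrail is cheap triage: the bulk of clean documents flow through untouched, while reviewer time concentrates on the margin-scribble cases the index cannot interpret.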
7. AI Doesn’t End at Launch
The models didn’t just serve the migration; they became part of the application’s real-time functionality. This meant the AI models we built had to be dependable, scalable in real time, continuously learning from user behavior, and capable of serving search and filtering at runtime.
Takeaway: If AI is embedded in your product, it needs a lifecycle plan like any other system component.
What I Will Do Differently Next Time
Expand Initial Sampling: Test on a broader, representative set of documents upfront to reveal complexities early.
Prioritize UI Alignment: Use UI mockups early, not just for design, but to align business expectations clearly with AI model capabilities.
Set Meaningful Metrics: Establish usability and impact metrics from the start, with clear processes for human review, rather than focusing solely on technical model precision.
Allow Time for Iteration: Schedule deliberate, ongoing cycles of prompt refinement and team learning to adapt quickly to evolving data challenges.
Plan for Long-Term AI Ownership: Define post-launch roles and ongoing support clearly, anticipating continuous model refinements and user-driven improvements.
Final Reflection

This project showed me that AI is not a plug-and-play solution. It’s a collaborative, evolving system that requires attention to data quality, user needs, and ongoing adaptability. Managing it successfully means not just keeping timelines and deliverables on track but guiding a team through complexity with clarity and intent.
For Engagement Managers stepping into AI projects, the role isn’t just about managing scope or building the perfect system, but more about aligning people, data, and outcomes around what’s truly useful.