Focused on delivery:
We embedded with a global multi-strategy investment firm's engineering team to build the data infrastructure, agents, and evaluation systems powering their AI research platform.
Focused on results:
Analysts gained access to over 1,000 synced research notes with a read-through agent capable of identifying market-wide impact from company-level events, and a reliable extraction pipeline backed by custom evals.
Focused on partnership:
We worked inside the firm’s existing architecture and tooling choices, contributing as embedded experts rather than outside vendors pushing a prescribed approach.
Built to compete at the highest levels of modern finance, this global multi-strategy investment firm combines deep fundamental research, quantitative methods, and artificial intelligence to generate alpha across global markets. Their team operates at the intersection of finance and technology, with a deliberate commitment to building proprietary infrastructure rather than relying on off-the-shelf solutions.
When Focused partnered with the firm, their engineering team was building an AI-powered research platform designed to augment analysts, making research faster, more connected, and more actionable. The ambition was real. So were the infrastructure challenges standing between that ambition and production.
The firm’s research platform was built entirely in-house: no third-party AI frameworks, no off-the-shelf orchestration layers. That was a deliberate architectural choice, reflecting both the firm's technical sophistication and its need for control in a high-stakes financial environment.
That homegrown foundation was a strength. It was also the source of the platform's hardest problems.
Analyst knowledge lived in Microsoft OneNote notebooks, which contained years of meeting notes, company research, and transcripts. Agents were being built, but their outputs weren't reliably shaped for the structured, reviewable format analysts needed. And without a formal evaluation framework, improving agent quality meant guessing.
The platform had real momentum. What it needed was the infrastructure and engineering depth to make that momentum durable.
Focused embedded directly with the firm’s engineering team, operating as hands-on technical partners rather than an external delivery group. We contributed to architecture, agent development, infrastructure, and evals, while respecting and working within the firm’s existing tooling and conventions.
Our role was not to introduce a new framework or redirect the team's technical approach. It was to go deep on the hardest problems and bring the engineering expertise to solve them.
Technical Approach: Three Integration Layers
Enterprise AI research platforms are hard because of everything that surrounds them: the data infrastructure, the system integrations, the feedback loops that let domain experts shape what the AI actually produces.
At the firm, Focused worked across all three layers.
Data Layer: Making Research Knowledge Accessible
The most foundational challenge was data. Analysts had accumulated extensive research in Microsoft OneNote with notebooks full of meeting notes, company research, transcripts, and commentary. None of it was accessible to the AI platform.
Focused built the OneNote Sync system to change that.
The Microsoft Graph API imposed aggressive rate limits, both per-minute and per-hour. With the substantial amount of content being synced, the existing architecture couldn't handle the load. We reworked the database layer, rewrote key parts of the UI, and designed a double-nested ingestion system using Inngest, a serverless, event-driven function platform. The design respected both tiers of Microsoft's rate limits simultaneously, even without a native two-tier limit system to rely on.
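The core admission check behind that design can be sketched in TypeScript: a request proceeds only when both the per-minute and per-hour windows have capacity. The class name, limit values, and sliding-window approach below are illustrative assumptions, not the production implementation.

```typescript
// Illustrative two-tier rate limiter. A call is admitted only when BOTH
// the per-minute and per-hour windows have remaining capacity. The limits
// here are hypothetical, not Microsoft's actual Graph API quotas.
class TwoTierLimiter {
  private timestamps: number[] = [];

  constructor(
    private perMinute: number,
    private perHour: number,
  ) {}

  // Returns true and records the call if both windows allow it.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps older than one hour; nothing older affects either tier.
    this.timestamps = this.timestamps.filter((t) => now - t < 3_600_000);
    const lastMinute = this.timestamps.filter((t) => now - t < 60_000).length;
    if (lastMinute >= this.perMinute || this.timestamps.length >= this.perHour) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}
```

In the actual system, an event-driven architecture like Inngest's lets each sync step wait out a denied acquire and retry, rather than dropping the page.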
The result: analysts could connect their Microsoft account, select a notebook, and sync their research directly into the platform. Over 1,000 OneNote notes entered the system, making research that had previously existed only in personal notebooks now accessible for AI-assisted analysis.
Focused also built Spinnaker, a structured data layer that gave analysts access to historical company data for forecasting and assessment. It’s the kind of spreadsheet-native, data-intensive work that sits at the core of fundamental research.
Application Layer: Agents That Work in the Real World
With data accessible, Focused contributed to the agents designed to reason over it.
The Read Through Agent was built to identify how events affecting one company might ripple across similar companies in the same market segment, a meaningful research acceleration for analysts tracking correlated positions. The agent was built using the Vercel AI SDK, with traces captured in Laminar for observability into agent execution.
The core engineering challenge was output reliability. The agent needed to return structured data in a specific JSON shape that downstream systems could consume. Getting there required systematic debugging. Where were structured output constraints breaking? How should the prompt be adjusted to produce a valid schema? How do you handle edge cases without degrading response quality? These are the questions Focused worked through methodically, turning an agent that produced plausible-looking output into one that produced dependable, correctly shaped output every time.
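One way to make a shape requirement like that enforceable is a strict parse-and-validate step between the model and downstream consumers, so malformed output fails loudly instead of flowing on. The field names below (ticker, impact, rationale) are hypothetical stand-ins for illustration, not the firm's actual schema.

```typescript
// Illustrative schema gate for an agent's structured output. Field names
// are hypothetical; the point is that anything off-shape is rejected.
interface ReadThroughItem {
  ticker: string;
  impact: "positive" | "negative" | "neutral";
  rationale: string;
}

// Parse raw model text and reject anything that does not match the
// expected shape exactly.
function parseReadThrough(raw: string): ReadThroughItem[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("expected a JSON array");
  return data.map((item, i) => {
    const ok =
      typeof item?.ticker === "string" &&
      ["positive", "negative", "neutral"].includes(item?.impact) &&
      typeof item?.rationale === "string";
    if (!ok) throw new Error(`item ${i} does not match the expected schema`);
    return item as ReadThroughItem;
  });
}
```

A gate like this turns "plausible-looking" output into a binary signal: either the response satisfies the contract downstream systems depend on, or the failure surfaces immediately where it can be debugged.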
Focused also worked on the company notes extraction agent, which pulled relevant company mentions from analyst research. The challenge here was specificity: keeping the output focused on substantive company analysis while excluding noise like incidental advertising mentions.
Human Layer: Evals as the Feedback Loop
In a financial research context, "good enough" isn't a standard. Analysts are experts. They know immediately when an agent's output doesn't tell them what they need to know. Without a formal feedback mechanism, that judgment stays in someone's head rather than improving the system.
Focused introduced a custom evaluation framework to close that loop.
For the company notes extraction agent, we built custom evals around prompt adjustments, defining expected output, running the agent against known inputs, and iterating on the prompt until results were reliably accurate. By measuring against specific, expected outputs rather than probabilistic signals, we had a precise mechanism for knowing when the agent was working and when it wasn't.
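That eval loop can be sketched as a small harness: run the agent over labeled cases, score each case by exact match against the expected output, and report accuracy. The case shape and agent signature below are hypothetical, chosen to illustrate the deterministic-comparison idea.

```typescript
// Illustrative eval harness: score an extraction agent against labeled
// cases with a deterministic, order-insensitive exact match. The case
// shape here is a hypothetical stand-in.
interface EvalCase {
  input: string;
  expected: string[]; // companies the extraction agent should return
}

async function runEvals(
  agent: (input: string) => Promise<string[]>,
  cases: EvalCase[],
): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    const actual = await agent(c.input);
    // Exact match on the sorted sets, so a prompt change is scored,
    // not eyeballed.
    const exp = [...c.expected].sort();
    const act = [...actual].sort();
    if (act.length === exp.length && act.every((v, i) => v === exp[i])) {
      passed++;
    }
  }
  return passed / cases.length; // accuracy across the eval set
}
```

With a harness like this, each prompt adjustment produces a number that can be compared against the last run, which is what makes iteration systematic rather than anecdotal.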
The same discipline applied to the Read Through Agent. Output was structured to make human review fast and legible. The question driving design was always whether the output told analysts what they actually needed to know. Evals created the mechanism to answer that question systematically rather than by feel.
The Takeaway
Financial services is one of the most demanding environments for AI systems. Data is proprietary. Outputs carry real stakes. Analysts are domain experts who will immediately recognize when a system isn't working at their level.
The firm built their stack on purpose. Working inside that environment, without the scaffolding of standard frameworks and in a codebase built entirely in-house, required Focused to operate as true engineering partners, not vendors with a preferred toolset.
The result was production-grade work across data infrastructure, agent development, and evaluation design: the same integrated approach Focused brings to every engagement, regardless of stack. Data that's reachable. Agents that integrate cleanly. Humans who can trust and correct what the system produces.
The framework doesn't matter. The integration depth does.
Your systems are ready. Your agent isn't. Let's fix that.
We're the experts in agents and integrations. We'll tell you if we can help.
