On org design

Most analytics dysfunction traces back to org design, not talent. I’ve seen this consistently enough that it’s become the first place I look when a data function isn’t performing.

The specific failure mode I’ve had to unwind more than once is organizing analytical resources around business pillars, where each function controls its own dedicated analytics capacity. It seems clean on paper. In practice, it means the people with the most context on what the business actually needs (usually product managers and commercial leads) have the least ability to act on it. The analysts are locked inside silos controlled by people who can see only one part of the problem.

The fix isn’t a better coordination process. It’s changing who controls resources and what they can see from that position.

The organizations I’ve built tend to share a few properties: analytical capacity that can be directed across functional boundaries, a decision-making cadence with real authority built into it, and an architecture where each new capability costs less to build than the last one.

On AI

My view on AI strategy is shaped by watching what actually works in production versus what works in a pilot.

The organizations doing well in AI right now are not necessarily the ones with the best models. They’re the ones that built the infrastructure to govern AI at scale: access controls, monitoring, accountability for when outputs are wrong, and a framework that doesn’t need to be rebuilt from scratch for every new use case.

Most enterprise AI investment is still allocated as if everything is exploratory. That’s why most of it doesn’t compound. The discipline shift that matters is treating the majority of AI investment with the rigor you’d apply to any production system: clear success metrics, cost attribution, defined ownership, and a feedback loop that improves the next deployment.

I’m more interested in the governance layer than the model selection question. The model question resolves itself relatively quickly. The governance layer is what determines whether AI stays with the team that built it or actually scales across the organization.

On vendor management

The instinct in vendor negotiations is to protect leverage by withholding information. In my experience, that usually produces worse outcomes.

Vendors have more flexibility than just unit price. They can adjust term structure, ramp commitments, ecosystem access, and roadmap alignment. Most of that flexibility only becomes visible when you give them real information about where you’re headed and why.

That doesn’t mean entering negotiations without discipline. It means being transparent about your roadmap and growth trajectory in exchange for terms that reflect a genuine partnership rather than a one-sided contract.

On team size

I’ve managed the tension between large, specialized teams and lean, high-leverage ones long enough to have a clear view.

The economics favor lean teams in most contexts where speed and adaptability matter. This isn’t an argument for understaffing. It’s an argument for being honest about when additional headcount adds capability versus when it adds coordination overhead.

A focused team with a real mandate, good data access, and clear accountability will outperform a larger, matrixed organization more often than most leaders expect. I’ve seen it go both ways, and I’m skeptical of scaling team size ahead of structural readiness.
More on these topics in the Insights section.