The Daedalus Approach to LLMs

Why read this?

You want to understand how to use LLMs effectively in real product work, not just in theory.

Everyone’s experimenting, but you need a view from the field to know what actually delivers value.

Tesh Srivastava

June 16, 2025

7 min read

Tools Don’t Think. People Do.

At Daedalus we treat large language models exactly as they deserve to be treated: as tools. Powerful, fast, occasionally dazzling, but tools nonetheless.

Whenever a project hits the slog of repetitive extraction or refactoring, we hand those low-stakes chores to the machine and free our senior engineers to tackle the parts where judgment and experience still rule. Picture our data-monetisation work: the model can harvest and label raw behavioural signals at speed, yet as soon as we need to translate those signals into risk-weighted tables that map each user to the right financial product, the keyboard passes back to a human who understands markets, regulation and consequence.

We also rely on LLMs as cartographers. Before we sketch a single wireframe, we ask the model to roam the terrain: survey a new vertical, cluster emergent personas, or surface how a buyer in one segment differs from a user in another. The output is rarely finished wisdom, but it is a serviceable map that lets us plan the first steps of development in hours rather than weeks. The same scouting instinct powers our commercial research. Years ago we pored over earnings calls and sector reports as VC analysts; today an LLM does the first pass. Because we have spent careers recognising what “good” analysis looks like, we know when to trust its summary and when to dig deeper.


Clear Boundaries, Smart Stack

Where we draw the line is data privacy. We do not feed sensitive client information into public models, full stop. Until the prevailing architectures can guarantee isolation and auditability, confidential work remains a human-only affair.

Inside that boundary our stack stays fluid:

  • GPT for general reasoning
  • GitHub Copilot for inline code suggestions
  • Perplexity for rapid literature sweeps

At the top end of the market, the models leapfrog each other almost daily, so it’s important to keep abreast of what’s currently best-in-class and adapt your usage accordingly.


AI Elevates Expertise, It Doesn’t Replace It

The broader lesson is one that low- and no-code veterans already learned.

A slick interface can lower barriers, but it cannot erase the need for structured thinking. LLMs elevate great engineers to near-mythic productivity; they do not turn poor engineers into competent ones.

Unknown unknowns still lurk in every build, and only people who grasp the shape of the problem can spot when the machine’s answer rings hollow. Stripe did not eliminate the complexity of payments; it wrapped that complexity in a product layer. LLMs do the same for artificial intelligence: indispensable, transformative, yet ultimately subordinate to the expertise that wields them. At Daedalus that expertise is the point. The machine handles the grunt work; we stay responsible for the hard decisions, just as it should be.
