Rogo vs. Text-to-SQL Engines

TL;DR: Large language models (LLMs) will change self-service analytics forever. But simple text-to-SQL engines are not how they’ll do it. These products are slow, non-auditable, and frequently inaccurate. Rogo’s intermediate representation (IR) is a technical differentiator that makes a huge difference.

Published on March 9, 2023 by John Willett

The clear winner… right?

The recent explosion of LLMs has rightly generated interest in a variety of problems related to generative AI and AI translation. Natural language search on tabular data has received a lot of attention. The most obvious and most popular way to apply LLMs here is by building end-to-end text-to-SQL models. These products look something like the following.

First a user types: “who were our top 10 customers in 2021?” Then the model generates a few lines of SQL that filter on 2021 orders, sum revenue by customers, sort the table, and return the top 10. Depending on the product, this SQL is either returned as text or is immediately executed against the data (shudder) with the results returned in platform or via Slack.

It’s an attractive picture. It can certainly deliver some value. And thanks to public APIs like GPT-3, anyone can spin up a decent text-to-SQL engine with some spare time on the weekend. (One need only spend a few days in a business intelligence forum on Reddit to see this firsthand.)

But if you look a bit closer, the picture loses some of its appeal. That’s what this blog post is about. Our view at Rogo is that these solutions miss the mark—and pretty widely. Among other things, they are often wrong, slow, and not auditable. Worse, for most users it’s hard to trust and have confidence in the results, since without the requisite technical training you won’t be able to know when they’re wrong and when they’re right.

Rogo’s approach

Rogo uses LLM technology too. It wouldn’t be possible without it. But rather than using an LLM to translate directly from natural language to SQL, Rogo uses an intermediate representation (IR). This basically amounts to an internal execution plan that is not strictly tied to SQL, Python, or any other language, framework, or system.

Critically, the IR provides a resting point between the natural language input and eventual output response—a halfway mark at which Rogo can decide what output will work best. That might be SQL, but it also might be pandas code in Python, a partly filled-out GUI menu, or a report.
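To make the idea concrete, here is a minimal sketch of what a backend-agnostic intermediate representation might look like. This is purely illustrative: the `Query` class, its fields, and its rendering methods are invented for this post, not Rogo's actual IR. The point is that one plan object can render to SQL, to pandas, or to anything else.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified IR node. One plan, multiple output backends.
@dataclass
class Query:
    table: str
    metric: str                         # column to aggregate
    group_by: str                       # column to group on
    filters: dict = field(default_factory=dict)
    top_n: int = 10

    def to_sql(self) -> str:
        # Render the plan as a SQL query string.
        where = " AND ".join(f"{k} = '{v}'" for k, v in self.filters.items())
        return (
            f"SELECT {self.group_by}, SUM({self.metric}) AS total "
            f"FROM {self.table} "
            + (f"WHERE {where} " if where else "")
            + f"GROUP BY {self.group_by} ORDER BY total DESC LIMIT {self.top_n}"
        )

    def to_pandas(self, df):
        # Render the same plan against an in-memory DataFrame instead.
        for col, val in self.filters.items():
            df = df[df[col] == val]
        return (df.groupby(self.group_by)[self.metric]
                  .sum().nlargest(self.top_n))

plan = Query(table="orders", metric="revenue",
             group_by="customer", filters={"year": "2021"})
print(plan.to_sql())
# SELECT customer, SUM(revenue) AS total FROM orders WHERE year = '2021'
# GROUP BY customer ORDER BY total DESC LIMIT 10
```

Note that nothing in the `Query` object itself mentions SQL or pandas; the choice of output is deferred until after parsing, which is exactly the "resting point" described above.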

As discussed below, the IR gives data teams that use Rogo very precise control over outputs, aiding both accuracy and usability.

It also enables Rogo to use a cocktail of parsing approaches (since they’re compatible so long as they map to the same intermediate representation). Some approaches are grammar-based, which means they’re lightning-fast, prescriptive in a good way, and able to provide live suggestions. And some use LLMs. These latter ones are higher latency but offer a huge amount of linguistic flexibility. (It’s worth pointing out that even when Rogo uses an LLM in this context, it’s still more reliable and auditable than text-to-SQL engines since it’s mapping to our IR.)
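As a toy illustration of the ensemble idea (not Rogo's actual parser), here is how a grammar-based fast path and an LLM fallback can coexist so long as both emit the same IR. A regex stands in for a real grammar, a stub stands in for the model call, and a plain dict stands in for the IR:

```python
import re

# A tiny grammar-style pattern handles the common case instantly;
# anything it can't parse falls back to a slower LLM-backed parser.
GRAMMAR = re.compile(
    r"top (?P<n>\d+) (?P<group>\w+) by (?P<metric>\w+)(?: in (?P<year>\d{4}))?"
)

def parse_with_grammar(text: str):
    m = GRAMMAR.fullmatch(text.strip().lower())
    if not m:
        return None                      # input not covered by the grammar
    ir = {"group_by": m["group"], "metric": m["metric"], "top_n": int(m["n"])}
    if m["year"]:
        ir["filters"] = {"year": m["year"]}
    return ir

def parse_with_llm(text: str):
    # Stand-in for a higher-latency LLM call that maps text to the same IR.
    raise NotImplementedError("LLM fallback not shown")

def parse(text: str):
    # Fast deterministic path first; model only when the grammar misses.
    return parse_with_grammar(text) or parse_with_llm(text)

print(parse("top 10 customers by revenue in 2021"))
```

Because both paths converge on one IR, everything downstream (execution, auditing, autocomplete) is indifferent to which parser handled the query.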

Rogo’s advantages over text-to-SQL engines

There are many. Here are a few.

Auditability for non-technical users. A data tool is only as good as the user’s ability to know when results are and are not what they’re looking for. If your user is a “person who doesn’t know SQL,” then returning raw SQL (even with a chart) is not an overwhelmingly helpful or elegant solution. They’ll likely need a technical person to check the result anyway, to make sure the query is using the right tables, columns, joins, and functions. (This concern can be ameliorated somewhat by delivering auto-generated natural language explanations along with the returned SQL, though transparency, accuracy, and reliability issues remain, discussed below.)

Accuracy. Siccing GPT-3 on your company’s cloud data warehouse and handing it to a non-technical person results in a lot of wrong answers. This isn’t a theoretical concern. Text-to-SQL chatbots might work okay on Postgres databases with three tables, but a real organization’s data is big and messy. It has tables and columns with similar names, tons of date columns, and probably more than a few poorly named metrics. In this setting, text-to-SQL engines will make errors. They’ll join on random tables, use inappropriate time axes, and create random metrics on the fly that don’t make sense. And non-technical users will mistakenly trust the results. For self-service analytics to work, data teams need tight control over acceptable inputs and outputs. Intermediate representations don’t automatically solve the problem outright, but they afford the degree of control necessary to attack it.

Speed. Language models are often slow in production. Of course, if you train your own text-to-intermediate-rep model it might be just as slow as a text-to-SQL model. But with an intermediate rep, this isn’t the only parsing approach available. At Rogo we employ an ensemble approach that, alongside a translation model, includes term matching and a grammar. With terms indexed in a search-optimized database, there is, at the least, a happy path for most queries that is much, much faster than direct translation. Nearly instantaneous. And the worst-case performance (i.e., when the user’s input is not amenable to the grammar) is no worse than a text-to-SQL model.

Flexibility of data and analyses. Text-to-SQL engines can query SQL databases, but what if some of the data your company routinely analyzes is delivered via third-party API? Rogo’s intermediate representation is output-agnostic. Many of our clients analyze their internal tabular data alongside data from third-party vendors like financial markets data providers. Because of the IR, they can do this all in our platform, all through the same search bar. That’s impossible with a text-to-SQL engine. Relatedly, even when you’re working with a SQL database, some of the things you want to search will not be SQL queries. Sometimes you want to use a custom function from Python, sometimes you want to wrangle the data via pandas, and sometimes you want to edit how your visualization is being displayed. With Rogo, this is a cinch. Our intermediate representation can map to any output under the sun. Over in the non-deterministic end-to-end world, meanwhile, all queries must map to SQL.

Guided, interactive, prescriptive search. One of the most important usability features of search is guided autocomplete, especially in unfamiliar workflows. With an intermediate representation, teeing up lightning-fast, live autocomplete suggestions is easy and natural. Text-to-SQL platforms, by contrast, lack live, helpful query suggestions and completions. A successful search approach should also promote iterative exploration, instead of focusing on binary accuracy and providing "the answer." In Rogo, since what’s returned is essentially a GUI menu, it’s very easy to drill down, add new components to your query, and iteratively refine your analysis all through search, even without technical experience. Lastly, search via an IR is prescriptive in a good way, in that it gives non-technical users a digestible, predictable, repeatable, and intuitive language they can quickly master and then use to reliably pull data, add filters, look at breakdowns, and so on. Giving these users a blank screen, a text box, and a black-box model is not a recipe for success in this context.
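Grammar-driven autocomplete is cheap precisely because valid queries follow a known shape. Here is a toy sketch of the idea; the template and schema below are made up for illustration and are not Rogo's actual grammar:

```python
# Hypothetical schema and a single query template. Because valid queries
# follow a known structure, the next tokens can be enumerated from the
# grammar plus the client's schema, with no model call at all.
SCHEMA = {"metrics": ["revenue", "orders"], "dimensions": ["customer", "region"]}
TEMPLATE = ["top", "<N>", "<dimension>", "by", "<metric>"]

def suggest(prefix: str) -> list[str]:
    # Suggest completions for the next slot in the template.
    tokens = prefix.split()
    if len(tokens) >= len(TEMPLATE):
        return []                        # query is already complete
    slot = TEMPLATE[len(tokens)]
    if slot == "<dimension>":
        return SCHEMA["dimensions"]
    if slot == "<metric>":
        return SCHEMA["metrics"]
    if slot == "<N>":
        return ["5", "10", "20"]
    return [slot]                        # a literal keyword like "by"

print(suggest("top 10"))                 # dimensions come next
```

A real grammar would cover many templates and branch points, but the property that matters survives even in this sketch: every suggestion is a valid continuation, so the user can compose a correct query without knowing the language up front.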

Adaptability. User needs in the world of data analytics are highly varied, making control and adaptability must-haves. Rogo’s parsing approach gives us absolute control over acceptable inputs and outputs in our system. If a client needs a specific type of query to be accessible (or inaccessible), or their stakeholders need to be able to ask for it in a specific type of way, it’s a cinch for us to accommodate that request. As a playful example, if a data team told Rogo that their PMs like to wake up every day and ask things like, “What’s up today in Boston?”, we could easily map that query and ones like it to the desired output. For the end-to-end folks, meanwhile, making these query-level changes is impossible. Their approach to parsing is like strapping your car to another engine for a quick speed boost, only to lose control of the steering in the process.
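As a toy sketch of that kind of client-level customization (the alias table and IR dicts below are invented for illustration, not a Rogo API), a bespoke phrasing can be pinned directly to the IR it should produce:

```python
# Hypothetical alias table: a data team registers a custom phrasing and
# the exact IR (here, a plain dict) that phrasing should resolve to.
ALIASES = {
    "what's up today in boston": {
        "metric": "orders",
        "group_by": "product",
        "filters": {"city": "Boston", "date": "today"},
    },
}

def parse_with_aliases(text: str):
    # Normalize the input, then check the alias table before any other
    # parsing path gets a chance to run.
    return ALIASES.get(text.strip().lower().rstrip("?"))

print(parse_with_aliases("What's up today in Boston?"))
```

Because the alias resolves to a fixed IR rather than to generated SQL, the data team knows exactly what a matching query will do every single time.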


The most obvious application of LLMs to data analytics is toward end-to-end text-to-SQL engines. But in this case, the most obvious application is the wrong one. Text-to-SQL engines are slow, unreliable, non-auditable, frequently inaccurate, unguided, and inflexible in their inputs, outputs, and data sources. There are many valid debates to be had around what the eventual natural language interface for data will look like. But we would argue the following (with the appropriate degree of modesty and open-mindedness): the winning natural language interface for data will necessarily use an intermediate representation. And text-to-SQL engines, as currently scoped, will be sidelined as productivity tools for data analysts—like Copilot for SQL.

That being said, the creators of this latest iteration of text-to-SQL engines are right about one thing: LLMs like GPT-3 are going to change self-service data analytics forever. They’ll improve not only parsing but data ingestion, knowledge engineering, metric creation, and countless other elements of the data workflow. A platform like Rogo—with AI-enabled parsing, a full search suite, smart suggestions, and the beginnings of an automated data ingestion pipeline—would not be possible without LLMs.

To be clear, we’re not dogmatic about intermediate representations. We also have our own text-to-SQL feature that translates directly to SQL, totally bypassing our IR. But that’s just what it is: a feature. Not a platform. It provides a great floor to our functionality as we continue to make our intermediate representation more robust. If Rogo can’t directly answer your question, or you’re a technical data analyst who wants the SQL, it’s there for you. But the goal is to iteratively improve Rogo such that fewer and fewer queries from non-technical users need to go this route.

Sign Up

Rogo is super low effort for your team to set up. If you or your team wants to get more out of your data, sign up below. Get up and running in five minutes, and make everyone in your company smarter.

© 2023 Rogo Technologies. All Rights Reserved.
