Anticipating 2026: Top Trends in Analytics - Issue 298
The key movements shaping analytics, the analyst role shift, and how to prepare for 2026
Here’s to another year. As has become tradition by now, I’m starting with my take on the top trends in data and analytics - trends that affect my work, spark conversations in my network, or are simply important to watch.
Most of these were my talking points at recent Hex Magic Hours events in New York and San Francisco. If you missed them, this post will recap the key ideas.
Today, I want to cover 3 topics:
How the analyst role is shifting. Whether we like it or not, our responsibilities are changing.
Which technology trends are evolving and which tools are becoming must-haves today.
How to prepare yourself and your team to stay sane, keep a healthy workflow, and balance high-impact projects with work that takes time but creates noise.
From reporting to decision systems
If you’ve been in data for a while, you’ve seen a lot of trends, and a few distinct “eras.” Most of them were shaped by a new tool or technology hitting the market and changing how we work.
Long ago, I remember the Oracle era, when we believed you needed an Oracle certification to make it. Then came the NoSQL era, when new databases appeared quickly - Cassandra, Neo4j, Redis, MongoDB, and later DynamoDB. In just one to two years, between 2008 and 2010, the database landscape changed completely.
Then came the “back to SQL” moment with Redshift and BigQuery. After ~2016, there was an era of ELT - teams started loading everything into the warehouse and transforming with version-controlled SQL (how did we live without dbt before?). And after ~2020, product analytics and experimentation tooling matured into a whole ecosystem - event pipelines, session replay, feature flags, A/B testing platforms, “big data” to “small data” backlash, and new expectations for how fast teams should answer questions.
All of this matters because each mini-era quietly reshaped what it meant to be “good at data”, and the skillset analysts were expected to have. If we managed to adapt through all of that, what’s coming now (and especially after 2026) should be easier to navigate than the transitions we’ve already lived through.
If I had to pick milestone eras for analytics, for me they would be:
2015: Reporting → 2020: Optimization → 2026: Decision
Reporting era: the data team’s job was to explain what already happened. Success meant accuracy, consistency, and speed.
Optimization era: the job shifted to improving outcomes. Success meant moving conversion, retention, and LTV (and by now your data team knows what that means), and building the experimentation muscle to do it reliably.
Decision infrastructure era: the job becomes powering decisions for both humans and AI. Success is measured by the number of decisions you can make that are fast, causal, and safe to automate.
And I don’t just mean our projects or team OKRs. I mean the direction the tools are taking, and how the data stack itself is evolving into decision infrastructure.
Another way to say it:
Explain the past → Improve metrics → Power automated, trustworthy decisions
Around 2015, most “modern” tooling was about getting data out and making it visible. That’s when BI went mainstream, with Tableau crossing 35K customers and the launch of Power BI. Amplitude also went through one of its fastest growth periods.
By 2020, the center of gravity moved toward warehouses and data transformation workflows, with the Snowflake IPO and dbt becoming mainstream. Experimentation platforms and feature flagging matured into expected standard practice.
And that brings us to now. The next generation of tools is being built on top of that foundation to automate decisions and connect the dots end-to-end. By now, we should have learned the hard lesson: more insights don’t matter unless you can act on them. In 2026 and beyond, data teams will win or lose on integrations and on their ability to maintain metadata, semantics, context, and quality guardrails so systems can operate without breaking trust.
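To make the “quality guardrails” part concrete, here is a minimal sketch (names, thresholds, and values are all hypothetical, not a specific tool’s API) of what a check in front of an automated decision might look like - the system only acts when the data is fresh and the number passes basic sanity checks, otherwise it escalates to a human:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical guardrail: an automated system may act on a metric only if
# the underlying data is fresh and the value is within an expected range.
MAX_STALENESS = timedelta(hours=6)

def safe_to_automate(metric_value: float, last_loaded_at: datetime,
                     expected_range: tuple[float, float]) -> bool:
    """Return True only if the data is fresh and the value looks sane."""
    is_fresh = datetime.now(timezone.utc) - last_loaded_at < MAX_STALENESS
    low, high = expected_range
    in_range = low <= metric_value <= high
    return is_fresh and in_range

# Example: a pricing or alerting workflow checks the guardrail before acting.
if safe_to_automate(metric_value=0.042,
                    last_loaded_at=datetime(2026, 1, 5, 8, 0, tzinfo=timezone.utc),
                    expected_range=(0.01, 0.10)):
    print("OK to act on this metric automatically")
else:
    print("Escalate to a human instead of acting automatically")
```

The specifics will differ in every stack, but the principle is the same: the guardrail, not the dashboard, is what makes a decision safe to automate.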
Role shift: Analyst as Curator
The role of the analyst is shifting from producing and optimizing reports to curating context. Analysts are becoming “librarians” of the data stack: context-layer architects who ensure systems operate on correct definitions, logic, and metadata.
LLMs don’t struggle with SQL. They struggle with meaning. Context, semantics, definitions, and lineage are now the critical path, and that work largely falls on analytics. The expectation is that analysts build and maintain the context layer AI depends on: definitions, logic, guardrails, and interpretation.
In a way, not much is changing for us - we’ve always defined and maintained context: what “active” means, how we define a billing period, what “significant” means for our data, what thresholds should trigger alerts, and so on. The difference is that it’s no longer just analysts relying on this context. Now the broader organization and automated systems depend on it too. And it’s our responsibility to figure out the right ways to package and feed that context into the stack.
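As an illustration, here is a deliberately minimal sketch (the class, field names, and metric are made up for this example, not any particular semantic layer’s format) of how that context can be packaged as structured definitions that humans, tools, and LLMs all read from one place:

```python
from dataclasses import dataclass, field

# Hypothetical "context layer" entry: the same definitions analysts have
# always kept in docs and their heads, packaged so systems can consume them.
@dataclass
class MetricContext:
    name: str
    definition: str          # plain-language meaning, e.g. what "active" means
    sql_logic: str           # canonical logic the metric is computed with
    owner: str               # who to ask when the definition changes
    caveats: list[str] = field(default_factory=list)

active_users = MetricContext(
    name="active_users",
    definition="Users with at least one qualifying event in the last 28 days.",
    sql_logic="COUNT(DISTINCT user_id) FILTER (WHERE event_ts >= CURRENT_DATE - 28)",
    owner="analytics team",
    caveats=["Excludes internal and test accounts", "Backfilled before 2024-01-01"],
)

# From here the entry can be serialized into prompts, semantic layers,
# or documentation from a single source of truth.
print(active_users)
```

The format matters far less than the habit: one maintained source of definitions, logic, and caveats that everything downstream - dashboards, prompts, automated decisions - points back to.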



