Why does there have to be a tradeoff between optimizing a DB for transactional data and optimizing a DB for analytical data?


I’m a data scientist who’s more on the math and modeling side of things than the engineering side, and I don’t know much about the logic, data structures and design principles that underlie database design, beyond various platitudes like “Use a relational DB for structured data and a NoSQL DB for unstructured data”, etc…

I often hear that you have to choose whether a DB is going to be used for transactional data that gets frequent updates and requires low latency, or whether it is destined to be more of an analytical database that gets occasional large batch updates and where latency is not a constraint. The implication is usually that there is some fundamental tradeoff forcing you to pick one or the other.
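To make concrete what I mean by the two workloads, here’s a rough sketch of the two access patterns as I understand them (toy Python/sqlite, with a made-up `orders` table, purely illustrative of the query shapes, not of how these systems are actually built):

```python
import sqlite3

# Toy schema: individual orders (invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, "
    "region TEXT, amount REAL, status TEXT)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, region, amount, status) VALUES (?, ?, ?, ?)",
    [(1, "EU", 120.0, "open"), (2, "US", 75.5, "open"), (1, "EU", 40.0, "shipped")],
)

# "Transactional" pattern: touch one row by key, commit immediately, low latency.
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = ?", (2,))
conn.commit()

# "Analytical" pattern: scan many rows of a few columns and aggregate them.
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
):
    print(region, total)
```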

What I don’t understand is why this either/or choice is inevitable. I’ve worked with OLAP cubes (true OLAP platforms, not ROLAP) that could handle near-realtime updates and commits to the main data cube from users, as well as batch processing for some of the larger modeling tasks, and could also do realtime slicing and dicing, and on-the-fly aggregation/disaggregation of various metrics based on predefined aggregation rules and hierarchies. If such a system is possible (the platforms I worked with have been around since the mid-2000s), why is a backend that can handle both transactional and analytical data such a difficult objective to attain?
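By “slicing and dicing” and rolling up along hierarchies I mean operations roughly like the sketch below (toy pandas code with an invented `sales` table and a simple region > country hierarchy, just to illustrate the kind of aggregation I have in mind, not how the cube engines implement it):

```python
import pandas as pd

# Toy fact table: a measure plus a two-level geography hierarchy (invented data).
sales = pd.DataFrame(
    {
        "region": ["EU", "EU", "EU", "US", "US"],
        "country": ["FR", "FR", "DE", "US", "US"],
        "month": ["2024-01", "2024-02", "2024-01", "2024-01", "2024-02"],
        "amount": [100.0, 150.0, 80.0, 200.0, 120.0],
    }
)

# "Slice": restrict one dimension to a single member.
eu_only = sales[sales["region"] == "EU"]

# "Roll up": aggregate the measure at successive levels of the hierarchy.
by_country = sales.groupby(["region", "country"])["amount"].sum()
by_region = sales.groupby("region")["amount"].sum()

print(by_country)
print(by_region)
```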