The design of Pandas is inferior in every way to Polars: API, memory use, speed, expressiveness. Pandas has been strictly worse since late 2023 and will never close the gap. Polars is multithreaded by default, written in a low-level language, has a powerful query engine, supports lazy, out-of-core execution, and isn’t constrained by backwards compatibility with a warty, eager-only API and non-nullable, pre-Arrow data types.
It’s probably not worth incurring the pain of a compatibility-breaking Pandas upgrade. Switch to Polars instead for new projects and you won’t look back.
Pandas deserves a ton of respect in my opinion. I built my career on knowing it well and using it daily for a decade, so I’m biased.
Pandas created the modern Python data stack when there weren’t really any alternatives (besides R and closed-source tools). The original split-apply-combine paradigm was well thought out, simple, and effective, and the built-in tools to read pretty much anything (including all of your awful CSV files and Excel tables) and deal with timestamps easily made it fit into tons of workflows. It pioneered a lot, and basically still serves as the foundation and common format for the industry.
I always recommend every member of my teams read Modern Pandas by Tom Augspurger when they start, as it covers all the modern concepts you need to get data work done fast and with high quality. The concepts carry over to polars.
And I have to thank the pandas team for being a very open and collaborative bunch. They’re humble and smart people, and every PR or issue I’ve interacted with them on has been great.
Polars is undeniably great software, and it’s my standard tool today. But it did benefit from the failures and hard edges of pandas, pyspark, dask, the tidyverse, and xarray. That’s an advantage pandas didn’t have, and one it still pays for.
I’m not trying to take away from polars at all. It’s damn fast — the benchmarks are hard to beat. I’ve been working on my own library and basically every optimization I can think of is already implemented in polars.
I do have a concern with their VC funding/commercialization with cloud. The core library is MIT licensed, but knowing there will always be this feature wall when you want to scale is not ideal. I think it limits the future of the library a lot, and I think long term someone will fill that niche and the users will leave.
Historically, Pandas started about 18 years ago as a project by someone working in finance who wanted to use Python instead of Excel, while being nicer than working with just raw Python dicts and Numpy arrays.
For better or worse, like Excel and like the simpler programming languages of old, Pandas lets you overwrite data in place.
Polars comes from a more modern data engineering philosophy, and data is immutable. In Polars, if you ever wanted to do such a thing, you'd write a pipeline to process and replace the whole column.
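A rough sketch of that difference, with made-up column names:

    import pandas as pd
    import polars as pl

    pdf = pd.DataFrame({"price": [1.0, -2.0, 3.0]})
    # pandas: overwrite the matching cells in place, Excel-style
    pdf.loc[pdf["price"] < 0, "price"] = 0.0

    pldf = pl.DataFrame({"price": [1.0, -2.0, 3.0]})
    # Polars: describe a replacement column and swap it in on a new frame
    pldf = pldf.with_columns(
        pl.when(pl.col("price") < 0).then(0.0).otherwise(pl.col("price")).alias("price")
    )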
If you are just interactively playing around with your data, and want to do it in Python and not in Excel or R, Pandas might still hit the spot. Or use Polars, and if need be then temporarily convert the data to Pandas or even to a Numpy array, manipulate, and then convert back.
P.S. Polars has an optimization to overwrite a single value.
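And the convert-and-convert-back workflow is short. A minimal sketch (the pandas conversion assumes pyarrow is installed):

    import polars as pl

    df = pl.DataFrame({"a": [1, 2, 3]})

    pdf = df.to_pandas()        # hand off to pandas for some in-place fiddling
    pdf.loc[0, "a"] = 100
    df = pl.from_pandas(pdf)    # and back to Polars

    arr = df["a"].to_numpy()    # or drop down to a NumPy array instead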
The Polars code puts me off as being too verbose and requiring too many steps. I love the broadcasting ability that Pandas gets from Numpy. It's what scientific computing should look like in my opinion. Maybe R, Julia or some array-based language does it a bit better than Numpy/Pandas, but it's certainly not like the Polars example.
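For what it's worth, this is the kind of broadcasting I mean, next to the more explicit Polars spelling (made-up columns):

    import pandas as pd
    import polars as pl

    pdf = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]})
    # pandas/NumPy: operators broadcast across whole columns, math-style
    pdf["c"] = 2 * pdf["a"] + pdf["b"]

    pldf = pl.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]})
    # Polars: the same arithmetic spelled out as an expression
    pldf = pldf.with_columns((2 * pl.col("a") + pl.col("b")).alias("c"))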
Polars is indeed more verbose when coming from pandas, but in my experience it is an advantage when you're reading that same code after not having touched it for months.
pandas is write-optimized, so you can quickly and powerfully transform your data. Once you're used to it, it allows you to quickly get your work done. But figuring out what is happening in that code after returning to it a while later is a lot harder compared to Polars, which is more read-optimized. This read-optimized API coincidentally allows the engine to perform more optimizations because all implicit knowledge about data must be typed out instead of kept in your head.
I don't agree that more verbose code is necessarily more readable when the shorter code looks like familiar math. All you have to do is learn how operators broadcast across array-like structures, how slicing and filtering works. Perhaps with more complicated examples the shorter code becomes harder to read after months away? Mathematicians are able to handle a lot of compact equations.
No doubt some of this comes down to preference as to what's considered readable. I never really bought that argument that regular expressions create more problems than they're worth. Perhaps I side on the expressivity end of the readability debate.
Oh I don't mean to say verbose makes it more readable by default, I agree with you on that. I mostly meant that because the API is declarative (more geared at describing the result you want instead of the operations) it is easier to understand what's going on. A side effect of that is that it might be more verbose, which is the case of Polars vs pandas.
In the end it's a personal thing which one you like the most. If your deliverable is the insights you get out of your analysis, I can imagine that a less verbose API is practical for getting things done quickly. But if you create pipelines that your colleagues have to quickly understand (or you in a couple of months), a read-optimized one makes more sense, even though it might take slightly more effort to write.
Likewise, I was considering trying Polars until I saw that example. The pandas example is a good approximation of how I think about and want to transform/process data, even if it is ugly under the hood. I do occasionally find numpy and pandas annoying with respect to when they return a view vs. a copy, but the cure seems worse than the disease.
"If I have seen further, it is by standing on the shoulders of giants" - Isaac Newton
Polars is great, but it is better precisely because it learned from all the mistakes of Pandas. Don't besmirch the latter just because it now has to deal with the backwards compatibility of those mistakes, because when it first started, it was revolutionary.
Can one criticize pandas by comparing to R's native DataFrames that have existed since R's inception in the 90s?
I (and many others) hated Pandas long before Polars was a thing. The main problem is that it's a DSL that doesn't really work well with the rest of Python (that and multi-index is awful outside of the original financial setting). If you're doing pure data science work it doesn't really come up, but as soon as you need to transform that work into a production solution it starts to feel quite gross.
Before Polars my solution was (and still largely remains) to do most of the relational data transformations in the data layer, and then use dicts, lists and numpy for all the additional downstream transformations. This made it much easier to break out of the "DS bubble" and incorporate solutions into main products.
"revolutionary"? It just copied and pasted the decades-old R (previous "S") dataframe into Python, including all the paradigms (with worse ergonomics since it's not baked into the language).
No other modern language will compete with R on ergonomics because of how it allows functions to read the context they’re called in, and S expressions are incredibly flexible. The R manual is great.
To say pandas just copied it but worse is overly dismissive. The core of pandas has always been indexing/reindexing, split-apply-combine, and slicing views.
It’s a different approach than R’s data tables or frames.
> allows functions to read the context they’re called in
Can you show an example? Seems interesting considering that code knowing about external context is not generally a good pattern when it comes to maintainability (security, readability).
I’ve lived through some horrific 10M line coldfusion codebases that embraced this paradigm to death - they were a whole other extreme where you could _write_ variables in the scope of where you were called from!
I can write code like:
    penguin_sizes <- select(penguins, weight, height)
Here, weight and height are columns inside the dataframe. But I can refer to them as if they were objects in the environment (i.e., without quotes) because the select function looks for them inside the penguins dataframe (its first argument).
This is a very simple example but it's used extensively in some R paradigms
Dataframes first appeared in S-PLUS in 1991-1992. Then R copied S, and from around 1995-1997 onwards R started to grow in popularity in statistics. As free and open source software, R started to take over the market among statisticians and other people who were using other statistical software, mainly SAS, SPSS and Stata.
Given that S and R existed, why were they mostly not picked up by data analysts and programmers in 1995-2008, and only Python and Pandas made dataframes popular from 2008 onwards?
Exactly. I was programming in R in 2004 and Pandas didn't exist. I remember trying Pandas once and it felt unergonomic for data analysis, and it lacked R's vast library of statistical analysis packages.
With all great observations made, the quote still stands.
"If I have seen further, it is by standing on the shoulders of giants" - Isaac Newton
When people say they feel a sense of community, this is exactly what it means in software philosophy: we do something, others learn from it and make better things. In no way is the inspiration’s origin beneath what it inspired.
Sounds too much like an advertisement.
Also, we need to watch out when diving into Polars. Polars is a VC-backed open-source project with a cloud offering, which may become an open-core project - we know how those go.
They get forked and stay open source? At least this is what happens to all the popular ones. You can't really un-open-source a project if users want to keep it open-source.
To be fair, as someone who's fought pandas for many years I agree with basically everything they said. The API design for Polars is much, much more intuitive. It's a base R to dplyr level change.
While polars is better if you work with predefined data formats, pandas is imo still better as a general purpose table container.
I work with chemical datasets and this always involves converting SMILES strings to RDKit Molecule objects. Polars cannot do this as simply as calling .map on pandas.
Pandas is also much better for doing EDA. So calling it worse in every instance is not true. If you are doing pure data manipulation, then go ahead with polars.
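To make the difference concrete, a rough sketch of both, assuming RDKit is installed (the column name is just illustrative):

    import pandas as pd
    import polars as pl
    from rdkit import Chem

    pdf = pd.DataFrame({"smiles": ["CCO", "c1ccccc1"]})
    # pandas: just throw the function at the column
    pdf["mol"] = pdf["smiles"].map(Chem.MolFromSmiles)

    pldf = pl.DataFrame({"smiles": ["CCO", "c1ccccc1"]})
    # Polars: an element-wise UDF, and you're expected to declare the return dtype
    pldf = pldf.with_columns(
        pl.col("smiles").map_elements(Chem.MolFromSmiles, return_dtype=pl.Object).alias("mol")
    )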
Map is one operation pandas does nicely that most other “wrap a fast language” dataframe tools do poorly.
When it feels like you’re writing some external UDF that's executed in another environment, it does not feel as nice as throwing in a lambda, even if the lambda is not ideal.
Personally I find it extremely rare that I need to do this given Polars expressions are so comprehensive, including when.then.otherwise when all else fails.
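A tiny sketch of that fallback, with made-up columns:

    import polars as pl

    df = pl.DataFrame({"score": [0.2, 0.7, None]})
    df = df.with_columns(
        pl.when(pl.col("score") >= 0.5).then(pl.lit("high"))
        .when(pl.col("score").is_not_null()).then(pl.lit("low"))
        .otherwise(pl.lit("unknown"))
        .alias("bucket")
    )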
That one has a bit more friction than pandas because of the return schema requirement -- pandas lets you get away with this bad practice.
It also does batches when you declare scalar outputs, but you can't control the batch size, which usually isn't an issue, but I've run into situations where it is.
I think that's a fair opinion, but I'd argue against it being poorly thought out - pandas HAS to stick with older API decisions (dating back to before data science was a mature field, and the field has pandas to thank for much of that maturity) for backwards compatibility.
Well this is like saying Python must maintain backwards compatibility with Python 2 primitives for all time. It’s simply not true. It’s not easy to deprecate an old API, but it’s doable and there are playbooks for it. Pandas is good, I’ve used it extensively, but agree it’s not fit for production use. They could catch up to the state of the art, but that requires them being very opinionated and willing to make some unpopular decisions for the greater good.
Why though? polars sounds like the rewrite! It’s okay to cycle into a new library. Let pandas do its thing and polars slowly take over as new projects overtake. There is nothing wrong with this and it happens all the time.
Like jquery, which hasn’t fundamentally changed since I was a wee lad doing web dev. They didn’t make major changes despite their approach to web dev being replaced by newer concepts found on angular, backbone, mustache, and eventually react. And that is a good thing.
What I personally don’t want is something like angular that basically radically changed between 1.0 and 2.0. Might as well just call 2.0 something new.
Note: I’ve never heard of polars until this comment thread. Can’t wait to try it out.
I think that's a sane take. Indeed, I think most data analysts find it much easier to use pandas over polars when playing with data (mainly the bracket syntax is faster and mostly sensible)
I would agree if not for the fact that polars is not compatible with Python multiprocessing when using the default fork start method; the following script hangs forever (the pandas equivalent runs):
    import polars as pl
    from concurrent.futures import ProcessPoolExecutor

    pl.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}).write_parquet("test.parquet")

    def read_parquet():
        x = pl.read_parquet("test.parquet")
        print(x.shape)

    with ProcessPoolExecutor() as executor:
        futures = [executor.submit(read_parquet) for _ in range(100)]
        r = [f.result() for f in futures]
Using a thread pool or the "spawn" start method works, but it makes polars a pain to use inside e.g. a PyTorch dataloader.
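For reference, the spawn workaround is roughly this (only the executor part changes, reusing the read_parquet above; with spawn you also need the usual main guard because child processes re-import the module):

    import multiprocessing as mp
    from concurrent.futures import ProcessPoolExecutor

    if __name__ == "__main__":
        # ask for spawn explicitly instead of the platform default (fork on Linux)
        ctx = mp.get_context("spawn")
        with ProcessPoolExecutor(mp_context=ctx) as executor:
            futures = [executor.submit(read_parquet) for _ in range(100)]
            r = [f.result() for f in futures]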
However, this is not a Polars issue. Using "fork" can leave ANY mutex in the forked process in an invalid state (a multi-threaded query engine has plenty of mutexes). It is highly unsafe and assumes that none of the libraries in your process hold a lock at that time. That's not an assumption for PyTorch dataloaders to make.
Defaulting to "spawn" is definitely the right thing; it avoids many footguns.
That said, for PyTorch DataLoader specifically, switching from fork to spawn removes copy-on-write, which can significantly increase startup time and, more importantly, memory usage. It often requires non-trivial refactors; many training codebases aren't designed for this and will simply OOM. So in practice, for this use case I've found it more practical to just use pandas rather than doing a full refactor.
I can't believe parallel processing is still this big of a dumpster fire in python 20 years after multi-core became the rule rather than the exception.
Do they really still not have a good mechanism to toss a flag on a for loop to capture embarrassing parallelism easily?
Funny enough, I actually just (2 weeks ago) added support for streaming from Pyspark to Polars/DuckDB/etc through Arrow PyCapsule. By streaming, I mean actually streaming, not collecting all data at once. It won't be released probably until May/June but it's there: https://github.com/apache/spark/commit/ecf179c3485ba8bac72af...
As someone who just encountered Pandas for the first time as part of an Intro to Data Visualization course a few weeks ago, I am now very curious about Polars.
The professor doesn't actually care which tool we use as long as we produce nice graphs, so this is as good a time as any to experiment.
I didn't know about polars, and I can see that they also have a library for R. However, in R they have fiercer competition. I wonder how it compares to the tidyverse, which is the established data analysis library there.
> The design of Pandas is inferior in every way to Polars
I used Pandas a lot with Jupyter notebooks. I don't have any experience with Polars. Is it also possible to work with Polars dataframes in Jupyter notebooks?
A dataframe API allows you to write code in Python, with native syntax highlighting and LSP completion, in one analysis file. Inlined SQL is not as nice and has weird ergonomics.
UDFs in most dataframe libraries also tend to feel better than writing UDFs for a SQL engine.
Polars specifically has lazy mode, which enables a query optimizer, so you get predicate pushdown and all the goodies of SQL, with extra control/primitives (sane pivoting, group_by_dynamic, etc.).
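A minimal sketch of that lazy pipeline (file and column names are made up):

    import polars as pl

    result = (
        pl.scan_parquet("events.parquet")             # lazy scan, nothing is read yet
        .filter(pl.col("country") == "NL")            # predicate gets pushed down into the scan
        .group_by("user_id")
        .agg(pl.col("amount").sum().alias("total"))
        .collect()                                    # plan is optimized and executed here
    )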
I do use ibis on top of duckdb sometimes, but the UDF situation persists and the way they organize their docs is very difficult to use.
because method chaining in Polars is much more composable and ergonomic than SQL once the pipeline gets complex, which makes it superior in an exploratory "data wrangling" environment.
Polars took a lot of ideas from Pandas and made them better - calling it "inferior in every way" is all sorts of disrespectful :P
Unfortunately, there are a lot of third party libraries that work with Pandas that do not work with Polars, so the switch, even for new projects, should be done with that in mind.
Yes, ChatGPT 5.2 Pro absolutely still does this. Just ask it for a pivot table using Polars and it will probably spit out code with Pandas arguments that doesn’t work.
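For example, the pivot spellings really are different - roughly like this (the Polars argument names reflect recent 1.x releases; older versions used different names):

    import pandas as pd
    import polars as pl

    data = {"user": ["a", "a", "b"], "month": [1, 2, 1], "amount": [10, 20, 30]}

    # pandas
    pd.DataFrame(data).pivot_table(values="amount", index="user", columns="month", aggfunc="sum")

    # Polars (recent versions)
    pl.DataFrame(data).pivot(on="month", index="user", values="amount", aggregate_function="sum")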