If you compress RAM, every byte has to be pushed through the CPU to get compressed and decompressed, which would tank performance. The whole point of RAM is to be a chunk of fast storage for things that need to run quickly and also touch a lot of data. To keep performance up, the CPU is often bypassed entirely to avoid bottlenecks. If you don't need the data fast, you just park it on the hard drive, where there's plenty of space. The memory manager takes care of that for you, for example with a technique called swapping.
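If you want a feel for the cost, here's a rough sketch you can run yourself using Python's built-in zlib module. The sizes are arbitrary and the exact numbers will vary by machine, but it shows the gap between just copying a block of memory and making the CPU chew through every byte to compress it:

```python
# Rough sketch: plain memory copy vs. compressing the same data through the CPU.
import time
import zlib

data = bytes(64 * 1024 * 1024)   # 64 MiB of zeros, standing in for a block of RAM

start = time.perf_counter()
copy = bytearray(data)           # plain copy, no CPU-heavy work per byte
copy_time = time.perf_counter() - start

start = time.perf_counter()
packed = zlib.compress(data)     # every byte has to go through the compressor
compress_time = time.perf_counter() - start

print(f"copy:     {copy_time:.3f} s")
print(f"compress: {compress_time:.3f} s ({len(packed)} bytes after compression)")
```

Note that all-zero data is the easiest possible case for a compressor, and the copy still wins by a wide margin.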
These days there are lots of sophisticated technologies for bypassing the CPU like that, but it all started with DMA (Direct Memory Access) back in the day.
Fun fact: in the 80s and 90s, memory was so expensive, and modern techniques like swapping were so much in their infancy, that it could sometimes be useful to compress data in memory. I used QEMM back in the day, which had that feature, though I never used that particular feature myself.
In principle someone could make a memory chip capable of on-the-fly compression. But that would cause wildly inconsistent memory timings, which isn't ideal, and I doubt the cost would work out compared to simply buying more memory.
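The "wildly inconsistent" part comes from the fact that how well (and how fast) data compresses depends entirely on what's in it. A small, hedged sketch with Python's zlib and os.urandom (sample contents are just illustrative) makes the point:

```python
# Sketch: compression ratio and time depend heavily on the data itself,
# which is why hardware that compresses on the fly can't promise steady timings.
import os
import time
import zlib

SIZE = 8 * 1024 * 1024  # 8 MiB per sample

samples = {
    "all zeros": bytes(SIZE),                                        # best case
    "text-like": (b"the quick brown fox " * (SIZE // 20 + 1))[:SIZE],
    "random":    os.urandom(SIZE),                                   # barely compresses
}

for name, data in samples.items():
    start = time.perf_counter()
    packed = zlib.compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name:10s}: {len(data)} -> {len(packed)} bytes in {elapsed:.3f} s")
```

The zero-filled block shrinks to almost nothing, while the random block stays nearly the same size and takes the longest, so both the effective capacity and the access time of a compressing chip would swing all over the place.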