(Originally posted 2007-03-16.)
The “128GB” question has now come up twice in newsgroups (most recently on DB2-L). This sort of thing freaks people out – so let me tell you what I know of the matter. In earlier releases, DDF (the DIST address space) and DBM1 communicated by moving data between the two address spaces, which carried two costs:
- The CPU cost of moving data between the two address spaces (and I’m told there’s some reformatting, which adds to the cost).
- In extreme cases – large networks of heavy threads – chewing up of scarce ECSA and the DIST private area. (Thread “anchors” reside in the DIST address space and can become large, especially with the new, larger, communications buffer capability.)
So, in DB2 9, the interface between DDF and DBM1 was radically redesigned: now the communication is done using a z/OS 64-bit Large Memory Object, in Virtual Storage above the 2GB bar. When the DBM1 address space starts up, this Large Memory Object is allocated with a size of 128GB. Communications between DDF and DBM1 take place using this area – but we’re not talking about data moves and reformatting any more, so this should cut the CPU cost. (It also removes the constraints I mentioned above.)
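By way of analogy only – the z/OS 64-bit memory object is its own facility, and this sketch uses garden-variety POSIX-style shared memory (Python 3.8+) – here is the general idea: two attachments to one memory object see the same bytes in place, so nothing is moved or reformatted between them.

```python
# Analogy, not z/OS: two handles on one shared memory object see the
# same bytes in place, with no data move between "address spaces".
from multiprocessing.shared_memory import SharedMemory

# One side creates the memory object and writes into it...
writer = SharedMemory(create=True, size=4096)
writer.buf[:5] = b"hello"

# ...and the other side attaches to the same object by name (as a
# second address space would) and reads the very same bytes in place.
reader = SharedMemory(name=writer.name)
payload = bytes(reader.buf[:5])
assert payload == b"hello"

reader.close()
writer.close()
writer.unlink()
```

The point of the analogy is simply that attaching to shared storage replaces copying: both sides address one region, so the per-message move (and reformat) disappears.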
There are two key points here:
- This is Virtual and not necessarily Real memory.
- The 128GB object is not populated at creation time. So in all cases (at least for now) 🙂 the Virtual usage is going to be an order of magnitude or two less than 128GB.
Virtual does drive Real usage, of course. But in this case the increase in Real memory requirement (and there’s bound to be some) is nothing like 128GB.
I should also point out there will be some Virtual storage savings – below the 2GB bar. That will be welcome to a number of customers I know. (And I may well blog about this soon.) But one caution:
I do not expect the need to manage thread numbers and thread footprints to go away, perhaps ever…
DB2 9 does not move the whole thread footprint above the bar. And threads still cost Real memory – though it would take many thousands to have an impact on memory capacity planning. You do do memory capacity planning, don’t you?