Allocation Unit Size for Partitions Dedicated to Pagefile

Discussion in 'Windows Server' started by orbbllp, Jun 21, 2007.

  1. orbbllp

    orbbllp Guest

    I've read a few articles on sizing pagefiles correctly that don't seem
    to address the optimal block size in the underlying file system.

    Perhaps I've missed it, but if one dedicates a partition on a separate logical
    or physical disk(s), what is the optimal allocation unit size (a.k.a. block
    size) to use when formatting the partition? For partitions holding Exchange
    2000/2003 databases, I've read it should be 4 KB. For SQL, it should be 64 KB.
    But what about partitions holding the pagefile? What is the page size in
    which Windows writes to the pagefile?

    TIA for any input.
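
    For context, here is a minimal sketch (Windows only, via ctypes and
    GetDiskFreeSpaceW) for checking what allocation unit size an existing volume
    was formatted with; the drive letter is just an example:

        import ctypes
        from ctypes import wintypes

        # Report the allocation unit (cluster) size of an existing volume
        # using GetDiskFreeSpaceW.  Windows only; the drive letter is an example.
        def cluster_size(root="C:\\"):
            sectors_per_cluster = wintypes.DWORD()
            bytes_per_sector = wintypes.DWORD()
            free_clusters = wintypes.DWORD()
            total_clusters = wintypes.DWORD()
            ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
                root,
                ctypes.byref(sectors_per_cluster),
                ctypes.byref(bytes_per_sector),
                ctypes.byref(free_clusters),
                ctypes.byref(total_clusters),
            )
            if not ok:
                raise ctypes.WinError()
            return sectors_per_cluster.value * bytes_per_sector.value

        print("Cluster size on C: is", cluster_size(), "bytes")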
     
    orbbllp, Jun 21, 2007
    #1

  2. Joshua Bolton

     Joshua Bolton Guest

    I asked that very question in a training class run by a couple of MS
    programming employees. 64 KB was the answer.
     
    Joshua Bolton, Jun 21, 2007
    #2

  3. John John

    John John Guest

    Hmmm. Strange. Non-Alpha machines have memory pages of 4,096 bytes (4 KB). I
    wonder what the logic is for putting the pagefile on 64 KB clusters; it seems
    to me the 4 KB memory pages would fit nicely and page neatly into 4 KB clusters...
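
    If you want to confirm the page size on a given box, here is a minimal sketch
    (Windows only, using ctypes and the documented SYSTEM_INFO structure returned
    by GetSystemInfo):

        import ctypes
        from ctypes import wintypes

        # Query the memory page size via GetSystemInfo (Windows only).
        class SYSTEM_INFO(ctypes.Structure):
            _fields_ = [
                ("wProcessorArchitecture", wintypes.WORD),
                ("wReserved", wintypes.WORD),
                ("dwPageSize", wintypes.DWORD),
                ("lpMinimumApplicationAddress", ctypes.c_void_p),
                ("lpMaximumApplicationAddress", ctypes.c_void_p),
                ("dwActiveProcessorMask", ctypes.c_void_p),
                ("dwNumberOfProcessors", wintypes.DWORD),
                ("dwProcessorType", wintypes.DWORD),
                ("dwAllocationGranularity", wintypes.DWORD),
                ("wProcessorLevel", wintypes.WORD),
                ("wProcessorRevision", wintypes.WORD),
            ]

        info = SYSTEM_INFO()
        ctypes.windll.kernel32.GetSystemInfo(ctypes.byref(info))
        print("Memory page size:", info.dwPageSize)   # 4096 on x86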

    John
     
    John John, Jun 21, 2007
    #3
  4. Steve Gould

    Steve Gould Guest

    I think I would agree with John. I googled it and every hit I found
    recommended a 4 KB cluster size. Note that not a single hit was a Microsoft
    page...
     
    Steve Gould, Jun 21, 2007
    #4
  5. Joshua Bolton

     Joshua Bolton Guest

    The question pertains to what the optimal cluster size would be for pagefile
    reads and writes. Since the pagefile is written in 64 KB chunks, a single
    64 KB read or write performs better than sixteen 4 KB reads or writes moving
    the same information. This is a technique used in database optimizations.
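
    As a back-of-the-envelope illustration of why one large transfer beats sixteen
    small ones, here is a sketch; the per-operation overhead and transfer rate
    below are invented round numbers for illustration, not measurements:

        # Rough model: moving 1 MB of paging traffic using 4 KB vs. 64 KB I/Os.
        # The overhead and throughput figures are invented for illustration only.
        TOTAL_BYTES = 1024 * 1024          # 1 MB of pages to move
        PER_OP_OVERHEAD_MS = 5.0           # assumed seek/rotation/setup cost per I/O
        TRANSFER_MB_PER_S = 50.0           # assumed raw transfer rate

        for io_size in (4 * 1024, 64 * 1024):
            ops = TOTAL_BYTES // io_size
            transfer_ms = TOTAL_BYTES / (TRANSFER_MB_PER_S * 1024 * 1024) * 1000
            total_ms = ops * PER_OP_OVERHEAD_MS + transfer_ms
            print(f"{io_size // 1024:>2} KB I/Os: {ops:3d} operations, ~{total_ms:.0f} ms")

        #  4 KB I/Os: 256 operations, ~1300 ms
        # 64 KB I/Os:  16 operations, ~100 ms

    The exact numbers don't matter; the point is that the fixed per-operation cost
    dominates when the transfers are small.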

    The reason this information is so hard to come by is the haze of information
    concerning the MS pagefile. MS, in its effort to be something for everyone,
    has hodge-podged its documentation. This is then added to by well-meaning but
    mistaken individuals who post the misinformation on the web.

    For example, one MS document says you can put your pagefile on a separate
    partition, yet does not explain that if that partition is on the same disk as
    the OS you have accomplished nothing in the way of optimization. You could
    accomplish exactly the same thing with a static pagefile on the OS partition.

    There is a lot of confusion on the net concerning pagefile operations, from my
    perspective, when you see folks make statements like "since memory operations
    are done in 4 KB, having 4 KB disk clusters is optimal," as one well-known web
    site states. Say what? How do you get to that conclusion? Does the writer
    really think that the way memory operations are handled in RAM directly
    relates to disk clusters? That each 4 KB operation is going to be written to
    its own 4 KB cluster? Can you begin to imagine how slow that would be, or how
    paging-to-disk operations would be affected if that were the case? They are in
    effect saying pagefile writes are serial. It doesn't work like that.

    How about the MS document that recommends you configure your pagefile to
    accommodate crash dumps? This might make sense on a single-disk system, but
    what if you want to optimize your pagefile by placing it on a different disk
    or disk subsystem? The recommendation is now to have TWO pagefiles: one on
    the OS disk for dumps and one on the second drive. I ask the question: when
    was the last time a disk dump helped you solve a server issue? My answer has
    always been never. So why configure two pagefiles when the OS will just be
    constantly "checking" the second one [and wasting CPU/disk cycles] and not
    using it? Remember that there is an algorithm in the pagefile operations that
    determines which pagefile it thinks is the faster, and that is the only one it
    uses. You can confirm this yourself by using Performance Monitor on the two
    pagefiles.
     
    Joshua Bolton, Jun 21, 2007
    #5
  6. John John

    John John Guest

    I sort of thought that the reason might be faster disk reads/writes,
    but I thought that the memory pages are (were) swapped out to individual
    clusters in 4 KB chunks anyway, so how would that make it faster? If we are
    to believe or accept that faster disk reads/writes are the reason
    for using 64 KB clusters, then we have to believe that multiple memory
    pages are written to the same disk cluster in lots of up to 16
    pages for this to be true. I find that hard to grasp; can it even be
    done? I have found the following information on the TechNet site:

     
    John John, Jun 21, 2007
    #6
  7. Steve Gould

    Steve Gould Guest

    Some very good points, Joshua. Would you happen to have references you could
    point to at Msft? I would love to read deeper into it. At my last company we
    always had one pagefile placed on a different spindle from the system drive
    and didn't use crash dumps (I agree that they are worthless unless you are a
    developer). At my present company (much more restrictive) we have a small
    pagefile on the C drive, use the small crash dump option, and then spread the
    pagefiles across other drives. Since one of our required system setups is to
    use only a single RAID 5 (unless we are also using external storage) and
    split it into volumes for each drive letter required (usually system, apps,
    and data), I don't see the point in this.

    I am curious what monitoring the pagefile will show me. I'm going to pick a
    particularly busy server tomorrow and monitor it.

    Steve


     
    Steve Gould, Jun 21, 2007
    #7
  8. Steve Gould

    Steve Gould Guest

    I set up monitoring on a heavily utilized database server. I didn't get the
    results I expected. This is an Oracle server with 8 GB of RAM and 12 GB of
    pagefile spread across 3 drives. Maybe this isn't the best server to
    monitor. The three pagefiles are utilized at about 0.3%, pages/sec is about
    26, and page faults/sec is about 1300. All three pagefiles are active, with
    the pagefile on C getting 0.3% and the other two equal at 0.2%. Following the
    graph, I see all three have dips and peaks that roughly mimic each other.

    So (realizing that this is only one server, not several) I see all three
    pagefiles being used equally, not just one...
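
    If anyone wants to crunch the same numbers, here is a minimal sketch that
    averages a few paging counters from a Performance Monitor log exported to
    CSV; the file name and the counter column headers below are assumptions and
    will differ per machine:

        import csv
        from statistics import mean

        # Sketch only: average paging counters from a Performance Monitor log
        # that has been relogged to CSV, e.g.
        #     relog pagefile.blg -f CSV -o pagefile_counters.csv
        # The file name and exact column headers are assumptions; real headers
        # carry the machine name and pagefile path as a prefix.
        LOG_FILE = "pagefile_counters.csv"
        COUNTER_SUFFIXES = [
            r"\Paging File(_Total)\% Usage",
            r"\Memory\Pages/sec",
            r"\Memory\Page Faults/sec",
        ]

        with open(LOG_FILE, newline="") as f:
            rows = list(csv.DictReader(f))

        for suffix in COUNTER_SUFFIXES:
            # Match on the suffix so the "\\MACHINENAME" prefix doesn't matter.
            column = next((c for c in rows[0] if c.endswith(suffix)), None)
            if column is None:
                print(f"{suffix}: not found in log")
                continue
            values = [float(r[column]) for r in rows if r[column].strip()]
            print(f"{suffix}: average {mean(values):.2f} over {len(values)} samples")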

    Steve


     
    Steve Gould, Jun 22, 2007
    #8
