Vista runs out of physical memory when working with files > 1GB - memory management issue?

Discussion in 'Windows Vista Performance' started by Robert Janik, Mar 29, 2010.

  1. Robert Janik

    Robert Janik Guest

    I regularly see an issue where Vista runs out of physical memory and starts
    swapping into the page file; as hard drive activity dramatically increases,
    Explorer stops responding or responds very slowly. One example is a
    situation where I have Visual Studio open while editing a 3D scene in
    trueSpace. After opening and closing a 3D scene several times I experienced
    very slow response from the OS. Closing trueSpace and starting it again
    solves the problem for a while. I didn't quite understand what was going on
    until recently, when I was burning ISO images stored on the hard drive to a
    DVD and started investigating deeper.

    My configuration:
    Machine: Asus R1E (tablet)
    Processor: Intel Core 2 Duo 2.4GHz
    RAM: 4GB
    OS: Vista Ultimate 64-bit

    I was burning images of around 4 GB using Nero Burning. When I start the
    application for the first time and start burning an image, physical memory
    usage goes up in roughly 50-100 MB increments until it reaches 4 GB; then
    the OS starts using the page file extensively and hard disk activity
    increases. Burning finishes and physical memory usage goes back down.
    Verification of the data written to the DVD starts and memory usage goes up
    again the same way. When verification finishes I select another ISO image
    and continue burning without closing the application. From this point on,
    things get worse. When I start burning another image, memory usage goes up
    again and hard drive activity increases to the point where I cannot use the
    machine. Explorer stops responding. I press Ctrl-Alt-Del and it takes about
    15 minutes before I see a black screen as the OS tries to open the secure
    logon screen; after another 5 minutes I see the error "Failure to open
    secure logon screen". I leave the machine running and it is able to finish
    burning. The OS starts responding, but very slowly. I still cannot open the
    logon screen, but I am able to start Task Manager by right-clicking the
    taskbar (very slow again) and selecting the menu item. Physical memory
    usage is high and doesn't go down anymore. When verification of the burned
    content starts, the situation repeats: high physical memory usage, high
    hard drive activity and slow Explorer response. At this point we are not
    burning data, just reading data from the DVD and comparing it to the data
    on the hard drive. When the process finishes, the OS starts responding.
    Free physical memory is 0 and memory used is 4 GB. When I add up the
    virtual memory allocated by all processes, it comes to 1.5 GB; if I add the
    kernel memory usage shown in Task Manager, it is altogether no more than
    2 GB. Where has the rest of the physical memory gone? The application's
    working set is only around 150 MB as reported by Task Manager, and its
    commit size about 100 MB. I close the application.
    I rebooted the machine and tried again. There were no other applications
    running, nothing in the background, no antivirus, only the basic tray icons
    (sound, network, etc.). I started burning images again. The first attempt
    runs with physical memory usage of only around 1 GB, holding steady until
    it finishes, but on the second round everything repeats: memory usage going
    up, the OS slowing down. I tried a different application, but I see exactly
    the same pattern.
    I cannot believe that exactly the same pattern of memory usage and slow OS
    response is caused by memory leaks. Considering the different scenarios
    (burning large files, working with 3D modeling software and Visual Studio,
    and other situations), I cannot believe that all these applications have
    memory leaks or suffer from bugs that would create this issue.
    The issue seems to be related to files larger than 1 GB, but it is possible
    that with smaller files I would just need to work longer without closing
    the application to reproduce the problem (as happened with trueSpace, where
    the files certainly didn't even reach the 1 GB level, but I always worked
    with it the whole day).

    When I start an application for the first time and start working with large
    files, it needs to load content into a buffer, which means it allocates
    space on the heap. The application processes the buffer and releases it,
    allocates space for another buffer, and so on. As new allocations are made,
    the requests go to the backend allocator, and since each new buffer can be
    bigger than the last, a new allocation is made. As the application runs,
    new allocations may split large blocks, and when a new request comes in,
    another large block is allocated.
    This goes on until the whole segment is used up and a new memory segment is
    allocated. As memory gets more and more fragmented and new allocations are
    made, the system eventually runs out of physical memory. Even though blocks
    of memory are de-committed, a lot of memory stays reserved, which is why
    Task Manager reports only about 2 GB committed out of 4 GB across all
    processes, and a commit size of 100-150 MB for the application causing the
    issue.

    I know that Vista uses the low-fragmentation frontend allocator by default.
    Is it possible that these applications explicitly switch to lookaside
    lists? Could this cause the problem?
    Is there a way to configure memory management and optimize it to solve the
    issue?
    Robert Janik, Mar 29, 2010

  2. Badger

    Badger Guest

    When was the last time you did a Disk Cleanup to get rid of temporary
    internet files and cookies?
    Badger, Apr 9, 2010

  3. Robert Janik

    Robert Janik Guest

    I clean up temporary internet files and cookies every day, every time I'm
    about to close IE, and I run Disk Cleanup very often.

    I have also seen this issue on Windows Server. On one of my work
    assignments I was investigating exactly the same pattern: physical memory
    usage going up, after which the server started responding very slowly and
    finally the remote session was disconnected. Backend servers were
    processing 10 GB files, the services needed more time than the theoretical
    time span and sometimes failed, and nobody could explain why. The issue
    resurfaced in the form of database replication problems, network timing
    issues and others. When it happened on the test server I had to request a
    server reboot. Once, when the test server stopped responding while I was
    monitoring this memory usage issue, I left it running over the weekend and
    decided to request a reboot on Monday morning; however, the system
    recovered on its own as the service completed (or failed) processing the
    data and exited. Memory was released, and the server recovered and was
    responsive again.

    I think this issue might be caused by cache management and heap
    fragmentation. Heap fragmentation contributes to higher memory usage, but
    the primary source of the problem seems to be cache management. I know the
    exact steps to reproduce the issue, so I can simply avoid it. I can now
    predict when the system is about to stop responding and take corrective
    action: stop and restart the application that is working with files of
    1 GB or larger.
    Robert Janik, Apr 21, 2010
