PowerPivot Stats

Here’s what a recent PowerPivot process looked like:

  • 14,512,074 rows
  • 7 columns
  • 14:09 (mm:ss) to load data from SQL Server at 30M
  • 5:36 to save the file
  • 129 MB file size
  • 0:14 recalc time
  • 1,559,700 KB (~1.5 GB) used by the EXCEL.EXE process

On this client machine:

  • Windows 8.1 Pro 64-bit
  • Excel 2010 64-bit
  • 36 GB RAM
  • 2.4 GHz Xeon processor

Leave your story in the comments.

10 thoughts on “PowerPivot Stats”

  1. My story is that my work computer is 32-bit, so PowerPivot poops out at about the same point as a regular pivot. I tried feeding it a 1-million-row CSV just to test, and it got to about 750k rows.

    How many rows were in the results of whatever you did with all that data?

  2. Volume: number of characters unknown
    Transfer rate: characters per load time and per save time
    Compression ratio: characters per disk space and per RAM space

    At any rate, a long waiting time.

  3. The resulting pivot was 16 stores (rows) x 12 months (columns). We needed to see trends by store over the past year to figure out what was causing the increase in the total.

    When we added one more pivot item, the memory usage went from 1.5 GB to ~3 GB. It must be building a cube on the fly in the background?

    Just for fun, and because tomorrow’s a holiday, we might set up an analysis server, create a cube on the server, and use that cube as the source for PowerPivot. Just to see how offloading all the processing to the server affects things.

  4. “we might set up an analysis server, create a cube on the server, and use that cube as the source for PowerPivot”

    I guess you meant to say use the cube as the source for a pivot table, because if you have Analysis Services you don’t need PowerPivot at all.

    Regards

  5. @Dick, can you share with us:
    1) The data schema
    2) The SQL queries that have been used for the analysis

  6. Greetings;

    I’m the IT Manager at the same company Dick works for; it was my data and stats he posted. The data came from a query of our payment environment. One of the reports we ran showed a huge increase, more than I felt was reasonable.

    The data used is in a single table, from which we select 7 columns. Both CardType and Processor have a limited set of unique values.

    Terminal CHAR(7),
    CardType VARCHAR(30),
    Processor VARCHAR(30),
    Quantity NUMERIC(15,3),
    Dollars NUMERIC(15,2),
    BusinessDate DATETIME,
    PCI BIT
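
    That load was essentially a straight SELECT of those seven columns from the one table. A minimal sketch, with an illustrative table name standing in for the real one:

    -- Minimal sketch of the PowerPivot source query.
    -- dbo.PaymentTransactions is an illustrative name, not the real table.
    SELECT Terminal,
           CardType,
           Processor,
           Quantity,
           Dollars,
           BusinessDate,
           PCI
    FROM   dbo.PaymentTransactions;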

    The goal was to chart the transaction volumes so I could find when and how they changed. I suspected there was a date when our data changed. Using the pivot table, I could easily apply slicers and dissect the data until I figured out what was misreporting and when it started.
    I found that, for one processor, we had started doubling the volume; this coincided with a system change.
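
    A rough T-SQL equivalent of that dissection, for anyone who wants to check it outside the pivot, is monthly volume per processor; the table name below is again illustrative:

    -- Rough equivalent of the pivot: monthly transaction volume per processor.
    -- A sudden jump for one processor marks when the doubling started.
    SELECT   Processor,
             DATEADD(MONTH, DATEDIFF(MONTH, 0, BusinessDate), 0) AS BusinessMonth,
             SUM(Quantity) AS TotalQuantity,
             SUM(Dollars)  AS TotalDollars
    FROM     dbo.PaymentTransactions
    GROUP BY Processor,
             DATEADD(MONTH, DATEDIFF(MONTH, 0, BusinessDate), 0)
    ORDER BY Processor, BusinessMonth;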

    Regards

  7. John can clarify this if I got anything wrong. He did the same process using Analysis Services and concluded (with some caveats) that SSAS was the better way to go. Obviously the file size went way down and changes to the pivot table were instantaneous, but we knew that was going to happen. It was setup time that would be the deciding factor.

    The SSAS cube took John about 10 minutes to set up, ~5 minutes less than it took to load the data into PowerPivot. With only 7 columns, he cubed everything, because why not.

    The aforementioned caveat is that John is really good at this stuff, had a box sitting on the domain ready to go, and has set up so many servers that he could do it in his sleep. If you have to google something to get yours set up, you’re probably not as fast. But even if it takes 20 minutes to get it set up, the savings in calc time when you want to change something are worth it.

    For this data set and our circumstances, SSAS feeding a pivot table was the way to go over PowerPivot. But I’m glad we did it both ways. (And by “we” I mean I had nothing of value to add to the process.)

