Is there an easy way to review the historical run times for reporting jobs? If so, where are those run-times stored? We would like to integrate them into our metric tracking tools.
Hi! Thanks for your question. To view the run time for previously run jobs, you can look at the history tab in the Activity screen.
There, you can see the elapsed time of your job. It's worth mentioning that you can change how long this history is kept in your configuration settings. So if you would like to keep this history around for a long time in order to report on it, I would recommend increasing that setting.
Another option is the Audit Log, also located in the administration window. From the Audit Log, you can filter the category to Jobs and get details exclusively on which jobs were run and when.
I hope this helps!
-TJ Shannon
Are these stored somewhere we could query or read them programmatically? Even just a log file would be okay.
@rdclapp / Ryan,
This is a good idea. While some of this information is already available in internal files, an addition to the existing Audit Log would provide it in a reliable and parsable way. Currently, the Audit Log writes an entry when the job is started, which clearly wouldn't include run times.
The existing information is located in the {repository dir}/segments/{segment dir}/execution folder. The .json files in this folder provide high-level information about job history. Each .json file represents a job run and contains the job name, start time, end time, and elapsed time.
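If you want to experiment with those files in the meantime, a minimal sketch of reading them might look like the following. Note this parses an internal, unsupported format; the exact JSON keys (e.g. `jobName`, `elapsedTime`) are assumptions here, so inspect one file first to confirm the actual field names.

```python
import json
from pathlib import Path

def read_job_runs(execution_dir):
    """Load every job-run JSON file from an execution folder.

    Each file is assumed to hold one job run's high-level history
    (job name, start time, end time, elapsed time)."""
    runs = []
    for path in sorted(Path(execution_dir).glob("*.json")):
        with open(path) as f:
            runs.append(json.load(f))
    return runs
```

From there you could forward the elapsed-time values into whatever metric store you use.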
Be sure to think through the complexity of the jobs you are measuring so that changes in run time can be explained properly. The elapsed time for a job includes all of the output files that are created and distributed. So, if a job runs on Monday and produces 20 reports but then produces 24 reports next Monday because the parameters used are dynamic (MDX/SQL queries), then we would expect to see longer run times.
We do have information stored in the .db files for the run time of each report output that is created. However, this may be too much detail to expose. I’m curious about your thoughts on that.
Thank you,
-Andy
Thanks. We will look at using something like a CloudWatch agent to capture the metrics and make sense of them. I'll be sure to share anything we find interesting.
Is the segments directory something that is pretty reliable? Or do they change often? I see quite a few segments so I don’t want to point a logger at one and then have it change.
Yes, the folders with IDs for names in the segments folder represent each workspace created in ReportWORQ. Those folders won’t change unless you delete or add workspaces. However, before you parse the JSON files in the execution folder, it may be better for us to support job execution information in a parsable and supported audit log. I’ll lay out some requirements to start.
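If you do go the parsing route before the audit log lands, collecting the history files across every workspace folder could be sketched like this, given the `{repository dir}/segments/{segment dir}/execution` layout described above (the layout is from this thread; everything else is illustrative):

```python
from pathlib import Path

def execution_files(repo_dir):
    """Collect all job-run JSON paths across every workspace (segment)
    folder under the repository directory. Segment folder names are
    workspace IDs and are stable unless workspaces are added/deleted."""
    root = Path(repo_dir) / "segments"
    return sorted(root.glob("*/execution/*.json"))
```

Pointing a log shipper at the `segments` root with a pattern like this avoids depending on any one segment ID.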
Here are some things to consider regarding job execution and apples-to-apples comparisons on timing.
- “Running a job” can be a single job, a list of jobs, or a folder of jobs. The history name will be the source job name, or list of job names. So, if you filter by the history name you’ll see the grouping based on this naming convention.
- When running a job, you can pass parameter overrides to run the job for different filters. We’ll want to capture this information in our log. Parameter overrides might be simple values like Department=Sales or more complex TM1 subset/MDX Queries.
- A job may produce multiple iterations for bursting; we call those Runtime Jobs. You have the option of running all of the Runtime Jobs or just some of them. Also, since the parameter that produces Runtime Jobs can be a query (e.g. Subset or MDX Query), the number of Runtime Jobs may differ from run to run.
- The timing for the overall history item will include all of the runtime jobs. Runtime jobs run in batches concurrently.
Keeping those items above in mind, which may vary from run to run, here are some things we can export into an audit log entry.
The Job Execution Audit log would be a daily rolling JSON file with an entry for each Run.
- ExecutionID: Unique ID to this job run, correlates back to ReportWORQ history
- History Name: The source job name, or Job Names if multiple selected. Correlates back to ReportWORQ history
- Start Time
- End Time
- Elapsed Time: Overall timing of all Runtime Jobs in seconds
- Parameter Overrides: A list of parameter overrides
- Parameter Name
- Parameter Value
- Count of Runtime Jobs: This will help to compare similar-sized job runs
- Runtime Jobs: A list of information for each runtime job
- Runtime Job Name
- Output File Name
- Start Time
- End Time
- Elapsed Time: in seconds
- Parameters: A list of parameter objects
- Parameter Name
- Parameter Value
You can use the root level items in each history entry for high-level auditing and if needed dig into the more complex properties (e.g. Runtime Jobs) for a more detailed analysis.
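To make the shape concrete, here's an illustrative entry built from the proposed spec, plus a helper that pulls out just the root-level fields for quick auditing. The exact JSON key spellings and sample values are assumptions; only the field list itself comes from the spec above.

```python
# Hypothetical audit-log entry shaped like the proposed spec.
entry = {
    "ExecutionID": "abc-123",          # correlates back to ReportWORQ history
    "HistoryName": "Monthly P&L",      # source job name(s)
    "StartTime": "2023-05-01T06:00:00Z",
    "EndTime": "2023-05-01T06:04:10Z",
    "ElapsedTime": 250.0,              # seconds, all Runtime Jobs combined
    "ParameterOverrides": [
        {"ParameterName": "Department", "ParameterValue": "Sales"},
    ],
    "CountOfRuntimeJobs": 20,
    "RuntimeJobs": [],                 # detailed per-output info omitted here
}

def summarize(entry):
    """Keep only the root-level items useful for high-level auditing."""
    return {k: entry[k] for k in
            ("ExecutionID", "HistoryName", "ElapsedTime", "CountOfRuntimeJobs")}
```

Dashboards would mostly consume the summary; the `RuntimeJobs` list stays available for deeper dives.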
How does this specification look to you?
-Andy
The one thing that I would add is Status: Success vs. Failure vs. Partial Failure. This will power our service dashboards so that on-call engineers can look at the raw number of jobs, success vs. failure rates, and elapsed time over time to see if we need to look at scaling our services. For example, a gradual uptick in elapsed time could tell us that more users are using the tool, reports are getting larger, or our model is getting slower, all of which we can figure out with a secondary deep dive. I think this spec gives us what we need as far as metrics to alarm on.
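The dashboard math described here is straightforward once the entries exist. A sketch, assuming entries carry a `Status` field with labels like "Success"/"Failure"/"PartialFailure" (those labels are assumptions, not the product's confirmed values):

```python
def dashboard_metrics(entries):
    """Compute the on-call dashboard numbers from a list of audit
    entries: total job count, success rate, and mean elapsed time."""
    total = len(entries)
    if total == 0:
        return {"total": 0, "success_rate": 0.0, "avg_elapsed_seconds": 0.0}
    success = sum(1 for e in entries if e["Status"] == "Success")
    avg_elapsed = sum(e["ElapsedTime"] for e in entries) / total
    return {"total": total,
            "success_rate": success / total,
            "avg_elapsed_seconds": avg_elapsed}
```

Tracking `avg_elapsed_seconds` over time is what would surface the gradual uptick mentioned above.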
Yes, Status will be included, that was an oversight. Our May release is in testing now, so we’ll look to include this in our June or July releases.
If anyone else has comments on this new feature, please build on this thread.
Thank You,
-Andy
Another thought: is there a way to include something about report volume, like the number of values?
Is this on track for the July release? I don't see it in the June one.
Hey @rdclapp,
Yes, this is on track for the July release. We’ll also have it available sometime in June as part of a beta release, if you’d like to test it out early.
-Justin
Reviewing run times of files in a job that uses TM1 PAfE formulas: how can you see how long each file in a job takes?
If a job is run with Trace and Verbose Logging, you can review the ElapsedMilliseconds for each cell in each workbook and sheet via the file named 'Benchmark.xlsm'.
Our most commonly run job is made up of 30 separate .xlsm files. Painfully, we downloaded all 30 benchmark files and combined their tables into one for analysis. Our findings were very surprising and resulted in changes to our reports to improve run time.
DBRA formulas had the largest ElapsedSeconds at 25.7s of the total 50.9s (50.4%); the second-largest consumer was a ReportWORQ function called RWSuppress at 5.4s of the total 50.9s (10.6%).
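The by-hand aggregation across the 30 benchmark tables can be automated once the rows are exported (e.g. to CSV). A sketch, assuming each row has "Formula" and "ElapsedMilliseconds" columns; those column names are assumptions, so check them against an actual Benchmark.xlsm export:

```python
from collections import defaultdict

def formula_shares(rows):
    """Total elapsed time per formula type and its share of the
    grand total, matching the percentage breakdown described above."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["Formula"]] += row["ElapsedMilliseconds"]
    grand = sum(totals.values())
    return {f: (ms, ms / grand) for f, ms in totals.items()}
```

Sorting the result by share immediately surfaces the heaviest formula types (DBRA, in our case).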
Testing Time:
I created a new job with 1 file that had DBRA retrieving Account Description and Weight. I copied that file and replaced DBRA with DBRW looking at }ElementAttribute_Cube, then compared the Benchmark.xlsm files. Two tests were conducted:
- Run both files: 1st file no DBRA, 2nd file as is
- Run both files: 1st file has DBRA, 2nd file no DBRA
It was very interesting to see the total time for all formulas decrease, especially RWSuppress. If there were an easier way to get a single Benchmark.xlsm with the contents of all the separate files, further analysis could be done on our other jobs.
It was an interesting finding that a DBRA takes significantly longer than a DBRW.
Where to get the Benchmark.xlsm file from a job with Trace/Verbose Logging



