Thursday, May 25, 2023

Using Spatial Studio to Extend / Enhance Dimensions for OAC

Oracle Spatial Studio is a great tool that can be leveraged to extend or enhance location-based data you may already be working with in your analytic projects.

Say you would like to define sales regions based on the counties within a state, and you would like to view those regions on a map.

Here is a publicly available map of Missouri. The task is to create six sales regions so that sales can be tracked by region, with the ability to drill into a region to see the sales by county.

I exported the shapefile and imported it into Spatial Studio via Create Dataset.

I provide a meaningful name for the Dataset as well as the name of the table in the database.

 
Next, I need to designate a column in the dataset to be the key. I picked CountyFIPS, as it is unique for each county.

Now I can create a project from the new dataset.

Creating custom regions from existing boundaries is actually quite easy. Since each county has its own boundary, I can pick the counties I want in my new region by holding the Shift key and clicking with the mouse, or by using the lasso tool.

Now click Create Custom Region on the Actions toolbar.

In the dialog, select Create New, then provide the new table name, dataset name, column name, and key value.

The newly created dataset is part of the project and can now be added to the canvas by dragging it onto the existing map.

Now I just need to make five more.

This time I will select Append to Existing, choose the previously created dataset, and give the new attribute a name.

Once all six regions are created, I have a new map layer.

To link regions to counties, I now need to create a cross-reference dataset. This involves the Analytics feature of Spatial Studio. Click Create Analysis.

From the menu of analyses, I select Return shapes having spatial relationship to another.

I provide a name, select the county boundaries as the layer to be filtered, select the regions as the layer to act as the filter, and then set the mask to Inside and Covered by.
Click Advanced and check the option to include the key of the secondary dataset in the result.

The result can be viewed as a table.

Now I can save the analysis back to my database as a new table in my ADW instance.

And when I look at it in SQL Developer, I can see the county and region information.

For the last bit in Spatial Studio, I need to export the custom regions as a GeoJSON map layer and then import it into OAC Custom Map Layers.


Change the file format to GeoJSON and check REGION_NAME so that it is included in the file.
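For reference, the relevant structure of the exported file looks something like the following; REGION_NAME inside the properties block is what OAC matches data against. The coordinates below are placeholder values for illustration, not an actual region boundary.

    {
      "type": "FeatureCollection",
      "features": [
        {
          "type": "Feature",
          "properties": { "REGION_NAME": "Region 1" },
          "geometry": {
            "type": "Polygon",
            "coordinates": [[[-94.6, 39.1], [-93.9, 39.1], [-94.2, 38.7], [-94.6, 39.1]]]
          }
        }
      ]
    }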


In the OAC Console Maps interface, import the GeoJSON file and check REGION_NAME.


I now have a map layer of Regions and Counties.


Now to create a dataset with the tables from my ADW.



Next, the workbook to analyze sales by region.


Since I have the county-to-region table, I can drill from a region to county-level data by right-clicking on a region and selecting Drill to Attribute/Hierarchy.


In this case I want to drill to COUNTYNAME.


A filter is set to the region I right-clicked on, and the map now shows the counties within the selected region.


I hope this example shows how easy it is to use Spatial Studio to help extend dimensions and create hierarchical geographic data.

Thursday, April 27, 2023

Near Parallel Data Loading using EPM Groovy REST API

Data loading using EPMAutomate for a range of periods is always treated in a sequential manner. This is in part due to how Data Management processes data between the staging and mapping tables under the covers; just look at the process log. The Data Exchange framework offers some efficiencies and ways to speed up data loading, but EPMAutomate still processes one period at a time, even in Quick Mode.

In a recent project, loading one year of Forecast took 20 minutes using Quick Mode.

We needed to load three years, and the forecast had to be reloaded several times a day during the forecast cycle, so waiting nearly an hour was not an option.

As I pointed out in my blog post EPM Groovy RestAPI vs Groovy EPMAutomate for Data Loading, a REST API call uses a POST and requires a GET to retrieve the status.

This presents an opportunity to think outside the box. After consulting with Joe Aultman, the Evangelist of Groovy, we decided that instead of sending a range in one EPMAutomate call, we would loop through the 12 periods and send a POST to runIntegration for each period. Looking at the process details, we saw 12 active integrations.

First (1) we need to create a list of the POST calls, then (2) iterate through the list, issuing a GET for the status of each runIntegration; once all 12 integrations have finished, the code proceeds to the next step.
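Here is a minimal sketch of that pattern. The connection name EPM_REST, integration name FCST_LOAD, period list, and load modes are placeholders for illustration; the endpoint is the Data Integration jobs REST resource, and the exact status strings and polling interval should be verified against your environment.

    import groovy.json.JsonOutput
    import groovy.json.JsonSlurper

    // Placeholder names for this sketch: adjust connection, integration, and periods
    String connName = "EPM_REST"
    String integration = "FCST_LOAD"
    List<String> periods = ["Jan-23","Feb-23","Mar-23","Apr-23","May-23","Jun-23",
                            "Jul-23","Aug-23","Sep-23","Oct-23","Nov-23","Dec-23"]

    // (1) Send one runIntegration POST per period and keep each returned job id
    List jobIds = periods.collect { String period ->
        HttpResponse<String> post = operation.application.getConnection(connName)
            .post("/aif/rest/V1/jobs")
            .header("Content-Type", "application/json")
            .body(JsonOutput.toJson([jobType   : "INTEGRATION",
                                     jobName   : integration,
                                     periodName: "{$period}",
                                     importMode: "REPLACE",
                                     exportMode: "MERGE"]))
            .asString()
        new JsonSlurper().parseText(post.body).jobId
    }

    // (2) Poll each job with a GET until none of the 12 is still running
    boolean stillRunning = true
    while (stillRunning) {
        sleep(15000)  // pause between polling sweeps
        stillRunning = jobIds.any { id ->
            HttpResponse<String> check = operation.application.getConnection(connName)
                .get("/aif/rest/V1/jobs/$id").asString()
            new JsonSlurper().parseText(check.body).jobStatus == "RUNNING"
        }
    }
    // All 12 integrations have finished; the rule can proceed to the next step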

 

Using this approach, we were able to cut the data load time from 20 minutes per forecast year to less than 6 minutes.

 




Friday, April 21, 2023

EPM Groovy RestAPI vs Groovy EPMAutomate for Data Loading

Lately I have been doing a lot more EPM Data Integration work. I have been trying to move to more server-side automation using Groovy in business rules to replace on-premises EPMAutomate scripts for most of my data loads.

Here is a link to the documentation.

Before this was available, there was and still is the ability to call REST APIs to invoke actions within EPM.

Here is a link to the documentation.

One of the reasons I like using server-side automation is that it puts the responsibility for loading data back in the hands of the business user.

From a developer's perspective, I believe Groovy with EPMAutomate is easier and has a less steep learning curve than Groovy with the REST API. Additionally, you can do the same action with a lot less code using Groovy and EPMAutomate.

Here is an example of running a data integration with the REST API.
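It looks something like this (a sketch; the connection name EPM_REST and integration name FCST_LOAD are placeholders):

    import groovy.json.JsonOutput

    // Submit the integration run; the POST returns immediately with a job id
    HttpResponse<String> response = operation.application.getConnection("EPM_REST")
        .post("/aif/rest/V1/jobs")
        .header("Content-Type", "application/json")
        .body(JsonOutput.toJson([jobType   : "INTEGRATION",
                                 jobName   : "FCST_LOAD",
                                 periodName: "{Jan-23}",
                                 importMode: "REPLACE",
                                 exportMode: "MERGE"]))
        .asString()
    println("runIntegration submitted, HTTP status: ${response.status}")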


One thing to note about the REST API: there are two main verbs used. POST sends the action request to the remote server; GET retrieves the status or response of that POST. So after you send the POST, you need another bit of code to check whether the requested action has finished and whether it succeeded or errored. You need a block of code like the following:
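Continuing from the POST above, here is a sketch of that check (the jobStatus strings are assumptions to verify in your instance):

    import groovy.json.JsonSlurper

    // Pull the job id from the POST response, then loop on GET until the job ends
    def jobId = new JsonSlurper().parseText(response.body).jobId
    String status = "RUNNING"
    while (status == "RUNNING") {
        sleep(10000)  // give the job time to progress between checks
        HttpResponse<String> check = operation.application.getConnection("EPM_REST")
            .get("/aif/rest/V1/jobs/$jobId").asString()
        status = new JsonSlurper().parseText(check.body).jobStatus
    }
    println("Integration finished with status: $status")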

 


Now with EPMAutomate, it is different. The framework has a built-in wait behind the scenes, so all that is needed is to collect the status of the execution.
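For example (a sketch; the integration name and parameters are placeholders, and your setup may require an EPM Automate login step first):

    // Run the same integration through the embedded EPMAutomate API;
    // execute() waits for completion, so only the returned status needs checking
    EpmAutomate automate = getEpmAutomate()
    EpmAutomateStatus status = automate.execute("runIntegration", "FCST_LOAD",
        "periodName={Jan-23}", "importMode=REPLACE", "exportMode=MERGE")
    println("runIntegration returned ${status.status}: ${status.output}")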

 

 

And by wrapping the call in a function, the logging is cleaner:
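For instance, a small hypothetical helper like this keeps each call to one tidy log line:

    // Hypothetical helper: run an EPMAutomate command and log the outcome
    def runAndLog(EpmAutomate automate, String command, String... args) {
        EpmAutomateStatus result = automate.execute(command, args)
        println("$command -> exit ${result.status}: ${result.output}")
        return result
    }

    EpmAutomate automate = getEpmAutomate()
    runAndLog(automate, "runIntegration", "FCST_LOAD", "periodName={Jan-23}")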


Each approach has its merits and use cases; however, with the introduction of Groovy rules that can run EPMAutomate commands, the development time and lines of code are greatly reduced.