Leveraging esProc for a Heterogeneous Data Source Computation Issue


Recently we resolved an issue with Crystal Report related to heterogeneous data source computation. Due to its complexity, the computation could not be done with the built-in functionality of Crystal Report, so esProc was used for the cross-database computation.

Project background: the customer recently rolled out a KPI system, which required some modifications to their original salary computation algorithm. In the past, the salary list was calculated mainly from the employees' basic salary, stored in the MSSQL database of their accounting system. The new salary list consists of two parts: basic salary and performance-based salary. The performance-based salary is calculated from each employee's performance score, stored in the Oracle database of the KPI system. Obviously, the new salary list has to be generated by cross-database computation over two heterogeneous databases.

The actual algorithm for computing performance-based salary is quite complicated. Firstly, the algorithm differs by position. Some positions are based on the level of the basic salary, while others are not; some are purely performance-based, while others also consider how many years the employee has worked for the company; and some positions have a performance score but no performance-based salary at all. Secondly, even for positions based on the basic salary level, the algorithm may differ, since the salary levels differ by position, and within each level the algorithm may differ again. Finally, the salaries of all employees need to be combined into one report.

For better understanding, we simplified the algorithm significantly and ignored tax implications. We limited the positions to two: "normal" and "sales". Position "normal" has a performance score but no performance-based salary, so the pre-tax salary equals the basic salary. For position "sales", the pre-tax salary is the sum of the basic salary and the performance-based salary, where the performance-based salary is computed as follows:

For employees with basic salary below 2000: performance-based salary = basic salary * (performance score / 100)
For employees with basic salary between 2000 and 4000: performance-based salary = basic salary * (performance score * 0.9 / 100)
For employees with basic salary above 4000: performance-based salary = basic salary * (performance score * 0.8 / 100)
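The tiered formula above can be sketched in Python (a simplified illustration; the function and parameter names are our own, not from the esProc script):

```python
def performance_salary(base_salary, score):
    """Performance-based salary for a 'sales' employee,
    using the tiered coefficients described above."""
    if base_salary < 2000:
        coeff = 1.0
    elif base_salary <= 4000:
        coeff = 0.9
    else:
        coeff = 0.8
    return base_salary * (score * coeff / 100)

def pretax_salary(position, base_salary, score):
    """Pre-tax salary: 'normal' employees have a score
    but receive no performance-based salary."""
    if position == "sales":
        return base_salary + performance_salary(base_salary, score)
    return base_salary
```

For example, a "sales" employee with a basic salary of 3000 and a score of 100 falls into the middle tier and earns 3000 + 3000 * (100 * 0.9 / 100) = 5700 pre-tax.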

Thus, to generate the complete salary list, we need to split the employees in the employee table in MSSQL into several groups (two after the simplification), compute the pre-tax salary for each group, and then combine the groups into one list. The computation differs by position: for employees with position "sales", the performance table in the Oracle database has to be joined in, with the pre-tax salary computed according to the respective salary level; for employees with position "normal", no such join is needed.

The difficulties in this report are: 1) the employee and performance tables belong to two heterogeneous databases, which requires cross-database computation; 2) the algorithm is complicated, and simply joining the two tables cannot do the job.

The ideal solution for cross-database computation would be to handle it in the reporting tool: if the tool could process two heterogeneous data sources in one report, the computation could be done at the "report level". However, Crystal Report handles heterogeneous data sources in a very complicated way and at a high implementation cost. Moreover, reporting tools only support simple inner and outer joins, not complicated computations such as conditional logic inside a loop or aggregating multiple result sets.

Since the reporting tool cannot solve such issues, we have to look elsewhere. Loading the data into a separate database with ETL is not a good choice: ETL development is costly, and data synchronization and real-time updates have to be considered. A user-defined data source simplifies the problem significantly, and esProc serves very well as such a data source for reporting tools.
The esProc script is written in a grid of cells (A1, A2, B7, and so on) and is easy to follow from the cell-by-cell walkthrough below.

A1, A2: retrieve data from the Oracle and MSSQL databases. A3: add an empty column to the employee table to hold the pre-tax salary to be computed.
A4, A10: extract the employees with position "sales" and "normal" respectively. For later aggregation, business-oriented names are more convenient, so we name these two sets sales and normal. Note that we did not define an extra variable for a temporary result such as A3; instead, it is referenced in A4 by its cell name. The same goes for A1, which is referenced in A5.

A5-C9: compute the pre-tax salary for sales. A5 performs the join between the basic salary of the sales employees and their performance scores. A6 to C9 is a loop that computes the pre-tax salary for each row of sales according to the employee's actual salary level. Three things are worth noting: 1) the loop is indicated by indentation, with B7-C9 as the loop body. 2) The loop variable is the cell A6, after the "for" operator; inside the loop, A6 refers to the current record. 3) The expression A6.empID.score shows the object-style reference: it fetches the score field of the record (in performance) joined via the empID field of the current record A6, i.e., the performance score of the current employee.

A11: replace the preTax value in normal with baseSalary.

A12: combine the result sets of the different positions. In reality, of course, the positions are not limited to two, and the algorithm for computing the pre-tax salary of each position is more complicated than in this example.

A13: select the fields of A12 needed for output.

A14: output A13 through JDBC, so that Java code or reporting tools can call it directly via a JDBC URL. This is also a way to unify heterogeneous data sources, whereas the data source consolidation in Crystal Report is too simple to handle such procedural cross-database computation.
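For readers without esProc at hand, the same pipeline can be sketched in plain Python, with the two database query results mocked as in-memory lists (the table and field names follow the example; a real implementation would fetch them via database drivers). The comments map each step back to the cells described above:

```python
# Mocked query results: 'employee' from MSSQL, 'performance' from Oracle (A1, A2).
employee = [
    {"empID": 1, "name": "Ann",  "position": "sales",  "baseSalary": 1800},
    {"empID": 2, "name": "Bob",  "position": "sales",  "baseSalary": 3500},
    {"empID": 3, "name": "Carl", "position": "normal", "baseSalary": 2600},
]
performance = [{"empID": 1, "score": 90}, {"empID": 2, "score": 70}]

# Join key: empID (the role of the join in A5).
score_by_id = {r["empID"]: r["score"] for r in performance}

result = []
for emp in employee:
    # Add the preTax column, initialized to baseSalary (A3, A11).
    row = dict(emp, preTax=emp["baseSalary"])
    if emp["position"] == "sales":
        # Tiered computation per salary level (the A6-C9 loop).
        score = score_by_id[emp["empID"]]
        if emp["baseSalary"] < 2000:
            coeff = 1.0
        elif emp["baseSalary"] <= 4000:
            coeff = 0.9
        else:
            coeff = 0.8
        row["preTax"] += emp["baseSalary"] * score * coeff / 100
    result.append(row)  # Combine both groups into one list (A12).
```

This is only a sketch of the logic; the point of esProc is that the same procedure is expressed in grid cells and exposed to the reporting tool through JDBC, with no application code to deploy.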

With this, the cross-database computation issue with Crystal Report is conveniently resolved. Moreover, Crystal Report can consume esProc in the same way as an MSSQL or Oracle database, which makes the integration easy.
About esProc: http://www.raqsoft.com/product-esproc

More Stories By Jessica Qiu

Jessica Qiu is the editor of Raqsoft. She provides press releases for data computation and data analytics.
