In the first part of Slaughter Analytics, I showed how I performed self-service ETL to pull in and model data from a livestock slaughter statistics web page. In this post I shall focus on performing some simple analytics through visualizations.
Excel 2013 gives us two more add-ins for visualization, interactive analytics, and storytelling: Power View and Power Map. The video below shows the demo I used to analyze the extracted and modeled data.
Slaughter Analytics – Visualization
Continue reading Slaughter Analytics – Part 2
I had proposed using proactive caching for a near real-time cube. The idea was that when the ETL completed every five minutes, proactive caching would automatically kick in and process the cube.
It seemed simple. Configuration was straightforward, and it worked as expected in the development and QA environments. But on UAT, proactive caching simply would not start. Everything was configured exactly as it was on dev and QA, and permissions checked out, yet for some reason proactive caching never kicked in to automatically process the cube once the underlying table was updated. The ETL runs every five minutes and takes only a minute to update the underlying table, so I knew exactly when the silence interval should expire and tell Analysis Services to start processing. Yet nothing happened.
Continue reading Proactive Caching: Automatic Processing would not start
For those who are interested, here are the two presentations I delivered at Dev Day 2014 and at SLASSCOM TechTalks: Smart Data Engineering, on the 17th and 26th of November respectively.
Serve Yourself: Self-Service Business Intelligence
(Dev Day 2014)
Self-Service Business Intelligence
(SLASSCOM TechTalks: Smart Data Engineering. Lightning Talk)
So I got this wonderful opportunity to present at Dev Day 2014, here in Colombo, a couple of weeks ago, and since my craze for Power BI was still at an all-time high, I submitted a talk on self-service business intelligence. Despite the concept having been around in the mainstream for at least a couple of years, it was deemed a fantastic topic to present, and I was given the go-ahead. A week later, I got another opportunity to present the same material, albeit in a condensed version, as part of a series of lightning talks at the first ever SLASSCOM TechTalks session, titled “Smart Data Engineering”.
This post is a follow-up to those presentations, in which I walk through the demo I performed on stage.
Continue reading Slaughter Analytics – Part 1