Leveraging Azure Databricks for large and heterogeneous sources of data
July 18, 12:00 pm (IST) · Zoom
Databricks Consultant, Ciklum
Why is this topic hot?
With the exponential growth of AI-based solutions in recent years, there has been a significant increase in focus on core data engineering capabilities. As organisations have moved from robust, structured data warehouse solutions to unstructured data dumps in data lakes, and eventually to a hybrid delta lake approach, they have been investing heavily in managing, processing and transforming large volumes of raw data to make it ready for machine learning and AI.
In this session, we will provide an overview of how to leverage Azure Databricks to manage large and heterogeneous data sources.
We will talk about the following:
- Introduction to Big Data, Spark and Databricks
- Walkthrough of the medallion architecture
- Evolution from data warehouses to delta lakes
- Projects in Azure Databricks
- Managing data security and access with Unity Catalog
Karthikeyan Sivasubramanian, Databricks Consultant at Ciklum
- IT professional with over two decades of experience in software development, data and AI/ML solutions
- Has worked across numerous business domains such as insurance, banking, hospitality, manufacturing and transportation
- Visiting faculty at Great Lakes Institute of Management (Mahabalipuram, India), where he delivers corporate learning programs in data analytics, machine learning and AI for large organisations
We are
- 25+ offices globally
- 20+ clients reached IPO stage
- 4000+ seasoned, like-minded experts globally
- Key client domains: healthcare, fintech, travel, sportswear, entertainment, security
Leveraging Azure Databricks
Since 2002, we’ve engineered technology that redefines industries and shapes the way we live
Who is this event for?
Azure / Databricks professionals
Aspirants in data engineering and data architecture
Registration is closed
You can find more opportunities here: