Part 3- Creating the Continuous Integration Pipeline (Part -1) - CI/CD in Azure Databricks in Tamil
#databricks #azuredatabricks #azuredataengineer #azureintamil #azuretutorialforbeginners #databricksintamil #cicd #azuredevops #devops
In this Part 3 video of CI/CD in Azure Databricks, I discuss the Continuous Integration (CI) pipeline process and create the setup required for building the CI pipelines.
The next parts will be uploaded soon, stay tuned!
Part #4. Creating CI Pipeline (Part 2)
Part #5. Creating CD Pipeline and End to End Testing
Chapters:
0:00:00 - Intro
0:01:23 - CI Pipeline process explained
0:10:17 - Organizing the folder structure needed for the CI/CD pipeline
0:15:23 - Cloning the Repo to VS code
Please like, share and subscribe if you like the content and leave your comments below.
For contact,
Email: [email protected]
Instagram: mrk_talkstech_tamil
#AzureDatabricks #ApacheSpark #Sparkcompute #clusters #notebooks #magiccommands #machinelearning #ETL #CICD
Video Transcript
We have completed the first two sections of CI/CD in Azure Databricks. In the first section, we saw what CI/CD is and understood it with an example. In the previous section, we completed the entire environment setup required for the CI/CD pipeline. Now we will see how to create the CI/CD pipeline. In this section, we will see how to create the continuous integration pipeline, so we will discuss how it works.
In the previous section, we saw how the CI/CD pipeline works; let us understand it with this particular diagram. We have a dev environment and a production environment. We make changes in the Databricks workspace in the dev environment. Once we complete the changes, we merge them into the main branch, and the CI/CD pipeline is triggered. After that, it takes the latest code from the dev environment and deploys it to the production environment. This is the CI/CD process we have seen earlier.
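The trigger just described (merging into main kicks off the pipeline) can be sketched as a minimal Azure DevOps YAML pipeline definition. This is an illustrative assumption, not taken from the video; the pool image and the placeholder step would be replaced by your actual CI steps:

```yaml
# Minimal azure-pipelines.yml sketch: run the CI pipeline
# whenever changes are merged into the main branch.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest   # hosted agent; adjust to your organization

steps:
  - script: echo "CI pipeline triggered by a merge to main"
    displayName: Placeholder CI step
```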
Now we are going to see the functionality of the continuous integration pipeline, and what the repository setup looks like. In the continuous integration pipeline, compared to this diagram, we have to do one extra step; we will see what that is.
So first, recall what we did in the last section: we integrated the Azure DevOps repository with the dev Databricks workspace. So now, in the dev Databricks workspace, there is a folder called Repos. We have to move our notebooks into that folder, because that is where we do the actual work.
So consider that you create a feature branch in the Repos location, and after creating the feature branch, you add a new notebook. After that, you create a pull request and merge the changes into main; then our CI/CD pipeline will be triggered.
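The feature-branch flow just described can be sketched with plain Git commands. The branch name and notebook file below are illustrative assumptions, and the temporary local repository only simulates the folder synced to the Repos location; the pull request itself is raised in Azure DevOps:

```shell
# Simulate the repo folder that is synced to the Repos location.
cd "$(mktemp -d)" && git init -q .
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"

# Create a feature branch and add a new notebook on it.
git checkout -q -b feature/new-notebook
echo 'print("hello from the new notebook")' > new_notebook.py
git add new_notebook.py
git commit -q -m "Add new notebook"

# Push the branch and open a pull request targeting main in Azure DevOps;
# merging that PR is what triggers the CI pipeline.
# git push origin feature/new-notebook
```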
It is the same process, but as I said, in the continuous integration pipeline we do one extra step. What we have to do is this: the latest code is in main, correct? So that latest code in the main branch will be in the Repos location of the dev Databricks workspace, correct?