
Part 3- Creating the Continuous Integration Pipeline (Part -1) - CI/CD in Azure Databricks in Tamil

#databricks #azuredatabricks #azuredataengineer #azureintamil #azuretutorialforbeginners #databricksintamil #cicd #azuredevops #devops

In this Part 3 video of CI/CD in Azure Databricks, I have discussed the process of the Continuous Integration (CI) pipeline and created the setup required for building the CI pipelines. The next parts will be uploaded soon, stay tuned!

Part #4. Creating CI Pipeline (Part 2)
Part #5. Creating CD Pipeline and End to End Testing

Chapters Timestamp:
0:00:00 - Intro
0:01:23 - CI Pipeline process explained
0:10:17 - Organizing the folder structure needed for CI/CD Pipeline
0:15:23 - Cloning the Repo to VS Code

Please like, share and subscribe if you like the content, and leave your comments below.

For contact,
Email: [email protected]
Instagram: mrk_talkstech_tamil

#AzureDatabricks #ApacheSpark #Sparkcompute #clusters #notebooks #magiccommands #machinelearning #ETL #CICD
Video Transcript

0:00
We have completed the first two sections of CI/CD in Azure Databricks.
0:06
We saw what CI/CD is in the first section.
0:10
We understood it with an example.
0:13
We saw the complete environment setup in the last section.
0:18
We covered all the environment setup required for the CI/CD pipeline there.
0:24
Now we will see how to create the CI/CD pipeline.
0:28
In this section, we will see how to create a continuous integration pipeline
0:40
So, we will discuss how it works.
0:42
In the previous section, we saw how the CI/CD pipeline works.
0:49
Let us understand this with this particular diagram.
0:53
So, we have the dev environment and production environment
0:57
We make changes in the Databricks workspace in the dev environment.
1:03
Once we complete the changes, we merge them into the main branch.
1:07
When we merge, the CI/CD pipeline will be triggered.
1:11
After that, we take the latest code from the dev environment and deploy it to the production environment.
1:18
This is the CI/CD process that we saw earlier.
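The merge-to-main trigger described above might be sketched in an Azure DevOps YAML pipeline definition like the one below. This is an illustrative sketch only, not the exact pipeline built later in this series; the `notebooks` folder path and artifact name are assumptions.

```yaml
# azure-pipelines.yml -- illustrative sketch, not the pipeline from the video.

# Run the pipeline whenever changes are merged into the main branch
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Check out the latest code from the main branch
  - checkout: self

  # Publish the notebooks folder as a build artifact so a release (CD)
  # pipeline can pick it up and deploy it to the production workspace
  - publish: '$(Build.SourcesDirectory)/notebooks'
    artifact: notebooks
```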
1:23
We are going to see the functionality of the continuous integration pipeline
1:29
So, we are going to see what the repository is.
1:31
So, in the continuous integration pipeline
1:33
From this diagram
1:35
We have to do one extra step
1:37
We will see what that is
1:40
So, first
1:41
We have done one thing in the last section
1:43
What we have done is
1:46
With this
1:47
Dev Databricks workspace
1:49
We have integrated the Azure DevOps repository
1:52
So, now
1:53
In the Dev Databricks workspace
1:55
There is a folder called Repos
1:56
we have to move the notebook into that folder
2:01
we are going to do the actual work there
2:06
so consider that you are creating a feature branch
2:09
in the Repos location
2:10
after creating the feature branch
2:11
you are adding a new notebook
2:14
consider that
2:15
after that you create a pull request
2:19
and merge the changes into the main branch
2:22
then our CI/CD pipeline will be triggered
2:26
same process
2:27
so here, I told you there is one extra step for the continuous integration pipeline
2:33
what we have to do is
2:35
in main, the latest code is there
2:38
correct?
2:39
so the latest code in the main branch
2:41
that will be in the Dev Databricks workspace
2:45
in the Repos location
2:46
correct?
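The feature-branch flow described above can be sketched locally with plain git. Everything here is illustrative, not from the video: the repo name (ci-cd-demo), branch name, notebook file, and user identity are all made up, and the pull request step is simulated as a local merge.

```shell
# Hypothetical local walkthrough of the branch-and-merge flow; names are
# made up for illustration.
mkdir ci-cd-demo && cd ci-cd-demo
git init -b main
git config user.email "demo@example.com"   # local identity so commits succeed
git config user.name "Demo User"
git commit --allow-empty -m "initial commit"

# Create a feature branch, as you would from the Repos folder in Databricks
git checkout -b feature/add-notebook

# Add a new notebook (Databricks notebooks sync to the repo as source files)
printf '# Databricks notebook source\nprint("hello from CI")\n' > my_notebook.py
git add my_notebook.py
git commit -m "Add new notebook"

# In Azure DevOps you would raise a pull request here; merging it into
# main is what triggers the CI pipeline. Locally we simulate the merge:
git checkout main
git merge --no-ff feature/add-notebook -m "Merge PR: add new notebook"
```

After the merge, main holds the latest code, which is what the Repos location in the Dev Databricks workspace would sync to.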