
Part 4- Creating the Continuous Integration Pipeline (Part -2) - CI/CD in Azure Databricks in Tamil

#databricks #azuredatabricks #azuredataengineer #azureintamil #azuretutorialforbeginners #databricksintamil #cicd #azuredevops #devops

In this Part 4 video of CI/CD in Azure Databricks, I have discussed the process of the Continuous Integration (CI) pipeline and explained the different YAML code used for creating the pipeline. The next parts will be uploaded soon, stay tuned!

Part #5. Creating CI Pipeline (Part 3)
Part #6. Creating CD Pipeline and End to End Testing

Chapters Timestamps:
0:00:00 - Intro
0:01:46 - Variable Group
0:05:05 - Pool-VM Image (Compute)
0:13:25 - Environments
0:16:38 - Service Connection
0:23:57 - Parameter Template
0:27:15 - Generating Databricks Token
0:32:10 - Outro

Please like, share and subscribe if you like the content and leave your comments below.

For contact,
Email: [email protected]
Instagram: mrk_talkstech_tamil

#AzureDatabricks #ApacheSpark #Sparkcompute #clusters #notebooks #magiccommands #machinelearning #ETL #CICD

Video Transcript

0:00
Now let's understand the code created for the CI/CD pipeline.
0:05
I will tell you the flow of this code.
0:13
This is the main file, the CI/CD pipeline YAML.
0:19
So what I mean by master file is: we create a pipeline in Azure DevOps
0:24
and we create that pipeline using this file
0:30
So this is like a master file
0:32
So we call this file
0:35
we call this deploy notebooks.yaml
0:40
this file is called databricks token.ps1
0:48
so in total we have 3 files
0:51
one is master file
0:53
another is template file
0:55
and the third is script file
0:57
so we start from master file
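Before going line by line, here is a rough sketch of how such a master pipeline file might be laid out, just to show how the three files relate. The stage, job and file names are illustrative assumptions, and whether the token script is invoked from the master file or from the template is not spelled out in this part of the video.

trigger:
- main                                    # run the pipeline on any change to the main branch

stages:
- stage: CI
  jobs:
  - job: DeployNotebooks
    steps:
    - template: deploy-notebooks.yaml     # the template file mentioned in the video
    - task: PowerShell@2                  # the script file mentioned in the video
      inputs:
        filePath: databricks-token.ps1    # generates the Databricks token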
1:00
so the important thing is
1:02
these first two lines
1:04
you can understand this easily
1:07
Trigger is an option
1:10
Trigger on main
1:11
If there is any change in main branch
1:14
This file will be triggered
1:17
If you update a change in main branch
1:19
This file will be called
1:22
This is the trigger option
1:25
Trigger on main
1:26
This is the syntax of yaml
1:29
A YAML file has a structure similar to JSON
1:32
This is used to create CI/CD pipelines in most companies
1:38
In Azure DevOps, we use a YAML file to create the CI/CD pipeline
1:44
I think you now understand what a trigger is
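As a generic reference (not code taken from the video), the same trigger can also be written in the longer branch-filter form of Azure Pipelines YAML:

trigger:
  branches:
    include:
    - main          # only changes pushed to the main branch start this pipeline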
1:46
Next is Variables
1:49
This is a very interesting thing
1:51
We have to create a variable group in Azure DevOps
1:57
We use parameters in this
2:01
For example, the name of the dev workspace will be different, the name of the dev resource group will be different, and the name of the UAT resource group will be different.
2:12
So we create a variable group, and within that variable group we define the names for the dev environment.
2:23
Similarly, we create another variable group for the production environment.
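As a sketch, such per-environment variable groups might be referenced in the pipeline YAML like this; the group names below are made up for illustration:

variables:
- group: databricks-dev-variables       # dev workspace / resource group names
- group: databricks-prod-variables      # production equivalents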
2:27
Before explaining the code below, let's see how to create a variable group
2:36
I am going to go to Azure DevOps
2:40
I am in Azure DevOps
2:43
In this, there is an option called Pipelines on the left side
2:48
I am clicking on this
2:50
In Pipelines, there is an option called Library
2:53
you can create a variable group in the library
2:58
so in the Library there is an option called Variable groups
3:02
so we can create the variable groups by using this
3:06
so there is a button called variable group
3:08
so we can click this
3:10
after clicking, we are asked for the name of the variable group
3:13
so we can copy and paste the name of the variable group
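Once the group exists in the Library and is linked in the YAML, the values inside it can be used like any other pipeline variable. A minimal sketch, with a hypothetical group and variable name:

variables:
- group: databricks-dev-variables                     # group created in the Library as shown above

steps:
- script: echo "Deploying to $(devWorkspaceName)"     # devWorkspaceName is defined inside the group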