Integration of ML and DL with DevOps
Firstly, I'm very glad to have completed this project. After spending four full days on it, I finally made it work. In the process I learned many new things and explored a lot of great material. I'm very thankful to Mr. Vimal Daga sir for making me capable enough to build this project.
Now I'll explain the project. Automation turns even large tasks into simple ones, and here we'll automate machine learning itself.
Problem Statement
1. Create a container image that has Python 3 and Keras or NumPy installed, using a Dockerfile.
2. When we launch this image, it should automatically start training the model inside the container.
3. Create a job chain of Job1, Job2, Job3, Job4, and Job5 using the Build Pipeline plugin in Jenkins.
4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub.
5. Job2: By looking at the code or program file, Jenkins should automatically start the container image that has the respective machine learning software installed, deploy the code, and start training (e.g., if the code uses a CNN, Jenkins should start the container that already has all the software required for CNN processing).
6. Job3: Train the model and report its accuracy or metrics.
7. Job4: If the accuracy is less than 80%, tweak the machine learning model architecture.
8. Job5: Retrain the model, or notify that the best model has been created.
9. Create one extra job, Job6, for monitoring: if the container where the app is running fails for any reason, this job should automatically start the container again and resume from the last trained model.
Let's start with the project. We have to write two Python programs, one for ML and one for DL, as per the client's requirements, and we need a separate environment for each. To create the environments, we make two directories, ML and DL, and place a Dockerfile in each as shown below. Each Dockerfile installs libraries such as NumPy, pandas, scikit-learn, OpenCV, and Keras, and each directory contains a train.py with the Python code for ML or DL respectively. You can find the code and the datasets in my GitHub using the following link
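As an illustration, a minimal Dockerfile for the DL environment might look like the sketch below. The base image, package names, and the idea of mounting /code at run time are my assumptions; the actual Dockerfiles are in the GitHub repo.

```dockerfile
# Hypothetical DL Dockerfile: CentOS base with Python 3 and the DL stack.
FROM centos:7
RUN yum install -y python3
RUN pip3 install numpy pandas scikit-learn opencv-python keras
# /code is mounted as a volume at run time, so the image only carries
# the environment; training starts automatically when the container launches.
CMD ["python3", "/code/DL/train.py"]
```

The same pattern works for the ML image, with the CMD pointing at /code/ML/train.py instead.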
Now build the two Docker images with the following commands:

docker build -t ml:v1 . → for machine learning
docker build -t dl:v1 . → for deep learning
After successfully creating the two Docker images, we can now create the jobs in Jenkins.
Job1
sudo cp -rvf * /code

This command has to be executed in Job1; after the Git plugin pulls the repo into the Jenkins workspace, it copies the code to /code.
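As a sketch, the copy step can be wrapped in a small function so the workspace-to-/code mirroring is easy to test. The /code destination comes from the write-up; the function name is mine, and on a real Jenkins node the call may need sudo.

```shell
# sync_code: mirror the freshly pulled Jenkins workspace into a shared
# host directory that the training containers later mount as a volume.
sync_code() {
    src="$1"    # Jenkins workspace (where the Git plugin cloned the repo)
    dest="$2"   # shared host directory, e.g. /code
    mkdir -p "$dest"
    cp -rvf "$src"/. "$dest"
}

# In the Job1 "execute shell" build step this would be invoked as:
# sync_code "$WORKSPACE" /code
```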
Job2
ml=$(sudo cat /root/code/ML/train.py | grep sklearn | wc -l)
dl=$(sudo cat /root/code/DL/train.py | grep keras | wc -l)
if [ $ml -gt 0 ] && [ $dl -eq 0 ]
then
    sudo docker run -dit -v /code:/code ml:v1
    echo "ML"
elif [ $dl -gt 0 ]
then
    sudo docker run -dit -v /code:/code dl:v1
    echo "DL"
fi
sudo cp -r /root/code/DL /var/lib/jenkins/workspace/j2
This script has to be executed in Job2.
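The same detection logic can be factored into a small function, which makes the grep checks easy to test in isolation. The function name, the check ordering, and the UNKNOWN fallback are my additions.

```shell
# detect_framework: inspect a training script and report which
# environment it needs, mirroring the grep checks Job2 runs.
detect_framework() {
    script="$1"
    if grep -q "keras" "$script"; then
        echo "DL"         # launch dl:v1
    elif grep -q "sklearn" "$script"; then
        echo "ML"         # launch ml:v1
    else
        echo "UNKNOWN"    # no known framework import found
    fi
}
```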
Job3
This job checks the accuracy of the model and, if it is less than 85%, tweaks the code a bit and triggers Job2 to retrain it.
sleep 30
sudo python3 /root/code/DL/upgrade.py

These commands have to be executed in Job3.
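The accuracy check itself reduces to an integer comparison. A minimal sketch, assuming the training run writes its accuracy as an integer percentage to a file (the file path and function name are my assumptions):

```shell
# needs_retrain: succeed (exit 0) when accuracy is below the threshold,
# i.e. when Job3 should tweak the model and trigger retraining.
needs_retrain() {
    accuracy="$1"    # integer percentage, e.g. 78
    threshold="$2"   # e.g. 85
    [ "$accuracy" -lt "$threshold" ]
}

# Example wiring (accuracy file path is an assumption):
# acc=$(cat /root/code/DL/accuracy.txt)
# if needs_retrain "$acc" 85; then sudo python3 /root/code/DL/upgrade.py; fi
```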
Finally, create the pipeline between the jobs using the Build Pipeline plugin, so that each job triggers the next one downstream.
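For the monitoring job (Job6) from the problem statement, one possible sketch is a helper that compares the list of running containers against the expected name and emits the restart command. The container name dl_train and the use of docker start (which restarts the existing container, so the model state saved in its /code volume survives) are my assumptions, not the write-up's implementation.

```shell
# restart_cmd_if_down: given the output of `docker ps --format '{{.Names}}'`
# and a target container name, print the command Job6 should run to bring
# the container back up (prints nothing when it is already running).
restart_cmd_if_down() {
    running="$1"   # newline-separated names of running containers
    target="$2"    # expected training container, e.g. dl_train
    if printf '%s\n' "$running" | grep -qx "$target"; then
        echo ""
    else
        echo "docker start $target"
    fi
}

# Hypothetical Job6 build step, run periodically:
# cmd=$(restart_cmd_if_down "$(docker ps --format '{{.Names}}')" dl_train)
# [ -n "$cmd" ] && sudo $cmd
```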