MLOps (Machine Learning Operations) Fundamentals | Quiz Answers

About MLOps (Machine Learning Operations) Fundamentals Course

This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring, and operating production ML systems on Google Cloud.

MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.

This course is primarily intended for the following participants:

  • Data Scientists looking to quickly go from machine learning prototype to production to deliver business impact.
  • Software Engineers looking to develop Machine Learning Engineering skills.
  • ML Engineers who want to adopt Google Cloud for their ML production projects.

This is one of the best courses for getting started with MLOps on GCP. The concepts are explained clearly throughout, and the material will help you tackle complex use cases in deploying ML models. Thanks to Google for this course, and to Qwiklabs for the hands-on labs. Highly recommended for ML Engineers and Data Scientists.

Enroll on Coursera

MLOps (Machine Learning Operations) Fundamentals Quiz Answers

Week 01: MLOps (Machine Learning Operations) Fundamentals

Why and When to Employ MLOps

Q1. In addition to CI/CD practiced by DevOps teams, MLOps introduces:

  • Continuous classification
  • Continuous regression
  • Continuous training
  • All of the above

Q2. MLOps, besides testing and validating code and components, also tests and
validates data, data schemas, and models.

  • True
  • False

Q3. In what order are the following phases executed in a machine learning
project?

I – Selection of ML algorithm

II – Data Exploration

III – Definition of the business use case

IV – Model monitoring

V – Model operationalization

VI – Model Development

  • I, II, III, IV, V, VI
  • III, II, I, VI, V, IV
  • II, III, I, VI, IV, V
  • II, I, III, VI, IV, V

Introduction to Containers

Q1. Which of these problems are containers intended to solve? Mark all that are correct (3 correct answers).

  • Packaging applications in virtual machines can be wasteful.
  • Some developers need parts of their applications to be Linux-based while other parts are
    Windows-based.
  • Applications need a way to isolate their dependencies from one another.
  • It’s difficult to troubleshoot applications when they work on a developer’s laptop but fail in production.

Containers and Container Images

Q1. Why do Linux containers use union file systems?

  • To efficiently encapsulate applications and their dependencies into a set of clean, minimal layers
  • To control an application’s ability to see parts of the directory tree and IP addresses
  • To give a container its own virtual memory address space
  • To control an application’s maximum consumption of CPU time and memory

Q2. What is significant about the topmost layer in a container? Choose all that
are true (2 correct answers).

  • Reading from or writing to the topmost layer requires special software libraries.
  • An application running in a container can only modify the topmost layer.
  • Reading from or writing to the topmost layer requires special privileges.
  • The topmost layer’s contents are lost when the container is no longer running.

Introduction to Kubernetes

Q1. When you use Kubernetes, you describe the desired state you want, and Kubernetes’s job is to make the deployed system conform to your desired state and to keep it there in spite of failures. What is the name for this management approach?

  • Virtualization
  • Declarative configuration
  • Imperative configuration
  • Containerization
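Declarative configuration in practice: you submit a manifest describing the desired state, and Kubernetes’s controllers continually reconcile the cluster toward it. A minimal sketch (all names are illustrative), applied with kubectl apply -f:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # illustrative name
spec:
  replicas: 3                # the declared desired state: keep 3 Pods running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        # Kubernetes restarts or reschedules Pods as needed to hold this state,
        # even across node failures
        image: nginx:1.25
```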

Q2. What is a stateful application?

  • An application that requires data to be stored persistently
  • A web front end
  • An application that is not containerized

Introduction to Google Kubernetes Engine

Q1. What is the relationship between Kubernetes and Google Kubernetes Engine?

  • Google Kubernetes Engine is Kubernetes as a managed service.
  • Google Kubernetes Engine is a closed-source variant of Kubernetes.
  • Kubernetes and Google Kubernetes Engine are two names for the same thing.

Q2. What is the name for the computers in a Kubernetes cluster that can run your workloads?

  • Masters
  • Container images
  • Nodes
  • Containers

Q3. Which of the following supports scaling a Kubernetes cluster as a whole?

  • Google Kubernetes Engine
  • Compute Engine
  • Kubernetes

Containers and Kubernetes in Google Cloud

Q1. You are choosing technology for deploying applications, and you want to deliver them in lightweight, standalone, resource-efficient, portable packages. Which choice best meets those goals?

  • Virtual Machines
  • Hypervisors
  • Containers
  • Executable files

Q2. You are classifying a number of your applications into workload types. Select the stateful applications in this list of applications. Choose all responses that are correct (2 correct responses).

  • A gaming application that keeps track of user state persistently
  • Web server front end for your inventory system
  • A shopping application that saves user shopping cart data between sessions
  • Image recognition application that identifies product defects from images

Q3. Google Compute Engine provides fine-grained control of costs. Which Compute Engine features provide this level of control?

  • Fully customizable virtual machines
  • Managed instance groups
  • Per-second billing
  • Billing budgets and alerts
  • Autoscaling groups

Q4. You are developing a new solution and want to explore serverless application solutions. Which Google Cloud compute services provide serverless compute resources that you can use with containers?

  • Cloud Functions
  • Compute Engine
  • App Engine
  • Google Kubernetes Engine

Q5. You are deploying a containerized application, and you want maximum control over how containers are configured and deployed. You want to avoid the operational management overhead of managing a full container cluster environment yourself. Which Google Cloud compute solution should you choose?

  • Google Kubernetes Engine
  • Cloud Functions
  • Compute Engine
  • App Engine

Kubernetes Concepts

Q1. What is the difference between a pod and a container?

  • A pod contains one or more containers.
  • A container contains one or more pods.
  • Pods and containers are two names for the same thing.

The Kubernetes Control Plane

Q2. Which master control plane component is the only one with which clients interact directly?

  • kube-controller-manager
  • kube-scheduler
  • etcd
  • kube-apiserver

Q3. Which master control plane component is the cluster’s database?

  • kube-controller-manager
  • etcd
  • kube-scheduler
  • kube-apiserver

Q4. What is the role of the kubelet?

  • To maintain network connectivity among the Pods in a cluster
  • To interact with underlying cloud providers
  • To serve as Kubernetes’s agent on each node

Google Kubernetes Engine Concepts

Q1. In GKE clusters, how are nodes provisioned?

  • As Compute Engine virtual machines
  • As abstract parts of the GKE service that are not exposed to GCP customers

Q2. In GKE, how are masters provisioned?

  • As Compute Engine virtual machines
  • As abstract parts of the GKE service that are not exposed to GCP customers

Q3. What is the purpose of configuring a regional cluster in GKE?

  • To allow applications running in the cluster to withstand the loss of a zone
  • To ensure that the cluster’s workloads are isolated from the public Internet

Deployments

Q1. What is the relationship between Deployments and ReplicaSets?

  • There is no relationship; in modern Kubernetes, Replication Controllers are typically used to maintain a set of Pods in a running state.
  • A ReplicaSet configures a Deployment controller to create and maintain a specific version of the Pods that the Deployment specifies.
  • A Deployment configures a ReplicaSet controller to create and maintain a specific version of the Pods that the Deployment specifies.
  • A Deployment configures a ReplicaSet controller to create and maintain all the Pods that the Deployment specifies, regardless of their version.

Q2. What type of application is suited for use with a Deployment?

  • Stateless
  • Stateful
  • Batch
  • Written in Go

Updating Deployments

Q1. You want to have two versions of your application in production, but be able to switch all traffic between them. This is an example of which deployment strategy?

  • Canary deployment
  • Blue-green deployment
  • Rolling updates
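In Kubernetes, a blue-green switch is commonly done by repointing a Service’s label selector from the old version to the new one, which moves all traffic at once. A sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app             # illustrative name
spec:
  selector:
    app: my-app
    version: blue          # change to "green" to shift all traffic at once
  ports:
  - port: 80
    targetPort: 8080
```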

Q2. You want to have two versions of your application in production, but be able to direct a small percentage of traffic to the newer version as a gradual test. This is an example of which deployment strategy?

  • Rolling updates
  • Blue-green deployment
  • Canary deployment

Q3. In a rolling update strategy, you can define the “max unavailable” parameter as a percentage. A percentage of what?

  • The total number of Pods in the cluster.
  • The total number of Pods in the new ReplicaSet.
  • The total number of Pods across all ReplicaSets.
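For context, “max unavailable” lives under the Deployment’s rolling-update strategy; as a percentage it is computed (rounded down) from the Deployment’s desired Pod count. A fragment sketch:

```yaml
# Fragment of a Deployment spec (values are illustrative)
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # 30% of 10 desired Pods, rounded down: at most 3 down at once
      maxSurge: 1           # at most 1 extra Pod above the desired count during the update
```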

Jobs

Q1. What happens if a node fails while a Job is executing on that node?

  • Kubernetes will restart the Job on a node that is still running.
  • Kubernetes will abandon the Job.
  • Kubernetes will wait for the node to return to service and then restart the Job.

Q2. Suppose you have a Job in which each Pod performs work drawn from a work queue. How should this Job’s manifest be configured?

  • Specify a parallelism value and leave spec.completions unset
  • Specify a WorkQueue object to let Kubernetes query the state of the queue.
  • Specify a spec.completions value and leave the parallelism value unset
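For a work-queue style Job, the manifest sets a parallelism value and leaves spec.completions unset; each worker Pod exits once the queue is drained. A sketch with illustrative names:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queue-worker            # illustrative name
spec:
  parallelism: 3                # run three worker Pods at once
  # spec.completions deliberately unset: the Job completes when a Pod
  # exits successfully after the work queue is empty
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: gcr.io/my-project/queue-worker:latest   # illustrative image
```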

Week 02: MLOps (Machine Learning Operations) Fundamentals

AI Platform Pipelines

Q1. One major benefit of the lineage tracking feature of AI Platform Pipelines is:

  • To track artifacts for an ML pipeline
  • To provide history and versions of your ML model
  • To share your ML pipeline within your organization

Q2. The AI Hub allows you to (select all that apply)

  • Search and discover various ML pipelines
  • Publish your pipelines and share within your organization
  • Network with your peers and share updates on your work
  • Experiment and adjust out-of-the-box pipelines to custom use cases via pipelines UI.

Q3. Which of the following services can be used out-of-the-box to operationalize an XGBoost model?

  • Kubeflow
  • TFX

Training, Tuning and Serving on AI Platform

Q1. Which command allows you to split your dataset to get 70% of it for training in a repeatable fashion?

  • RAND() < 0.7
  • REPEAT(RAND() < 0.7)
  • MOD(RAND() < 0.7)
  • MOD(ABS(FARM_FINGERPRINT(field)),10) < 7
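The FARM_FINGERPRINT approach is repeatable because hashing a field gives every row a stable bucket, so the same rows land in the training set on every run, unlike RAND(). The same idea sketched locally, with md5sum as a hypothetical stand-in for BigQuery’s FARM_FINGERPRINT:

```shell
# Assign each record a stable bucket 0-9 by hashing a key field;
# buckets 0-6 (~70%) go to training, identically on every run.
# md5sum is only a local stand-in for FARM_FINGERPRINT here.
split_bucket() {
  local hex
  hex=$(printf '%s' "$1" | md5sum | cut -c1-8)  # first 32 bits of the hash
  echo $(( 0x$hex % 10 ))
}

for id in user_001 user_002 user_003; do
  if [ "$(split_bucket "$id")" -lt 7 ]; then
    echo "$id -> train"
  else
    echo "$id -> eval"
  fi
done
```

Because the bucket depends only on the field’s value, re-running the split never shuffles rows between the training and evaluation sets.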

Q2. Hyperparameter tuning happens before model training and is the task responsible for assigning initial weights to the variables (or parameters) which allow the model to find patterns on the data.

  • True
  • False

Q3. Which of the following is an INCORRECT statement about Dockerfile commands?

  • The FROM command should be the first command in a Dockerfile file.
  • The ENTRYPOINT command specifies the name of the container so it can be found in the Container Registry.
  • The RUN statement, followed by standard bash code, is used to provision the image with all the tools and libraries needed to run the training code.
  • The WORKDIR command specifies what the current working directory should be when the container is executed.
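For reference, a minimal training-image Dockerfile exercising those commands (base image, package names, and paths are illustrative); note that ENTRYPOINT sets what the container runs, not its name in the Container Registry:

```dockerfile
# FROM must be the first instruction: it sets the base image
FROM python:3.9-slim
# RUN provisions the image with the tools and libraries the training code needs
RUN pip install --no-cache-dir tensorflow pandas
# WORKDIR sets the current working directory when the container executes
WORKDIR /app
COPY train.py .
# ENTRYPOINT specifies the command the container runs at startup
ENTRYPOINT ["python", "train.py"]
```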

Q4. What is the order of steps to push a trained model to AI Platform for serving?

I – Run the command gcloud ai-platform versions create {model_version} to create a version for
the model.
II – Train and save the model.
III – Run the command gcloud ai-platform models create to create a model object.
IV – Run the command gcloud ai-platform predict to get predictions.

  • II, III, I, IV
  • I, II, III, IV
  • III, II, IV, I
  • II, I, III, IV
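The ordering as a command sketch (model, version, and bucket names are placeholders, and the commands need an authenticated gcloud environment, so this is illustrative rather than runnable as-is):

```shell
# II - train and save the model, e.g. export a SavedModel to Cloud Storage
#      (training code writes to gs://my-bucket/model/ - placeholder path)

# III - create a model object
gcloud ai-platform models create my_model

# I - create a version of the model from the saved artifacts
gcloud ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model/ \
  --runtime-version=2.3

# IV - get predictions from the deployed version
gcloud ai-platform predict --model=my_model --version=v1 \
  --json-instances=instances.json
```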

Week 03: MLOps (Machine Learning Operations) Fundamentals

Kubeflow Pipelines on AI Platform

Q1. Kubeflow tasks are organized into a dependency graph where each node represents:

  • a task to be performed
  • a Docker container to be executed
  • a Dataflow job

Q2. The simplest way to launch a training task on AI Platform from a Kubeflow task is

  • To package your AI Platform training script into a Docker container
  • Use the AI Platform Training pre-built Kubeflow component
  • Launch the training job from the Kubeflow UI

Q3. In a lightweight Python component, the run parameters are taken from

  • The variables from the Python class we are wrapping into a Kubeflow task
  • The arguments of the Python function we are wrapping into a Kubeflow task

CI/CD for a Kubeflow Pipeline

Q1. Which of the following would require a Custom Cloud Builder:

  • Deploying an application to a Google Kubernetes Engine cluster
  • Pushing a Docker container to a registry
  • Copying a local file to Google Cloud Storage with gsutil
  • Downloading source code or packages from external locations

Q2. Which field in a configuration file allows the use of persistence (asset sharing):

  • name
  • dir
  • env
  • args
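For context on both questions: Cloud Build ships standard builders (docker, gcloud, gsutil), so a custom builder is only needed for tasks those don’t cover, such as fetching from arbitrary external locations; and steps share state through the /workspace volume, with the dir field controlling where a step runs inside it. A cloudbuild.yaml sketch (image names are illustrative):

```yaml
steps:
# Standard builder: copying from Cloud Storage needs no custom image
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gs://my-bucket/config.yaml', '.']   # placeholder bucket
# Custom builder: an image you supply for tools Cloud Build lacks
- name: 'gcr.io/my-project/my-downloader'           # illustrative custom image
  args: ['fetch', 'https://example.com/package.tar.gz']
  # dir makes the step run in /workspace/downloads; because /workspace
  # persists between steps, later steps see the downloaded files
  dir: 'downloads'
```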

