Cloud Networking Coursera Quiz Answers – Networking Funda

All Weeks Cloud Networking Coursera Quiz Answers

In the cloud networking course, we will see what the network needs to do to enable cloud computing. We will explore current practice by talking to leading industry experts, as well as looking into interesting new research that might shape the cloud network’s future.

This course will allow us to explore in-depth the challenges for cloud networking—how do we build a network infrastructure that provides the agility to deploy virtual networks on a shared infrastructure, that enables both efficient transfer of big data and low latency communication, and that enables applications to be federated across countries and continents? Examining how these objectives are met will set the stage for the rest of the course.

Enroll On Coursera

Cloud Networking Week 1 Quiz Answers

Quiz 1: Orientation Quiz

Q1. This course lasts for ___ weeks.

  • 5
  • 6
  • 8
  • 10

Q2. I am required to purchase a textbook for this course.

  • False
  • True

Q3. Do you need prior Python coding experience to succeed in this course?

  • Yes
  • No

Q4. How long should it take you to finish all of the programming assignments?

  • 1 hour
  • 2 hours
  • 3 hours
  • 4 hours

Q5. Which of the following activities are required to pass the course? Check all that apply.

  • Project Milestones
  • Quizzes
  • Programming Assignments
  • Posting in the forums

Q6. The following tools will help me use the discussion forums:

  • “Up-voting” posts that are thoughtful, interesting, or helpful
  • “Tagging” my posts with keywords other students might use in searching the forums
  • Subscribing to any forums that are particularly interesting to me
  • All of the answers are correct.

Q7. If I have a problem in the course I should:

  • Email the instructor
  • Report it to the Learner Help Center (if the problem is technical) or to the Content Issues forum (if the problem is an error in the course materials).
  • Call the instructor
  • Drop the class

Quiz 2: Cloud Intro Quiz

Q1. Which of the following can you conclude from the probability distribution function of flow sizes shown here? The y-axis gives the probability of a flow having the flow size shown on the x-axis.

  • 90% of bytes are in flows at most 1000 kilobytes in size
  • 70% of flows are at most 1000 kilobytes in size
  • 70% of the bytes are in flows at most 1000 kilobytes in size
  • None of the above

Q2. What can you conclude from the cumulative distribution function of flow sizes shown below? (Being able to read and understand such plots will be useful later in the course.)

  • 50% of flows are 10 kilobytes in size
  • 70% of flows are at most 1000 kilobytes in size
  • 70% of the bytes are in flows at most 1000 kilobytes in size
  • None of the above

Q3. Suppose you have a large pool of identical servers hosting a replicated service. Further, assume that the request-response time for the service is random, with the following distribution: 10ms for 99.9% of the requests but 1 second for the remaining 0.1%. Assume requests are completely independent. If you make 100 requests in a batch in parallel, what is the probability that your batch of requests takes 1 second?

  • ~37%
  • ~63%
  • ~20%
  • ~10%
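
For reference, here is a sketch of the arithmetic behind this question (taking the requests as independent, as stated): the batch takes 1 second whenever at least one of the 100 requests hits the slow tail.

```python
# Probability that at least one of 100 independent requests hits the slow
# (1 second) tail, given a per-request slow probability of 0.1% (0.001).
p_slow = 0.001
n = 100
p_batch_slow = 1 - (1 - p_slow) ** n
print(f"P(batch takes ~1 s) = {p_batch_slow:.3f}")  # ~0.095, i.e., roughly 10%
```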

Q4. In real systems, as opposed to the hypothetical setting in Question 3, what might cause requests to not be independent?

  • Under high load at servers, service times for requests may be higher
  • Network congestion from other requests may increase service times.
  • All of the above

Q5. How many servers does a complete fat-tree topology (as in this paper: A Scalable, Commodity Data Center Network Architecture) built using switches with 6 ports (i.e., k = 6) support?

  • 36
  • 45
  • 120
  • 54
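
For reference, a k-port fat-tree as described in the referenced paper supports k^3/4 servers; a quick check of that formula:

```python
# A k-port fat-tree (Al-Fares et al.) supports k^3 / 4 servers.
k = 6
print(k ** 3 // 4)  # 54
```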

Q6. Is the fat-tree topology (as in this paper: A Scalable, Commodity Data Center Network Architecture) a tree, i.e., does it have a loop-free structure?

  • Yes
  • No

Q7. How many paths exist in a fat-tree topology (as in this paper: A Scalable, Commodity Data Center Network Architecture) from an arbitrary root switch downward to an arbitrary server? Consider only minimal-length paths.

  • Possibly 0; not all root switches can reach all servers!
  • 1
  • 2
  • 4

Q8. With the switches marked with “X” eliminated, are these two topologies identical? For example, would they have identical failure responses?

  • Yes
  • No

Q9. How does the number of cables in a fat-tree topology (as in this paper: A Scalable, Commodity Data Center Network Architecture) scale with port-count of the individual switches used to build it, i.e., the parameter k? Ignore constant factors.

  • k^4
  • k^2
  • k^3
  • k
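
A rough sketch of the cable count (the exact constants are not needed for the question, only the growth rate):

```python
# Cable counts in a k-port fat-tree; every term grows as k^3.
def fat_tree_cables(k):
    server_to_edge = k ** 3 // 4            # one cable per server
    edge_to_agg = k * (k // 2) * (k // 2)   # per pod: (k/2) edge x (k/2) aggregation switches
    agg_to_core = k * (k // 2) * (k // 2)   # each aggregation switch uplinks to k/2 core switches
    return server_to_edge + edge_to_agg + agg_to_core

print(fat_tree_cables(4))  # 48 (of which 32 are switch-switch links)
print(fat_tree_cables(6))  # 162
```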

Q10. What are some of the benefits of being able to deploy larger clusters of servers connected at high bandwidth? (Feel free to refer to this week’s readings.)

  • Applications need not be restricted to within-rack deployment and can achieve higher scale.
  • Larger clusters with high bandwidth improve the ability to pack jobs into the servers – in more restricted environments, under-utilized servers can be isolated from each other, making it difficult to use them together for tasks. This leads to under-utilized, fragmented infrastructure.
  • Large clusters with high bandwidth allow spreading data across different failure zones (for example, across rows of racks which have different power supplies).
  • All of the above

Quiz 3: Programming Assignment 1A Survey (optional)

Q1. Excluding time for downloading the VM and any associated software, how long did Programming Assignment 1A take you to finish? (Please answer in number of minutes.)

Cloud Networking Week 2 Quiz Answers

Quiz 1: Cloud Routing and Congestion Control

Q1. The fat-tree network below has a total of 32 switch-switch links. (Verify that this is true.) How many of these links will be used by a tree spanning all the switches? Remember: we’re excluding server-switch links in both counts and are counting cables, not directed edges.

  • 19
  • 32
  • 26
  • 24
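
For reference, the counting here needs only two facts: a k = 4 fat-tree contains 20 switches, and a tree spanning N nodes uses N - 1 links. A quick sketch:

```python
# A k = 4 fat-tree has k^2/4 core, k^2/2 aggregation, and k^2/2 edge switches.
k = 4
switches = k ** 2 // 4 + k ** 2 // 2 + k ** 2 // 2   # 4 + 8 + 8 = 20
print(switches - 1)  # a tree spanning N nodes uses N - 1 links: 19
```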

Q2. Assuming that there’s no other traffic in the network, how much aggregate bandwidth is available from the left-most pod (the left-most four servers) to any other pod in the fat-tree below with the switches marked with X having failed? Count only bandwidth in the direction from the left-most pod to the other; assume links are all unit capacity and that we have perfect multipath routing.

  • Depends on what the destination pod is
  • 4 units
  • 2 units
  • 1 unit

Q3. How does TRILL enable the use of all the links in the network?

  • Every packet is sent to a controller that decides the routes
  • By running a link-state protocol between the switches
  • By simply flooding packets on all outgoing links

Q4. In BGP, the path announcements nodes make to neighbors contain:

  • The entire network topology as seen by the node
  • Only the path length to the destination
  • The autonomous system-level path from the node to the destination
  • Only the next hop autonomous system towards the destination

Q5. How does ECMP avoid packet reordering?

  • By only using one path between the same source and destination servers
  • By adding delays to packets sent on different paths in such a way that reordering is unlikely
  • By consistently using the same path for packets of the same flow

Q6. Between a specific pair of servers, how many ECMP paths can be set up for the fat-tree topology (as in this paper: A Scalable, Commodity Data Center Network Architecture) built with switches with 10 ports each, i.e., k = 10? Ignore any switch hardware constraints on ECMP; we’re only looking for the number of equal-cost shortest paths.

  • 100
  • 50
  • 25
  • 20
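
For reference, a sketch of the path count, assuming the two servers sit in different pods (the case with the most equal-cost shortest paths):

```python
# Equal-cost shortest paths between servers in different pods of a k-port fat-tree:
# (k/2) aggregation-switch choices in the source pod x (k/2) core switches above each.
k = 10
print((k // 2) ** 2)  # 25
```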

Q7. How does CONGA do forwarding on multiple paths?

  • It uses a centralized controller to direct all traffic.
  • Congestion is monitored on the various paths between pairs of leaf switches, with the least congested paths being chosen.
  • It’s the same as ECMP.
  • It uses a link state protocol across all the switches.

Q8. What are flowlets?

  • Flowlets are the same as flows except they have fewer packets. Different flowlets must always belong to different flows.
  • Flowlets are packet-groups in the same flow that are separated by large enough gaps from other packet-groups such that sending them along different paths has a reasonably small likelihood of reordering a flow’s packets.
  • Flowlets are cute little flows with fur.

Q9. How do containers (for example, with Docker) typically differ from virtual machines?

  • Each virtual machine runs its own operating system, while containers may depend on the underlying system’s kernel and other resources.
  • Applications can be packaged into containers needing much smaller disk and memory footprint than with virtual machines.
  • Containers can be instantiated much faster than virtual machines.
  • All of the above

Q10. At 100Gbps, with all packets being 100 Bytes in size (including the preamble, etc.), how much time can we afford to spend on processing each packet to be able to process packets at line-rate, sequentially?

  • 8 nanoseconds
  • 67 nanoseconds
  • 10 nanoseconds
  • 1 nanosecond
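
For reference, the time budget is simply the packet's serialization time at the given line rate:

```python
# Per-packet time budget = packet size in bits / line rate.
packet_bits = 100 * 8   # 100-byte packets, including preamble etc.
rate = 100e9            # 100 Gbps
print(packet_bits / rate * 1e9, "ns")  # 8.0 ns
```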

Q11. How does SR-IOV help with the difficulty of packet processing at line rates?

  • All of the above
  • Each virtual machine has hardware resources on the NIC from where it reads packets meant for it.
  • It uses a simple fast packet classifier on the NIC itself.
  • Packets are passed to a VM’s memory from the NIC using direct memory access, without invoking the hypervisor.

Q12. What are some drawbacks of SR-IOV’s approach?

  • A. It cannot achieve anywhere near line-rate performance.
  • B. Migration of VMs is trickier because their network state is tied to the NIC.
  • C. The simple on-NIC packet classifier does not provide general purpose packet forwarding logic.
  • D. Options A, B, and C
  • E. Options B and C but not A
  • F. Options A and C but not B

Q13. Which of these design decisions enables Open vSwitch to provide fast-enough performance for common workloads while allowing general purpose packet forwarding logic?

  • Division of user-space and kernel-space tasks: user-space handles network state updates and processes the first packet of flows, while kernel-space is optimized for fast forwarding of future packets.
  • The kernel-space packet classifier is a simplified, collapsed version of rules applicable to packets seen recently, thus making processing faster.
  • An even simpler cache allows constant-time lookups mapping incoming packets to the applicable forwarding entry in the kernel-space classifier.
  • All of the above

Q14. Assuming that TCP is in the additive increase phase and the congestion window is X bytes, what will the congestion window be after the sender detects a single packet loss?

  • 2 packets
  • X/2
  • X/4
  • X

Q15. What problems does TCP’s reaction to loss pose?

  • Loss is a poor signal of persistent congestion. Losses can also occur due to other transient factors.
  • Multiplicative decrease can be too aggressive – perhaps the sender’s rate is only marginally larger than the available capacity.
  • Waiting until a loss before reacting to congestion means waiting until buffer occupancy is large. This increases queuing latencies.
  • All of the above

Q16. Assume that data travels over fiber at a speed of 2c/3, where c is the speed of light in a vacuum. What is the (one-way) propagation delay across 300 meters of fiber running across a data center floor? (This might seem annoying, but the point of such questions is to make sure you have a better sense of these timescales.)

  • 1.5 microseconds
  • 30 microseconds
  • 200 microseconds
  • 200 nanoseconds
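
For reference, a quick sketch of the delay calculation:

```python
# One-way propagation delay = distance / signal speed in fiber (about 2c/3).
distance = 300              # meters
speed = 2 / 3 * 3e8         # ~2e8 m/s
print(distance / speed * 1e6, "microseconds")  # 1.5
```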

Q17. Assuming a line rate of 10Gbps, what time elapses between a packet arriving in a buffer with five packets already queued (each packet being 9000 bytes in size), and reaching the head of the queue (i.e., just before its bytes start to get sent across the wire)?

  • 45 microseconds
  • 108 microseconds
  • 4.5 microseconds
  • 36 microseconds
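
For reference, the waiting time is the serialization time of the packets already queued ahead:

```python
# Time for 5 queued packets of 9000 bytes each to drain at 10 Gbps.
queued_bits = 5 * 9000 * 8
rate = 10e9
print(queued_bits / rate * 1e6, "microseconds")  # 36.0
```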

Q18. In data centers, queuing delay caused by just a few packets can exceed the propagation delay. What problem(s) does this pose?

  • Large buffer occupancies can inflate end-to-end latencies by orders of magnitude.
  • It doesn’t matter; we are still talking about microseconds.
  • It leads to excessive packet reordering.
  • None of the above

Q19. Suppose that to address TCP incast, we reduce the TCP connection timeout value. Which of the following is true?

  • While this may help increase throughput if some of the TCP connections were timing out due to losses, latencies can still be high and variable because of large buffer occupancies.
  • While this helps decrease latencies, throughput can still be very small.
  • It doesn’t increase throughput or decrease queuing latency.

Q20. How does DCTCP try to provide both low latency (i.e., keeping buffer occupancies small) and high throughput?

  • It marks packets with congestion signals as the buffer starts to fill beyond a threshold. If the threshold is low, this signals congestion much before the buffer is full.
  • The senders adjust their rate in proportion to the number of congestion markings they receive, unlike TCP’s multiplicative decrease.
  • Small buffer occupancies also allow buffers to absorb traffic bursts, thus reducing packet losses and helping maintain high throughput.
  • All of the above

Q21. If the congestion window at a sender is X bytes and the sender receives congestion markings such that its estimate of the fraction of marked packets is updated to 0.4 (i.e., the α calculated per equation (1) in the DCTCP paper, Data Center TCP (DCTCP), is 0.4), what is the congestion window adjusted to?

  • 0.8 X
  • 0.6 X
  • 0.5 X
  • None of the above
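
For reference, the DCTCP paper adjusts the window as cwnd <- cwnd * (1 - alpha/2); a quick check with alpha = 0.4:

```python
# DCTCP window update: cwnd <- cwnd * (1 - alpha / 2)
alpha = 0.4
print(1 - alpha / 2)  # 0.8, so the window becomes 0.8 X
```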

Q22. What considerations govern the setting of DCTCP’s congestion marking threshold, K?

  • A. If it’s too large, buffer occupancy will still be high.
  • B. If it’s too small, throughput can be low because congestion is signaled more often than necessary.
  • C. Both A and B
  • D. Neither A nor B

Cloud Networking Week 3 Quiz Answers

Quiz 1: Software Defined Networking

Q1. Key characteristics of software-defined networks include (check all that apply):

  • An API interface to the data plane of switches
  • A fully distributed peer-to-peer control plane
  • A programmable, logically centralized network controller

Q2. In traditional non-virtualized data centers, moving services (or VMs or tenants) between machines can be difficult because (check all that apply):

  • The performance of the running service can differ significantly if it is moved outside its primary rack or cluster.
  • Some services need to be kept within a single Layer 2 domain.
  • IP addresses may need to be reassigned.

Q3. The VL2 paper argues that if the data center has a relatively large number of relatively small flows, this is a good match for VL2’s design because (check all that apply):

  • Other solutions that dynamically change the routing of flows based on current congestion could theoretically perform better. However, it would be difficult to make these schemes practical because the overall network traffic pattern changes very quickly.
  • Individual flows may get unlucky by being (randomly) assigned to a path that causes congestion, but if this happens, it will be resolved quickly because the flow will last for only a relatively short time.
  • The total traffic volume (number of bytes sent) is small, so it will fit within the Clos network’s capacity.

Q4. VL2’s use of Valiant Load Balancing is an example of:

  • Oblivious routing of traffic, meaning that the selected path for a flow does not depend on the current traffic flows or link utilizations
  • Adaptive routing of traffic, meaning the selected path for a flow is determined (at least in part) by current traffic flows or link utilizations

Q5. Ignoring VL2 for a moment, if we use only ECMP in a data center network, this is an example of:

  • Adaptive routing
  • Oblivious routing

Q6. Recall the CONGA data center architecture from last week, or read the introduction of the paper now (CONGA: Distributed Congestion-Aware Load Balancing for Datacenters). CONGA’s traffic routing is an example of:

  • Oblivious routing
  • Adaptive routing

Q7. Consider the network below, where all links are 10 Gbps. Assume the data center is using shortest path routing with ECMP. Four of tenant A’s VMs {A1, A2, A3, A4} each have a long-running TCP flow sending to Ad; likewise, four of tenant B’s VMs {B1, B2, B3, B4} are sending to Bd. In the next several questions, we will explore whether, in a data center network like VL2, the performance of tenant A and tenant B can interfere with each other.


The paths taken by each flow are not shown and depend on ECMP hashing. With the worst possible hashing, assuming TCP makes optimal use of the available bandwidth, approximately what average data rate will flow A1 –> Ad receive?

  • 1.25 Gbps
  • 3.33 Gbps
  • 2.5 Gbps
  • 10 Gbps
  • 5 Gbps
  • 1 Gbps

Q8. The flow A1 –> Ad has multiple possible paths, depending on how it is hashed by ECMP. How many of these paths go through switch S1?

  • 2
  • 1
  • 4
  • 8

Q9. Suppose the flows from Ai and Bi happen to go through switch Si for each i in {1,2,3,4}. Approximately what average data rate will flow A1 –> Ad receive?

  • 5 Gbps
  • 10 Gbps
  • 3.33 Gbps
  • 1.25 Gbps
  • 1 Gbps
  • 2.5 Gbps

Q10. Suppose the flow routing is as in the previous question. But now, B’s senders are not running a stable, converged TCP; they are sending a burst of UDP traffic at a fixed rate of 4 Gbps each. Approximately what average data rate will flow A1 –> Ad receive?

  • 1 Gbps
  • 10 Gbps
  • 2.5 Gbps
  • 1.25 Gbps
  • 5 Gbps
  • 3.33 Gbps

Q11. The scenario in the previous question falls outside the case where VL2 claimed to provide performance isolation, because (check all that apply):

  • Some flows are not running TCP and did not back off when encountering congestion
  • The Clos network is not nonblocking
  • The traffic does not conform to the “hose model”: the injected traffic sent to Bd exceeds its line rate

Q12. VL2’s goal is the “agility” of allowing any server to be assigned to any service. To enable service mobility across the physical data center network, in VL2:

  • Application services are rewritten to depend only on Layer 3 addresses (“AAs”) rather than Layer 2 addresses and features, so they can move across L2 domains.
  • Two separate IP address spaces are used – one to identify an application, one to identify its location – with a dynamic translation step between these.
  • Application services are rewritten to depend only on domain names rather than L2 or L3 addresses, so they can move to any location-dependent IP address.

Q13. Let’s get more specific. In VL2, an Application Address (AA) identifies:

  • A server or virtual server
  • A rack or top-of-rack (ToR) switch
  • A running process on a server
  • A transport connection on a server

Q14. In VL2, a Locator Address (LA) identifies

  • A running process on a server
  • A transport connection on a server
  • A rack or top-of-rack (ToR) switch
  • A server or virtual server

Q15. When a service moves in VL2:

  • Its AA stays the same, but its LA may change.
  • Its AA and LA both stay the same.
  • Its AA may change, but its LA stays the same.
  • Its AA and LA both may change.

Q16. Suppose a NVP data center has one tenant with 10 VMs and one with 100 VMs. Each VM is on a separate physical server. Each tenant allows any communication within its virtual network and no external communication. How many total unidirectional tunnels will be constructed in the data center?

  • 108
  • 110
  • 216
  • 9,990
  • 10,100
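
For reference, a sketch of the tunnel count, assuming each tenant’s virtual network is realized as a full mesh of unidirectional tunnels between the hosts running its VMs (one VM per physical server, per the question):

```python
# Full mesh of unidirectional tunnels within each tenant: n * (n - 1) per tenant.
vms_per_tenant = [10, 100]   # one VM per physical server
print(sum(n * (n - 1) for n in vms_per_tenant))  # 90 + 9900 = 9990
```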

Q17. In the same situation as the previous question, how many total instances of tenants’ logical datapaths will exist throughout the network?

  • 10,100
  • 9,990
  • 220
  • 216
  • 108
  • 110

Q18. Can NVP enforce microsegmentation, i.e., access control across tenants?

  • No, this is handled by the physical network infrastructure (physical switches and routers) rather than at the virtualization layer.
  • Yes, unauthorized packets will either be dropped at some step in the logical datapath, or the address of the remote tenant may be private, so there will be no address to send to in the first place.
  • No, it can enforce segmentation (securing communication outside the data center) but not communication between tenants.
  • Yes, unauthorized packets will be sent to the central controller where they will be dropped.

Q19. NVP, as described in the paper, can directly guarantee (check all that apply):

  • Location-independent addressing
  • Layer 2 network semantics
  • Segmentation across tenants
  • Performance uniformity regardless of where VMs are moved

Cloud Networking Week 4 Quiz Answers

Quiz 1: Cloud WAN Connectivity

Q1. You’re provisioning a WAN link connecting two data centers. You run two services over this link – one (red) is latency critical, and the other (blue) is comprised of background latency insensitive tasks. The plot below shows the link’s utilization (y-axis) over time (x-axis) under your present provisioning. In the second time slot, the link is fully utilized, with 40% of capacity used by the latency critical service and 60% for the background traffic. Assume that this is a zero-loss situation – the traffic in this position precisely matches link capacity. Now, supposing you had the freedom to schedule traffic for the two services (without delaying latency-critical traffic), with what fraction of the link’s capacity could you still operate your WAN without any losses for either service?

  • 0.4
  • 0.2
  • 0.6
  • 1

Q2. What are some of the benefits of having multiple geo-distributed data centers?

  • Data availability in case of a natural catastrophe at one site
  • Lower latency for clients in multiple regions
  • Load balancing across sites
  • All of the above

Q3. Which of the following techniques does Google’s B4 use for scalability?

  • It uses a completely distributed design without any centralized control.
  • It aggregates links as well as traffic flows between pairs of sites.
  • Each WAN site is restricted to communicate with only two other sites at a time.
  • All of the above

Q4. In the below network scenario, flows for three different applications – blue, green (also shown using a dashed arrow), and red – compete for capacity. Assume all links have unit capacity and there is no other traffic in the network. Assume the two flows for the red application have different sources and destinations and that the bottlenecks aren’t at the sources and destinations for any of the applications. If the network enforces link-level fairness at all links, how much capacity do the three applications achieve?

  • Red: 1 unit, Blue: 0.5 units, Green: 0.5 units
  • Red: 0.5 units, Blue: 0.5 units, Green: 0.5 units
  • Red: 1 unit, Blue: 1 unit, Green: 1 unit
  • Red: 0.5 units, Blue: 1 unit, Green: 1 unit

Q5. What is a TCP cookie?

  • A delicious cookie, delivered reliably, and requiring acknowledgement (praise the baker)
  • A cryptographic token included in a client’s request to a server, authenticating the client to the server and allowing the server to send data without completing a 3-way TCP handshake with the client
  • A TCP option storing old connection parameters, such as congestion window size

Q6. You’re sending 100 kilobytes of data from Chicago, USA to New Delhi, India (a distance of roughly 12,000 kilometers). Assume that the bottleneck bandwidth on the route is x Gbps. How long does it take?

  • Even if the Internet operated at the speed of light in a vacuum, at least 40 milliseconds
  • 0.8 milliseconds for x=1, 8 milliseconds for x=0.1
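
For reference, a sketch separating the two delay components, propagation and transmission:

```python
distance_m = 12_000e3               # Chicago to New Delhi, roughly
c = 3e8                             # speed of light in vacuum, m/s
print(distance_m / c * 1e3, "ms one-way propagation at c")   # 40 ms

data_bits = 100e3 * 8               # 100 kilobytes
for x in (1, 0.1):                  # bottleneck bandwidth in Gbps
    print(x, "Gbps:", data_bits / (x * 1e9) * 1e3, "ms transmission")  # 0.8 ms / 8 ms
```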

Q7. In which of these cases does caching of web data not help?

  • Dynamic data (e.g., stock prices)
  • Personalized data
  • Encrypted data
  • All of the above

Q8. Suppose a small interaction between a client and server requires a total of 6 round-trips – (1) SYN and SYN-ACK, (2) ACK and request from the client, (3) the server starts by sending a small initial congestion window and gets an ACK, followed by three more such data-ACK cycles until the client has all the data, followed by a connection termination by the server. The client is in China (on the extreme right in the figure), and the server in the US (extreme left); the path between them is shown as well. Without any CDN involvement, with the interaction taking place directly between the client and the server, how long does it take? Assume that all the link-latencies shown in the figure are round-trip.

  • 1.2 seconds
  • 2.4 seconds
  • 3 seconds

Q9. In the same scenario as above, now imagine that the CDN node close to the client terminates the client’s connection to the server and instead uses a persistent connection to the server to fetch all the data in one interaction with the server, which it then passes on to the client in the usual slow-start manner. The client thus experiences the same set of protocol and data messages but only with the close edge server. How long does the interaction take now?

  • 2.4 seconds
  • 0.55 seconds
  • 1.2 seconds
  • 3 seconds

Q10. In the figure below, the latencies between some pairs of locations along the BGP exposed paths are shown. Will overlay routing help reduce latency here for any pair of nodes?

  • It depends on BGP policy.
  • Yes
  • No

Q11. How does 8.8.8.8 achieve low latency from several different parts of the world? A single physical server cannot be within 10ms of both Singapore and New York.

  • It uses DNS to achieve this.
  • It uses IP anycast – the same IP address is announced to Internet peers from multiple locations, whereby clients in different locations reach different close-by instances of the service, even though they’re sending packets to the same address.
  • It’s a powerful and ancient dark magic, incomprehensible to most.

Q12. What factors might govern the choice of a CDN node to serve a request? (Check all that apply)

  • Tarot cards and astrology
  • No other factors except physical distance are important
  • Estimated latency to the client from different locations
  • Load at different CDN locations

Q13. Name resolution entries are cached in local DNS servers close to clients. What settings are appropriate for time-to-live (TTL) for name resolution cache entries for CDNs?

  • Long TTLs for both entries pointing to a specific cluster and for those pointing to the CDN resolver (which is further responsible for resolving names to a cluster)
  • Short TTL for entry pointing to a specific cluster, longer for the entry pointing to the CDN resolver (which is further responsible for resolving names to a cluster)
  • Long TTL for entry pointing to a specific cluster, shorter for the entry pointing to the CDN resolver (which is further responsible for resolving names to a cluster)
  • Short TTLs for both entries pointing to a specific cluster and for those pointing to the CDN resolver (which is further responsible for resolving names to a cluster)

Q14. Why might services like OpenDNS and GoogleDNS interfere with CDN operation?

  • Using such services can mean that the client and its DNS resolver are not located near each other, and thus, the CDN’s estimates for performance for the client can be incorrect.
  • Using such services allows even better CDN performance.
  • Use of such services has no relationship with CDN performance for clients.

Q15. Suppose you have a large pool of identical servers hosting a replicated service. Further, assume that the request-response time for the service is random, with the following distribution: 10ms for 99.8% of the requests, but 1 second for the remaining 0.2%. Assume requests are completely independent. If you make 100 requests in a batch in parallel, what is the probability that your batch of requests takes 1 second?

  • ~37%
  • ~63%
  • ~18%
  • ~10%
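
For reference, this is the same tail-latency arithmetic as in the Week 1 quiz, with the slow-request probability changed to 0.2%:

```python
# Probability that at least one of 100 independent requests is slow (p = 0.002).
p_slow = 0.002
print(1 - (1 - p_slow) ** 100)  # ~0.181, i.e., roughly 18%
```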

Q16. Now, let’s try to alleviate the problem we see above – the above chance of a request-batch taking 1s is very poor! In the same scenario, suppose that to speed up our request-batch, we start a timer immediately after issuing the 100 requests in the batch and then replicate all the queries for which we don’t get responses within 10ms. What is the likelihood that we make 3 or more replicated requests? Assume that we only make one batch of replicated queries and don’t do the replication for the replicated queries themselves, etc. Also, you can use this binomial calculator tool if you like.

  • ~10%
  • ~3%
  • ~1%
  • ~0.1%
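
For reference, a sketch of the binomial calculation: the number of requests that miss the 10ms deadline, and hence get replicated, follows a Binomial(100, 0.002) distribution.

```python
from math import comb

# Number of slow (replicated) requests ~ Binomial(n = 100, p = 0.002).
n, p = 100, 0.002
p_at_most_two = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(1 - p_at_most_two)  # ~0.001, i.e., roughly 0.1%
```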

Q17. Further, in the above scenario (with the replication after 10ms), let’s assume that we have to replicate exactly 3 requests after 10ms. What is the likelihood that our batch of requests still takes 1 second?

  • ~0.6%
  • ~18%
  • ~9%
  • ~3%
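
For reference, a sketch under the assumption that each replica is an independent draw from the same response-time distribution: the batch still takes about 1 second only if at least one of the 3 replicas is itself slow.

```python
# Probability that at least one of the 3 replicated requests is itself slow.
p_slow = 0.002
print(1 - (1 - p_slow) ** 3)  # ~0.006, i.e., roughly 0.6%
```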

Q18. What types of overhead might we pay in general for request replication?

  • Higher load on the servers serving the requests
  • Client-side cost of making the replicated queries
  • Additional network load from the additional requests and responses
  • All of the above

Q19. Which of the following methods can applications themselves use to deal with TCP incast? (Check all that apply)

  • Ignore one or more late responses, thereby being more tolerant to latency variations
  • Deliberately space out requests and responses to reduce the chances of incast
  • Show prompts asking the network operator to install larger switch buffers
  • Limit the size of individual responses so switches can accommodate more simultaneous responses in their buffers

Q20. You see the below scenario in a typical adaptive bit-rate video streaming application. The graph in the top half shows your running estimate of network capacity. The bottom half shows the streaming buffer with data flowing in from the network at the left and playing out to the screen on the right. In the marked time slot, you note that the estimated capacity is smaller than the current bit-rate. How does a traditional ABR streaming application react?

  • It requests a chunk at a reduced bit-rate.
  • It requests a chunk at an increased bit-rate.
  • It requests a chunk at the same bit-rate.

Q21. Now suppose that in the above scenario, you were using the buffer-based algorithm described in the lecture (ignoring for a moment the fact that that algorithm is unlikely to have given you that sequence of bit-rates). How does the buffer-based algorithm react to the buffer occupancy in requesting the next chunk?

  • It requests a chunk at a small bit-rate, close to the minimum.
  • It requests a chunk at the same bit-rate.
  • It requests a chunk at a large bit-rate, close to the maximum, because the buffer occupancy is high.

Cloud Networking Week 5 Quiz Answers

Quiz 1: Programming Assignment 2 Survey (optional)

Q1. How long did Programming Assignment 2 take you to finish? (Please answer in number of minutes.)

Quiz 2: Programming Assignment 3 Survey (optional)

Q1. How long did Programming Assignment 3 take you to finish? (Please answer in number of minutes.)

Quiz 3: Programming Assignment 4 Survey (optional)

Q1. How long did Programming Assignment 4 take you to finish? (Please answer in number of minutes.)

10

Cloud Networking Course Review:

In our experience, we suggest you enroll in the Cloud Networking course and gain some new skills from professionals, completely free; we assure you it will be worth it.

This Cloud Networking course is available on Coursera for free. If you get stuck on any quiz or graded assessment, just visit Networking Funda to get the Cloud Networking Coursera Quiz Answers.

Conclusion:

I hope these Cloud Networking Coursera Quiz Answers help you learn something new from this course. If they helped you, don’t forget to bookmark our site for more Coursera Quiz Answers.

This course is intended for audiences of all experience levels who are interested in learning about cloud computing and networking; there are no prerequisite courses.

Keep Learning!

Get all Course Quiz Answers of Cloud Computing Specialization

Cloud Computing Concepts, Part 1 Coursera Quiz Answers

Cloud Computing Concepts, Part 2 Coursera Quiz Answers

Cloud Computing Applications, Part 1: Cloud Systems and Infrastructure Quiz Answers

Cloud Computing Applications, Part 2: Big Data and Applications in the Cloud Quiz Answers

Cloud Networking Coursera Quiz Answers
