AZ-305 Renewal - Update
Question 1
You need to recommend an Azure SQL Database service tier that supports a 40-TB database. The solution must provide the fastest failover time.
Which service tier should you recommend?
- Business critical
- General Purpose
- Hyperscale
For a 40-TB database and the requirement for fastest failover time, the recommended Azure SQL Database service tier is:
✅ Hyperscale
Why Hyperscale?
- Hyperscale supports very large database sizes, far beyond 40 TB, scaling up to ~100 TB or more.
- Its architecture separates compute and storage, which allows very large databases without the size limits of other tiers.
- Hyperscale uses rapid replica provisioning and has fast failover because of isolated compute nodes and highly available replicas — much faster than General Purpose.
Comparison
| Feature | General Purpose | Business Critical | Hyperscale |
|---|---|---|---|
| Max DB Size | ~4 TB | ~4 TB | ~100+ TB |
| High Availability | Standard | Fastest among standard tiers | Fast failover via replicas |
| Failover Time | Moderate | Better | Fast — near instant |
| Ideal For | Most workloads | IO intensive | Very large DBs (40+ TB) |
Summary
- General Purpose: Not suitable for 40 TB (max ~4 TB).
- Business Critical: Better HA than General Purpose, but still limited by max size.
- Hyperscale: Supports 40 TB and provides the fastest failover time among options.
✅ Recommended: Hyperscale
Question 2
You are designing an application that requires table storage. You are evaluating whether to use Azure Table storage or Azure Cosmos DB for Table.
Which requirement requires the use of Azure Cosmos DB for Table instead of Azure Table storage?
- A schemaless design
- An SLA of 99.99%
- A consumption-based pricing model
- Multiple secondary replicas
The correct answer is:
✅ Multiple secondary replicas
Why this is the correct requirement
Here’s the breakdown of each option:
- A schemaless design ❌ Supported by both Azure Table Storage and Azure Cosmos DB for Table.
- An SLA of 99.99% ❌ Azure Table Storage already offers a 99.99% availability SLA (for reads with RA-GRS), so this alone does not require Cosmos DB.
- A consumption-based pricing model ❌ Both services support usage-based pricing (transactions plus storage).
- Multiple secondary replicas ✅ Only Azure Cosmos DB for Table supports global distribution with multiple readable secondary replicas and automatic multi-region replication.
Key Difference
| Feature | Azure Table Storage | Azure Cosmos DB for Table |
|---|---|---|
| Schemaless | ✅ | ✅ |
| 99.99% SLA | ✅ | ✅ |
| Consumption pricing | ✅ | ✅ |
| Multiple secondary replicas | ❌ | ✅ |
Final Answer:
Multiple secondary replicas
Question 3
You need to ensure credit card numbers in an Azure SQL database are protected from exposure to non-privileged users. What should you use?
- Dynamic data masking
- SSL
- TLS
- Transparent data encryption (TDE)
The correct answer is:
✅ Dynamic data masking
Why this is correct
Dynamic Data Masking (DDM) is specifically designed to hide sensitive data (like credit card numbers) from non-privileged users at query time, while still allowing privileged users (e.g., admins) to see the real values.
It masks the data in results such as:
XXXX-XXXX-XXXX-1234
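The effect of a partial mask can be sketched in Python (illustrative only; in Azure SQL you configure this declaratively with the built-in `partial()` masking function, not application code):

```python
def mask_card(card_number: str) -> str:
    """Mimic a partial dynamic data mask that exposes only the
    last four digits to a non-privileged user."""
    return f"XXXX-XXXX-XXXX-{card_number[-4:]}"

print(mask_card("4111-1111-1111-1234"))  # XXXX-XXXX-XXXX-1234
```

A privileged user, or any user granted the UNMASK permission, sees the real column value instead.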
Why the others are wrong
- SSL ❌ Protects data in transit (network encryption), not from user query access.
- TLS ❌ The modern successor to SSL; it also protects data in transit only.
- Transparent Data Encryption (TDE) ❌ Encrypts data at rest (disk/storage). Users with database access still see full credit card numbers.
Summary
| Feature | Protects from non-privileged users seeing data |
|---|---|
| Dynamic Data Masking | ✅ YES |
| SSL / TLS | ❌ No (network only) |
| TDE | ❌ No (storage only) |
✅ Final Answer:
Dynamic data masking
If you want full security for credit card data in real-world systems, best practice is to combine:
- DDM (data exposure control)
- TDE (encryption at rest)
- TLS (encryption in transit)
- Role-based access control
- Column-level encryption / Always Encrypted
But for this question → Dynamic data masking is the correct choice.
Question 4
Your transactional database requirements exceed the maximum storage capacity of a single Azure SQL database in the General Purpose service tier.
What should you implement to resolve the capacity issue?
- elastic pools
- read scale-out
- sharding
The correct answer is:
✅ sharding
Why this is correct
Your problem is exceeding the maximum storage capacity of a single database in the General Purpose tier. That’s a scale-out storage problem, not performance or read-scaling.
Sharding solves this by:
- Splitting data across multiple databases
- Each shard stores a subset of the data
- Total capacity scales horizontally
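A minimal sketch of hash-based shard routing (the database names are hypothetical; in practice Azure SQL sharding is usually built with the Elastic Database tools and a shard map):

```python
import hashlib

# Hypothetical shard databases; each holds a subset of the rows.
SHARDS = ["sqldb-shard-0", "sqldb-shard-1", "sqldb-shard-2"]

def shard_for(customer_id: str) -> str:
    """Map a shard key deterministically to one database, so total
    capacity grows with the number of shards instead of being capped
    by a single database's size limit."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```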
Why the others are wrong
- Elastic pools ❌ Share compute resources across databases but do not increase the maximum size of a single database.
- Read scale-out ❌ Adds read-only replicas for performance, not storage capacity.
Summary
| Option | Solves storage capacity limit? |
|---|---|
| Elastic pools | ❌ |
| Read scale-out | ❌ |
| Sharding | ✅ |
✅ Final Answer:
sharding
This is a classic Azure exam pattern:
Storage limit exceeded → sharding
Performance issue → scale up / read scale-out
Cost optimization across databases → elastic pools
Question 5
You are designing an application that will store data in Azure Blob Storage.
You need to recommend a solution to ensure that data in the storage account is encrypted by using customer-managed keys.
What should you include in the recommendation?
- Azure Key Vault
- Azure Private Link
- Service endpoint
- Shared access signature
The correct answer is:
✅ Azure Key Vault
Why this is correct
To use customer-managed keys (CMK) for encrypting data in Azure Blob Storage, the encryption keys must be stored and managed in Azure Key Vault.
Azure Storage encryption with CMK works by:
- Storing the encryption key in Azure Key Vault
- Configuring the storage account to use that key for encryption-at-rest
- Giving the storage account access to the Key Vault via managed identity
Why the others are wrong
- Azure Private Link ❌ Secures network access, not encryption key management.
- Service endpoint ❌ A network routing/security feature, not encryption.
- Shared access signature (SAS) ❌ An access control mechanism, not encryption.
Summary
| Option | Required for CMK encryption |
|---|---|
| Azure Key Vault | ✅ |
| Azure Private Link | ❌ |
| Service endpoint | ❌ |
| SAS | ❌ |
✅ Final Answer:
Azure Key Vault
This is a very common Azure exam concept:
Microsoft-managed keys → the default
Customer-managed keys → Azure Key Vault
Question 6
You are designing a storage solution for a customer that will use Azure Storage. You are evaluating whether the customer needs a standard general purpose v2 storage account or a premium file shares storage account.
Which requirement requires the use of a premium file shares storage account?
- Support for Geo-zone-redundant storage (GZRS).
- Support for NFS file shares.
- Support for private endpoints.
- Support for SMB file shares.
The correct answer is:
✅ Support for NFS file shares
Why this is correct
Azure Premium File Shares are required when you need NFS (Network File System) support. Azure Files NFS v4.1 is only available on premium file shares, not on standard GPv2 storage accounts.
Option breakdown
- Support for geo-zone-redundant storage (GZRS) ❌ GZRS is supported by standard GPv2 accounts, not premium file shares.
- Support for NFS file shares ✅ Only premium file shares support NFS v4.1.
- Support for private endpoints ❌ Supported by both standard and premium accounts.
- Support for SMB file shares ❌ Supported by both standard and premium Azure Files.
Summary table
| Feature | Standard GPv2 | Premium File Shares |
|---|---|---|
| SMB | ✅ | ✅ |
| Private endpoints | ✅ | ✅ |
| GZRS | ✅ | ❌ |
| NFS | ❌ | ✅ |
✅ Final Answer:
Support for NFS file shares
This is a classic Azure exam rule:
Need NFS → premium Azure Files
SMB only → standard GPv2 is sufficient
Question 7
You plan to implement Azure Files to host a transaction-heavy workload. You need to select the optimal tier for the workload to meet the following requirements:
- Storage must run on solid-state drives (SSDs).
- Costs must be minimized.
Which tier should you select?
- Cool
- Hot
- Premium
- Transaction optimized
The correct answer is:
✅ Premium
Why this is correct
Your requirements are:
- Must run on SSDs
- Costs must be minimized (within the constraint of SSD storage)
In Azure Files, only Premium file shares use SSD-based storage. All other tiers use HDD-based storage.
Tier comparison
| Tier | Storage Type | SSD | Transaction-heavy workload |
|---|---|---|---|
| Cool | HDD | ❌ | ❌ |
| Hot | HDD | ❌ | ❌ |
| Transaction optimized | HDD | ❌ | ❌ |
| Premium | SSD | ✅ | ✅ |
Even though Premium is more expensive than HDD tiers, it is the lowest-cost option that meets the SSD requirement — making it the correct choice.
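The selection logic is constrained minimization, sketched below; the prices are made-up placeholders for illustration, not Azure pricing:

```python
# Hypothetical relative prices purely for illustration; real Azure Files
# pricing varies by region and billing model.
TIERS = {
    "Cool": {"media": "HDD", "price": 1},
    "Hot": {"media": "HDD", "price": 2},
    "Transaction optimized": {"media": "HDD", "price": 3},
    "Premium": {"media": "SSD", "price": 6},
}

def cheapest_tier(require_ssd: bool) -> str:
    """Minimize cost only among tiers that satisfy the hard constraint."""
    candidates = {name: t for name, t in TIERS.items()
                  if not require_ssd or t["media"] == "SSD"}
    return min(candidates, key=lambda name: candidates[name]["price"])

print(cheapest_tier(require_ssd=True))  # Premium
```

With the SSD constraint dropped, the same function would return the cheapest HDD tier instead.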
✅ Final Answer:
Premium
This is a classic exam phrasing trick:
"Costs must be minimized" means minimize within the constraints; the SSD requirement forces Premium regardless of cost.
Question 8
You are planning the high availability for Azure Virtual Machines. Which scenario requires the use of an availability zone?
- You need redundancy in the event a hardware component fails. A failure of a datacenter is acceptable.
- You need redundancy in the event a power outage affects a physical location. A regional failure is acceptable.
- You need the ability to fail over virtual machines between regions.
The correct answer is:
✅ You need redundancy in the event a power outage affects a physical location. A regional failure is acceptable.
Why this is correct
Availability Zones are designed to protect against datacenter-level failures (such as power, cooling, or network outages) within a single Azure region.
Each availability zone is:
- A separate physical location
- With independent power, cooling, and networking
- But still inside the same region
So they are ideal when:
- You want protection from local physical failures
- But regional failure is acceptable
Option breakdown
- Hardware component failure; datacenter failure acceptable ❌ → Use availability sets, not availability zones.
- Power outage at a physical location; regional failure acceptable ✅ → This is exactly what availability zones are for.
- Failover between regions ❌ → Requires Azure Site Recovery or geo-replication, not availability zones.
Summary
| Requirement | Correct Solution |
|---|---|
| Hardware failure only | Availability Set |
| Datacenter failure | Availability Zone |
| Regional failure | Cross-region DR |
✅ Final Answer:
You need redundancy in the event a power outage affects a physical location. A regional failure is acceptable.
This is a core Azure HA concept:
Availability Set → rack/hardware failure
Availability Zone → datacenter failure
Multi-region → regional failure
Question 9
You are planning to deploy an application that will use an Azure SQL database. The database will contain up to 100 TB of data. The solution should minimize cost.
Which Azure SQL database service tier should you implement?
- Business critical
- General purpose
- Hyperscale
The correct answer is:
✅ Hyperscale
Why this is correct
Your requirements are:
- Database size up to 100 TB
- Minimize cost
Only Hyperscale supports databases at this scale (up to ~100 TB+). Both General Purpose and Business Critical are limited to only a few TB per database, so they cannot meet the size requirement at all.
Even though Hyperscale isn’t the cheapest tier in general, it is the only tier that supports 100 TB, making it the lowest-cost viable option that actually satisfies the requirements.
Tier comparison
| Tier | Max DB Size | Meets 100 TB? | Cost |
|---|---|---|---|
| General Purpose | ~4 TB | ❌ | Lower |
| Business Critical | ~4 TB | ❌ | Higher |
| Hyperscale | ~100+ TB | ✅ | Scales, pay for what you use |
Key exam principle
If the size requirement exceeds GP/BC limits → Hyperscale is mandatory, regardless of cost wording.
✅ Final Answer:
Hyperscale
Question 10
You have an on-premises datacenter and an Azure subscription. The Azure subscription contains an Azure virtual network.
You need to minimize latency between the datacenter and your Azure virtual network.
Which connectivity method should you recommend?
- a Point-to-Site (P2S) VPN connection
- a Site-to-Site (S2S) VPN connection
- an ExpressRoute circuit
The correct answer is:
✅ an ExpressRoute circuit
Why this is correct
Your goal is:
Minimize latency between on-premises datacenter and Azure VNet
- ExpressRoute provides a private, dedicated connection between your on-premises network and Azure, bypassing the public internet:
  - Much lower latency
  - More consistent performance
  - Higher bandwidth than VPN

Option breakdown
- Point-to-Site (P2S) VPN ❌ Connects individual clients, not the entire datacenter, and runs over the internet → higher latency.
- Site-to-Site (S2S) VPN ❌ Connects the on-premises network to Azure over an IPsec VPN, but still traverses the internet → higher latency than ExpressRoute.
- ExpressRoute ✅ A private, dedicated connection → lowest latency and most consistent performance.
Summary
| Connectivity Type | Latency | Use Case |
|---|---|---|
| P2S VPN | High | Individual client access |
| S2S VPN | Medium | Site-to-site over Internet |
| ExpressRoute | Lowest | Dedicated, low-latency datacenter-to-Azure |
✅ Final Answer:
an ExpressRoute circuit
Question 11
Your company purchases an Azure subscription and plans to migrate several on-premises virtual machines to Azure.
You need to design the infrastructure required for the Azure Virtual Machines solution.
What should you first consider in the design?
- the names of the virtual machines
- the number of Azure Storage accounts
- the settings of the Azure virtual networks
- the size of the virtual machines
The correct answer is:
✅ the settings of the Azure virtual networks
Why this is correct
When designing an Azure Virtual Machines (VMs) infrastructure, the first step is to plan networking, because:
- Every VM must be connected to a virtual network (VNet).
- VNets determine IP addressing, subnetting, connectivity to on-premises networks, and network security.
- The size, placement, and connectivity of VMs depend on the VNet design.
Only after the network is planned can you decide:
- VM sizes
- Storage accounts
- Naming conventions
Option breakdown
- The names of the virtual machines ❌ Naming is important but comes later, after network and sizing decisions.
- The number of Azure Storage accounts ❌ Storage planning depends on VM count, disks, and throughput, and is secondary to networking.
- The settings of the Azure virtual networks ✅ The primary consideration: networking underpins all VM placement, connectivity, and communication.
- The size of the virtual machines ❌ VM sizing is based on workload and performance requirements, but you first need the network and subnet design.
Summary
| Design Consideration | Design Order |
|---|---|
| Azure virtual networks | First |
| VM sizes | Later |
| Storage accounts | Later |
| VM names | Last |
✅ Final Answer:
the settings of the Azure virtual networks
Question 12
Your company plans to migrate an ASP.NET website to an Azure App Service named App1.
You need to design a solution for App1 that meets the following requirements:
- Provides a development environment and a production environment
- Can instantly swap environments without downtime
What should you include in the design?
- a load balancer and two virtual machines
- two Azure Storage accounts
- two deployment slots
- two snapshots
The correct answer is:
✅ two deployment slots
Why this is correct
Your requirements:
- Development and production environments
- Instant swap with no downtime
Azure App Service deployment slots are designed for this exact scenario:
- You can create multiple slots (e.g., staging and production) under the same App Service.
- Swap slots instantly to move the staging/development version into production without downtime.
- Slots share the same configuration, SSL certificates, and App Service Plan.
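Conceptually, a swap just exchanges which deployment each slot label points at, which is why it completes near-instantly; a rough sketch:

```python
class AppService:
    """Toy model of App Service slot swapping: no code is redeployed,
    the 'production' and 'staging' labels simply exchange targets."""
    def __init__(self):
        self.slots = {"production": "v1", "staging": "v2"}

    def swap(self, a: str = "staging", b: str = "production") -> None:
        # Atomic exchange of slot targets, so there is no downtime window.
        self.slots[a], self.slots[b] = self.slots[b], self.slots[a]

app = AppService()
app.swap()
print(app.slots["production"])  # v2
```

A second swap would roll production back to v1, which is also how slot-based rollback works in practice.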
Option breakdown
- A load balancer and two virtual machines ❌ An IaaS setup, not App Service (PaaS), and it provides no slot-based swap capability.
- Two Azure Storage accounts ❌ Storage accounts hold data; they are not app deployment environments.
- Two deployment slots ✅ Provides dev/staging and production environments and enables an instant swap without downtime.
- Two snapshots ❌ Snapshots are for backups, not active environment swapping.
Summary
| Requirement | Solution |
|---|---|
| Dev + Prod environments | Deployment slots |
| Instant swap, no downtime | Deployment slots |
✅ Final Answer:
two deployment slots
Question 13
You are authoring custom code that will process HTTP requests to perform basic calculations on geographic coordinates.
Requests will be infrequent.
You need to recommend a compute solution to host the custom code. Your solution must minimize the implementation effort and compute cost.
Which compute solution should you recommend?
- Azure API Management
- Azure Batch
- Azure Logic Apps
- Azure Functions
The correct answer is:
✅ Azure Functions
Why this is correct
Your requirements:
- Custom code that processes HTTP requests
- Requests will be infrequent
- Minimize implementation effort and compute cost
Azure Functions is ideal because:
- It is serverless → you don’t need to manage infrastructure.
- Event-driven → it can be triggered by HTTP requests.
- Consumption-based pricing → you only pay when the function runs, making it very cost-effective for infrequent requests.
- Minimal setup and implementation effort.
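As an illustration of the workload, a great-circle distance calculation is the kind of stateless coordinate math such an HTTP-triggered function might host (plain Python shown; the Azure Functions binding code is omitted):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two coordinates:
    a small, stateless calculation well suited to a consumption-plan
    Azure Function."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```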
Option breakdown
- Azure API Management ❌ Manages APIs but does not run custom code; adds cost and complexity.
- Azure Batch ❌ Designed for large-scale parallel and batch processing, not lightweight HTTP requests.
- Azure Logic Apps ❌ Orchestrates workflows using prebuilt connectors; writing custom logic there is more effort for simple calculations.
- Azure Functions ✅ Perfect for HTTP-triggered, lightweight, infrequent tasks; minimizes cost and effort.
Summary Table
| Solution | Suitable for HTTP requests? | Cost-effective for infrequent requests? | Effort |
|---|---|---|---|
| Azure Functions | ✅ | ✅ | Minimal |
| Azure API Management | ❌ | ❌ | High |
| Azure Batch | ❌ | ❌ | High |
| Azure Logic Apps | ⚠ | ⚠ | Moderate |
✅ Final Answer:
Azure Functions
Question 14
Your company plans to migrate an on-premises PHP web app named WebApp1 to Azure. You need to recommend which Azure service to use to run WebApp1. The solution must minimize administrative effort.
Which service should you recommend?
- Azure App Service
- Azure Kubernetes
- Azure Logic Apps
- Azure Virtual Machines
The correct answer is:
✅ Azure App Service
Why this is correct
Your requirements:
- Run a PHP web app
- Minimize administrative effort
Azure App Service is ideal because:
- Fully managed Platform-as-a-Service (PaaS)
- Supports PHP natively
- Handles patching, scaling, and load balancing automatically
- Minimal infrastructure management compared to VMs or Kubernetes
Option breakdown
- Azure App Service ✅ PaaS, fully managed, supports PHP, minimal administrative effort.
- Azure Kubernetes Service ❌ Requires containerizing the app and managing clusters; far from minimal effort.
- Azure Logic Apps ❌ Used for workflow automation, not for hosting web applications.
- Azure Virtual Machines ❌ IaaS; requires managing the OS, runtime, updates, and scaling → higher administrative effort.
Summary Table
| Service | Supports PHP | Administrative Effort | Suitable? |
|---|---|---|---|
| Azure App Service | ✅ | Minimal | ✅ |
| Azure Kubernetes Service | ✅ (with containers) | High | ❌ |
| Azure Logic Apps | ❌ | Minimal | ❌ |
| Azure Virtual Machines | ✅ | High | ❌ |
✅ Final Answer:
Azure App Service
Question 15
Your company is designing an application named App1 that will use data from Azure SQL Database. App1 will be accessed over the internet by many users.
You need to recommend a solution for improving the performance of dynamic SQL Database data retrieval by App1.
What should you include in the recommendation?
- a CDN profile
- Azure Cache for Redis
- Azure HPC cache
- ExpressRoute
The correct answer is:
✅ Azure Cache for Redis
Why this is correct
Your scenario:
- App1 uses dynamic data from Azure SQL Database
- Accessed over the internet by many users
- Goal: Improve performance of data retrieval
Azure Cache for Redis is designed to:
- Store frequently accessed or compute-heavy data in memory
- Reduce latency and database load
- Improve application responsiveness for dynamic content
This is exactly the use case for improving SQL Database performance in high-traffic apps.
Option breakdown
- CDN profile ❌ Optimizes delivery of static content (images, CSS, JS), not dynamic SQL data.
- Azure Cache for Redis ✅ An in-memory cache for dynamic data that reduces SQL database calls and improves response times.
- Azure HPC Cache ❌ Designed for high-performance computing workloads, not general SQL caching.
- ExpressRoute ❌ Provides private network connectivity between on-premises and Azure; it does not improve SQL query performance over the public internet.
Summary Table
| Solution | Optimizes SQL Database dynamic queries? | Suitable? |
|---|---|---|
| CDN profile | ❌ | No |
| Azure Cache for Redis | ✅ | Yes |
| Azure HPC Cache | ❌ | No |
| ExpressRoute | ❌ | No |
✅ Final Answer:
Azure Cache for Redis
Question 16
You are developing an application that will use a message-based delivery system. The solution must be able to store over 90 GB of data and provide an audit trail of all messages.
Which messaging solution should you use?
- Azure Functions
- Azure Queue Storage
- Azure Service Bus queues
- Azure Service Bus topics
The correct answer is:
✅ Azure Service Bus queues
Why this is correct
Your requirements:
- Message-based delivery system
- Store over 90 GB of data
- Provide an audit trail of all messages
Azure Service Bus queues are designed for:
- Enterprise messaging
- Large queue capacity (up to 80 GB per partitioned queue in the Standard tier, with further scale in the Premium tier)
- Message auditing and tracking via dead-letter queues and message properties
It supports transactional messaging, duplicate detection, and message ordering — features Azure Queue Storage does not provide.
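The audit-trail idea can be sketched with a toy queue that records every operation (in Azure this capability comes from message properties, dead-letter queues, and diagnostics, not an in-memory list):

```python
from collections import deque

class AuditedQueue:
    """Toy point-to-point queue that logs every send and receive,
    illustrating the audit-trail requirement the question asks for."""
    def __init__(self) -> None:
        self._messages: deque[str] = deque()
        self.audit_log: list[tuple[str, str]] = []

    def send(self, body: str) -> None:
        self._messages.append(body)
        self.audit_log.append(("sent", body))

    def receive(self) -> str:
        body = self._messages.popleft()  # FIFO delivery
        self.audit_log.append(("received", body))
        return body
```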
Option breakdown
- Azure Functions ❌ Serverless compute, not a messaging solution.
- Azure Queue Storage ❌ A simple queue for lightweight messaging. A queue can grow to the storage account's capacity limit, but the service does not provide rich auditing or duplicate detection.
- Azure Service Bus queues ✅ Enterprise-grade messaging with support for message audit trails, sessions, transactions, and larger capacity through partitioned queues.
- Azure Service Bus topics ❌ Also enterprise-grade, but used for publish-subscribe scenarios, not simple point-to-point queueing.
Summary Table
| Solution | Supports >90 GB | Audit Trail | Use Case |
|---|---|---|---|
| Azure Functions | ❌ | ❌ | Compute, not messaging |
| Azure Queue Storage | ✅ (with storage account limit) | ❌ | Simple queues only |
| Azure Service Bus queues | ✅ | ✅ | Enterprise messaging with auditing |
| Azure Service Bus topics | ✅ | ✅ | Pub-sub scenarios |
✅ Final Answer:
Azure Service Bus queues
Question 17
You are designing a music-sharing application in Azure.
You need to recommend an application design that meets the following requirements:
- The application must notify users when new music is uploaded to Azure.
- Users must be able to decide when they want to access new music.
What should you recommend using to route the required information for new music?
- Azure Event Grid
- Azure Event Hubs
- Azure Monitor
- Azure Queue Storage
The correct answer is:
✅ Azure Event Grid
Why this is correct
Your requirements:
- Notify users when new music is uploaded → event-driven notification
- Users decide when to access new music → asynchronous, decoupled delivery
Azure Event Grid is designed for event-based architectures:
- Routes events from event sources (like Blob Storage uploads) to subscribers (like your app or other services)
- Enables serverless, push-based notifications
- Allows users or services to react to events asynchronously, fulfilling the “decide when to access” requirement
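A minimal in-memory sketch of Event Grid's fan-out model (the event type string matches the real `Microsoft.Storage.BlobCreated` event; the handler and URL are hypothetical):

```python
from collections import defaultdict
from typing import Callable

# Event type → list of subscriber handlers, mimicking event subscriptions.
subscriptions: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    subscriptions[event_type].append(handler)

def publish(event_type: str, data: dict) -> None:
    """Fan the event out to every subscriber, the way Event Grid pushes
    a BlobCreated event to each of its event subscriptions."""
    for handler in subscriptions[event_type]:
        handler(data)

inbox: list[str] = []  # hypothetical per-user notification store
subscribe("Microsoft.Storage.BlobCreated", lambda e: inbox.append(e["url"]))
publish("Microsoft.Storage.BlobCreated",
        {"url": "https://example.blob.core.windows.net/music/track1.mp3"})
```

Users then drain their own notification store whenever they choose, which is the "decide when to access" half of the requirement.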
Option breakdown
- Azure Event Grid ✅ An event-based notification system that pushes events to subscribers; perfect for notifying about new uploads.
- Azure Event Hubs ❌ Designed for high-throughput telemetry and streaming data, not per-event notifications to individual subscribers.
- Azure Monitor ❌ An observability tool for metrics and logs, not application event routing.
- Azure Queue Storage ❌ A message queueing system; pull-based rather than push, and less suited to real-time event routing.
Summary Table
| Service | Use Case | Suitable? |
|---|---|---|
| Azure Event Grid | Event notification / publish-subscribe | ✅ |
| Azure Event Hubs | Telemetry streaming | ❌ |
| Azure Monitor | Logging and metrics | ❌ |
| Azure Queue Storage | Simple message queue | ⚠ (pull-based, not real-time notification) |
✅ Final Answer:
Azure Event Grid
Question 18
Your company has a production line that uses several hundred sensors.
You are designing a solution that will ingest the sensor data by using Azure Event Hubs, and then use Azure Stream Analytics to analyze it for anomalies.
You need to recommend a location to store the data ingested by Azure Event Hubs. The solution must minimize the cost.
Which location should you recommend?
- Azure Blob storage
- Azure Cosmos DB
- Power BI
- Azure SQL Database
The correct answer is:
✅ Azure Blob storage
Why this is correct
Your scenario:
- Hundreds of sensors producing streaming data → ingested via Azure Event Hubs
- Stream Analytics analyzes the data for anomalies
- Requirement: minimize cost for storing large amounts of raw sensor data
Azure Blob Storage is ideal because:
- Optimized for large-scale, cost-effective storage of unstructured data
- Can store raw or historical sensor data for long-term retention
- Supports integration with Stream Analytics as an output or for archival
- Much cheaper than Azure Cosmos DB or Azure SQL Database for high-volume data
Option breakdown
- Azure Blob Storage ✅ Cost-effective for large raw data and works well in Event Hubs + Stream Analytics pipelines.
- Azure Cosmos DB ❌ Expensive for large-volume storage; designed for low-latency, globally distributed access, not bulk ingestion.
- Power BI ❌ A visualization tool, not a storage solution.
- Azure SQL Database ❌ A relational database; more expensive and not optimized for high-volume streaming sensor data.
Summary Table
| Storage | Suitable for high-volume sensor data? | Cost-effectiveness |
|---|---|---|
| Azure Blob Storage | ✅ | ✅ |
| Azure Cosmos DB | ⚠ | ❌ |
| Power BI | ❌ | ❌ |
| Azure SQL Database | ❌ | ❌ |
✅ Final Answer:
Azure Blob storage
Question 19
You need to design network connectivity for a subnet in an Azure Virtual Network. The subnet will contain 30 virtual machines.
The virtual machines will establish outbound connections to internet hosts by using a shared pool of four public IP addresses.
Inbound connections to the virtual machines will be prevented.
What should you include in the design?
- Azure Private Link
- Azure Virtual WAN
- NAT Gateway
- User Defined Routes
The correct answer is:
✅ NAT Gateway
Why this is correct
Your scenario:
- Subnet with 30 VMs
- Outbound connections to the internet using a fixed pool of 4 public IP addresses
- Inbound connections must be blocked
Azure NAT Gateway is designed for exactly this:
- Provides outbound-only internet connectivity for all VMs in a subnet
- Allows you to specify a static pool of public IP addresses
- Blocks inbound traffic by default, which matches your requirement
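Conceptually, the gateway performs source NAT: each VM's outbound flow is translated to one of the pool addresses. A toy sketch (the IPs are hypothetical, and real SNAT also allocates ports per flow):

```python
# Hypothetical pool of four public IP addresses attached to the NAT gateway.
IP_POOL = ["20.0.0.1", "20.0.0.2", "20.0.0.3", "20.0.0.4"]

def snat_ip(private_ip: str) -> str:
    """All 30 VMs share the same four public addresses: the gateway
    picks one per flow (a deterministic hash here, for illustration)."""
    return IP_POOL[hash(private_ip) % len(IP_POOL)]
```

Inbound traffic has no such mapping, which is why a NAT gateway is outbound-only by design.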
Option breakdown
- Azure Private Link ❌ Provides private access to Azure PaaS services; not used for outbound internet connectivity.
- Azure Virtual WAN ❌ Designed for global networking, branch-to-Azure connectivity, and VPN/ExpressRoute; overkill for outbound internet from a single subnet.
- NAT Gateway ✅ Provides managed, outbound-only NAT, supports public IP address pools, and blocks inbound connections.
- User-defined routes (UDRs) ❌ Can control routing but do not provide NAT or outbound IP translation by themselves.
Summary Table
| Requirement | Solution |
|---|---|
| Outbound connections with a fixed IP pool | NAT Gateway |
| Inbound connections blocked | NAT Gateway |
| Private connectivity to PaaS | Private Link |
| Custom routing | UDR |
| Global branch connectivity | Virtual WAN |
✅ Final Answer:
NAT Gateway
Question 20
You are designing a virtual networking solution for an Azure subscription.
You intend to deploy virtual machines across three availability zones in the East US and West US Azure regions.
You need to recommend the minimum number of virtual networks that must be created for the planned virtual networking solution.
What should you recommend?
- 1
- 2
- 3
- 6
The correct answer is:
✅ 2
Why this is correct
Your scenario:
- You want to deploy VMs across three availability zones in two regions: East US and West US
- Requirement: minimum number of virtual networks (VNets)
Azure rules:
- VNets are regional → A single VNet cannot span multiple regions
- Availability zones exist within a region → Multiple AZs can share the same VNet in that region
Implications for your design:
- East US → 1 VNet spanning 3 availability zones
- West US → 1 VNet spanning 3 availability zones
✅ Minimum total VNets = 2
You do not need separate VNets for each availability zone.
Option breakdown
| Option | Correct? | Reason |
|---|---|---|
| 1 | ❌ | VNets cannot span multiple regions |
| 2 | ✅ | 1 VNet per region, spans AZs in that region |
| 3 | ❌ | Not needed; over-provisioning |
| 6 | ❌ | Only needed if you created 1 VNet per AZ in each region (unnecessary) |
Summary
- VNets are regional
- Availability zones can share a VNet
- Minimum VNets = 1 per region → 2 total
✅ Final Answer:
2
Question 21
You are designing an application that will be deployed on Azure Virtual Machines. The deployment will consist of one virtual network and three subnets.
All traffic between subnets will be inspected by a firewall appliance deployed on one of the subnets.
Which component should you include in the design to ensure traffic is inspected by the firewall appliance?
- Application security groups
- Azure Virtual WAN
- NAT gateways
- User defined routes
The correct answer is:
✅ User defined routes (UDRs)
Why this is correct
Your scenario:
- One virtual network with three subnets
- All traffic between subnets must be inspected by a firewall appliance
To force traffic to flow through the firewall, you need custom routing:
- User Defined Routes (UDRs) allow you to specify a next hop for traffic leaving a subnet
- You can configure the UDR so that traffic between subnets goes through the firewall subnet before reaching its destination
- This ensures inspection of all inter-subnet traffic
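The routing decision can be sketched as a longest-prefix-match lookup, with the UDR's `VirtualAppliance` next hop winning over the default route (all addresses below are hypothetical):

```python
import ipaddress

# Hypothetical route table: steer VNet-internal traffic to the firewall NVA.
routes = [
    ("10.0.0.0/16", "VirtualAppliance", "10.0.3.4"),  # firewall's private IP
    ("0.0.0.0/0", "Internet", None),                  # default route
]

def next_hop(dest_ip: str) -> tuple[str, str, object]:
    """Pick the most specific matching route, as Azure does when a UDR
    overrides the system routes applied to a subnet."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes if dest in ipaddress.ip_network(r[0])]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

print(next_hop("10.0.2.5")[1])  # VirtualAppliance
```

Because the /16 is more specific than 0.0.0.0/0, inter-subnet traffic is forced through the firewall while internet-bound traffic still takes the default route.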
Option breakdown
- Application security groups (ASGs) ❌ Used for grouping VMs for security policies; do not control routing.
- Azure Virtual WAN ❌ Provides hub-and-spoke networking and global connectivity; overkill for controlling inter-subnet traffic in a single VNet.
- NAT gateways ❌ Provide outbound internet connectivity; do not inspect or route internal traffic.
- User-defined routes (UDRs) ✅ Control next-hop routing between subnets and can route traffic via the firewall appliance.
Summary Table
| Requirement | Solution |
|---|---|
| Force traffic between subnets through firewall | User Defined Routes (UDRs) |
| Group VMs for policies | ASGs |
| Outbound internet NAT | NAT gateways |
| Global network hub | Azure Virtual WAN |
✅ Final Answer:
User defined routes (UDRs)
Question 22
Your company has five branch offices on five continents.
You plan to connect each branch office to the closest Azure region. All the branch offices will use virtual machines in the same region only.
You need to design a virtual network infrastructure for the company. The solution must ensure that Remote Desktop connections to virtual machines can be protected by using Azure Bastion.
What is the minimum number of virtual networks that must be included in the design?
- 1
- 5
- 6
- 10
The correct answer is:
✅ 5
Why this is correct
Your scenario:
- Five branch offices on five continents
- Each branch office connects to the closest Azure region
- VMs stay in the same region as the branch
- Requirement: RDP access must be protected by Azure Bastion
Azure rules:
- Virtual Networks (VNets) are regional → you need at least one VNet per region where VMs are deployed.
- Azure Bastion is deployed per VNet → it provides secure RDP/SSH access to all VMs in that VNet.
- Multiple branch offices do not need separate VNets if their VMs are in the same region.
Implication:
- Each branch office uses VMs in a different Azure region → requires 1 VNet per region
- There are 5 branch offices in 5 regions → minimum 5 VNets
One VNet per region satisfies both VM placement and Azure Bastion access requirements.
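As a sketch of the per-region layout, the loop below creates one VNet, an `AzureBastionSubnet` (minimum /26), and a Bastion host per region. The five region names, resource group names, and address prefixes are illustrative assumptions, not part of the question.

```shell
# One VNet + Bastion per region; address spaces may repeat because the
# VNets are not peered with each other.
for region in eastus westeurope southeastasia australiaeast brazilsouth; do
  az group create -n "rg-net-$region" -l "$region"

  az network vnet create -g "rg-net-$region" -n "vnet-$region" -l "$region" \
    --address-prefix 10.0.0.0/16 \
    --subnet-name workloads --subnet-prefix 10.0.1.0/24

  # Bastion requires a dedicated subnet named exactly AzureBastionSubnet.
  az network vnet subnet create -g "rg-net-$region" --vnet-name "vnet-$region" \
    -n AzureBastionSubnet --address-prefix 10.0.0.0/26

  az network public-ip create -g "rg-net-$region" -n "pip-bastion-$region" \
    -l "$region" --sku Standard

  az network bastion create -g "rg-net-$region" -n "bastion-$region" -l "$region" \
    --vnet-name "vnet-$region" --public-ip-address "pip-bastion-$region"
done
```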
Option breakdown
| Option | Correct? | Reason |
|---|---|---|
| 1 | ❌ | VNets are regional; cannot use 1 VNet for 5 regions |
| 5 | ✅ | 1 VNet per region (5 total) |
| 6 | ❌ | Over-provisioning; not needed |
| 10 | ❌ | Over-provisioning; not needed |
Summary
- Minimum VNets = 1 per Azure region hosting VMs
- Bastion is deployed per VNet → ensures secure RDP/SSH access
✅ Final Answer:
5
Question 23
You plan to migrate an on-premises Apache Spark deployment to a managed Apache Spark service in Azure. To which Azure service can you migrate the on-premises deployment?
- Azure Analysis Services
- Azure Data Explorer
- Azure Databricks
- Azure Stream Analytics
The correct answer is:
✅ Azure Databricks
Why this is correct
Your scenario:
- On-premises Apache Spark deployment
- Goal: migrate to a managed Spark service in Azure
Azure Databricks is:
- A fully managed Apache Spark platform
- Provides big data analytics, AI/ML workflows, and ETL processing
- Compatible with existing Spark jobs and libraries
- Integrates with Azure Storage, Data Lake, SQL, and Synapse
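Because Databricks runs standard Spark, an existing job can often be resubmitted with little change. The sketch below assumes the legacy Databricks CLI is installed and authenticated; the job name, cluster sizing, runtime version, and main class are illustrative assumptions.

```shell
# Hypothetical sketch: register an existing Spark JAR job as a Databricks job.
cat > job.json <<'EOF'
{
  "name": "migrated-etl-job",
  "new_cluster": {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2
  },
  "spark_jar_task": {
    "main_class_name": "com.example.etl.Main"
  }
}
EOF

databricks jobs create --json-file job.json
```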
Option breakdown
- Azure Analysis Services ❌
  - Managed OLAP (multidimensional) analytics, not Spark
- Azure Data Explorer ❌
  - Optimized for log and telemetry analytics, not general Spark workloads
- Azure Databricks ✅
  - Managed Spark service, suitable for migration from on-premises Spark
- Azure Stream Analytics ❌
  - Real-time streaming analytics service; not a general Spark platform
Summary Table
| Service | Spark-compatible | Use Case |
|---|---|---|
| Azure Analysis Services | ❌ | OLAP analytics |
| Azure Data Explorer | ❌ | Log/telemetry analytics |
| Azure Databricks | ✅ | Managed Spark cluster, ETL, ML, big data |
| Azure Stream Analytics | ❌ | Real-time streaming |
✅ Final Answer:
Azure Databricks
Question 24
You are designing a solution to analyze data ingested from IoT devices into Azure IoT Hub. The data will be analyzed for anomalous conditions, and then the data will be visualized in Power BI in near real-time. Which service should you recommend to analyze the data from IoT Hub?
- Azure Data Factory
- Azure Data Lake
- Azure Event Hubs
- Azure Stream Analytics
The correct answer is:
✅ Azure Stream Analytics
Why this is correct
Your scenario:
- IoT devices send data to Azure IoT Hub
- Data must be analyzed for anomalies
- Data must be visualized in Power BI in near real-time
Azure Stream Analytics is ideal because:
- Real-time stream processing service
- Can ingest data directly from IoT Hub
- Detects anomalies, filters, aggregates, and transforms data in near real-time
- Can output results directly to Power BI, Azure SQL, Blob Storage, etc.
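The anomaly-detection step can be expressed in the Stream Analytics query language, which includes a built-in `AnomalyDetection_SpikeAndDip` function. The sketch below stores the query in a variable and registers it via the Azure CLI; the job, input, and output names are illustrative, and the `transformation create` command assumes the `stream-analytics` CLI extension is installed.

```shell
# Hypothetical SAQL query: score each device's temperature stream for
# spikes/dips over a 120-second sliding window, then write to Power BI.
read -r -d '' QUERY <<'EOF'
SELECT
    deviceId,
    temperature,
    AnomalyDetection_SpikeAndDip(temperature, 95, 120, 'spikesanddips')
        OVER (PARTITION BY deviceId LIMIT DURATION(second, 120)) AS scores
INTO [powerbi-output]
FROM [iothub-input] TIMESTAMP BY eventTime
EOF

az stream-analytics transformation create -g rg-iot --job-name asa-anomaly \
  -n Transformation --streaming-units 1 --saql "$QUERY"
```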
Option breakdown
- Azure Data Factory ❌
  - Orchestrates ETL and batch workflows
  - Not optimized for real-time stream processing
- Azure Data Lake ❌
  - Storage for big data analytics
  - Does not analyze or process streams in real time
- Azure Event Hubs ❌
  - Event ingestion and buffering
  - Does not process or analyze data
- Azure Stream Analytics ✅
  - Designed for real-time analytics of streaming data
  - Direct integration with IoT Hub and Power BI
Summary Table
| Service | Real-time analytics | IoT Hub integration | Power BI output |
|---|---|---|---|
| Azure Data Factory | ❌ | ❌ | ❌ |
| Azure Data Lake | ❌ | ❌ | ❌ |
| Azure Event Hubs | ❌ | ✅ | ❌ |
| Azure Stream Analytics | ✅ | ✅ | ✅ |
✅ Final Answer:
Azure Stream Analytics
Question 25
You need to design a data storage solution that meets the following requirements:
- Support JSON and CSV files.
- Provide a hierarchical namespace.
- Support Apache Hadoop Distributed File System (HDFS) as the data access layer.
Which data storage solution should you recommend?
- Azure Blob Storage
- Azure Cosmos DB for Apache Gremlin
- Azure Data Lake
- Azure SQL Database
The correct answer is:
✅ Azure Data Lake
Why this is correct
Your requirements:
- Support JSON and CSV files → unstructured/semi-structured data
- Provide a hierarchical namespace → folder-like structure for large datasets
- Supports HDFS as the data access layer → integration with Hadoop/Spark
Azure Data Lake Storage Gen2 (built on top of Azure Blob Storage) is designed for:
- Big data analytics workloads
- Hierarchical namespace → directories and files
- HDFS compatibility → can be accessed by Hadoop, Spark, and other analytics frameworks
- Storing JSON, CSV, Parquet, and other file formats
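In practice, Data Lake Storage Gen2 is a StorageV2 account created with the hierarchical namespace enabled. A minimal sketch, assuming illustrative resource and account names:

```shell
# Enabling the hierarchical namespace (--hns) on a StorageV2 account is
# what turns Blob Storage into ADLS Gen2.
az storage account create -g rg-data -n stdatalake01 -l eastus \
  --kind StorageV2 --sku Standard_LRS --hns true

# Hadoop and Spark then address directories and files through the
# HDFS-compatible ABFS driver, e.g. (hypothetical path):
#   abfss://raw@stdatalake01.dfs.core.windows.net/events/2024/devices.csv
```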
Option breakdown
- Azure Blob Storage ❌
  - Stores unstructured data
  - Does not provide a hierarchical namespace by default (without Data Lake Gen2 features)
- Azure Cosmos DB for Apache Gremlin ❌
  - Graph database for connected data
  - Not designed for large files, JSON/CSV storage, or HDFS
- Azure Data Lake ✅
  - Fully meets all requirements: JSON/CSV, hierarchical namespace, HDFS-compatible
- Azure SQL Database ❌
  - Relational database; not optimized for file storage or hierarchical big data
Summary Table
| Requirement | Azure Data Lake | Azure Blob | Cosmos DB (Gremlin) | Azure SQL DB |
|---|---|---|---|---|
| JSON/CSV support | ✅ | ✅ | ✅ (limited) | ✅ |
| Hierarchical namespace | ✅ | ❌ | ❌ | ❌ |
| HDFS support | ✅ | ❌ | ❌ | ❌ |
✅ Final Answer:
Azure Data Lake