Most enterprise WAN decisions look straightforward on paper. Replace expensive MPLS with something cheaper and more flexible. Add cloud breakout. Simplify branch operations. In practice, the decisions that sit underneath those objectives — topology, transport mix, underlay design, management model — determine whether the deployment delivers on its promise or just shifts complexity around.
This article covers the architecture, performance, and design considerations that matter when evaluating managed SD-WAN for a distributed enterprise environment.
What managed SD-WAN actually means
The control plane sits in software, not hardware. That means a network policy set in a central console propagates to every edge device in the estate within minutes — and traffic is rerouted around failures without a technician touching anything. The physical transport underneath — MPLS, broadband, fiber, 4G/5G — becomes interchangeable. The platform decides which path gets which traffic, based on real-time conditions and defined priorities.
Managed SD-WAN means a service provider takes responsibility for designing, deploying, monitoring, and operating that infrastructure on your behalf. You retain visibility and policy input; the provider handles day-to-day operations, SLA enforcement, troubleshooting, and performance tuning.
According to Enterprise Management Associates research, more than 66% of enterprises were consuming SD-WAN as a managed service by 2023, up from 62% in 2020. Network assurance was the primary driver — organizations want defined SLAs and a single accountable party when performance degrades.
Architecture models
There are four main SD-WAN architecture patterns. Each makes different trade-offs between performance, complexity, cost, and cloud access.
1. Hub-and-spoke
Branch locations connect to one or more regional hubs, which handle routing, security inspection, and cloud access on their behalf. This is the most common model for organizations migrating from traditional MPLS, because it mirrors existing topology. Security controls are centralized, policy enforcement is consistent, and network management is simplified.
The limitation is latency. Traffic from a branch to a cloud application still transits the hub before breaking out to the internet, adding round-trip time. For latency-sensitive applications — voice, video, real-time collaboration — this matters across geographically distributed deployments. Hub-and-spoke also creates a concentration risk: hub failure or congestion affects all dependent branches.
2. Direct internet access (local breakout)
Branch traffic breaks out directly to the internet at each site, rather than backhauling to a central point. This cuts latency for cloud and SaaS applications: for Microsoft 365, Salesforce, or similar workloads, direct breakout can substantially reduce application response times compared to backhauled architectures.
The operational challenge is security. Each internet breakout point is a potential attack surface. This model requires distributed security controls — either cloud-delivered security through a SASE framework, or a capable security stack at the edge device. Organizations that have consolidated security at central sites need to rethink that model when shifting to local breakout.
3. PoP-based (cloud-delivered) architecture
SD-WAN controllers and gateways are hosted in Points of Presence distributed across cloud regions. Branch traffic connects to the nearest PoP, which handles routing, traffic steering, and cloud onramp. This reduces the distance traffic travels and improves access to cloud resources.
Performance depends heavily on the density and quality of the PoP network. Providers with thin PoP coverage in certain regions may deliver worse performance than a well-designed hub-and-spoke deployment. Geographic coverage matters as much as the architecture itself.
4. Hybrid and multi-transport
Most enterprise deployments combine approaches: MPLS retained for sites where guaranteed performance is non-negotiable, broadband or fiber as primary transport for the majority of locations, 4G/5G as a backup or for sites where fixed connectivity is impractical. The SD-WAN overlay manages path selection across all of these simultaneously, shifting traffic based on real-time link quality.
This is the most common architecture for enterprises with large, geographically diverse site estates. It preserves existing MPLS investments where needed while capturing the cost and flexibility benefits of internet transport everywhere else.
Performance considerations
SD-WAN performance is not a single metric. The meaningful measures are application-level: what does a user at a branch office actually experience when running a latency-sensitive workload?
Application-aware routing
Modern SD-WAN platforms route traffic based on application type, not just destination IP. Voice and video calls are prioritized on the lowest-latency path. Large file transfers use available capacity without competing with real-time traffic. Policies define behavior; the platform enforces them continuously.
The quality of application-aware routing depends on the platform's ability to classify traffic accurately and react quickly when link conditions change.
Platforms that measure packet loss, latency, and jitter in real time — and reroute within milliseconds of degradation — maintain application performance during network events that would cause visible disruption on a traditional WAN.
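The selection logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the class names, SLA thresholds, and tie-breaking rule are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float   # measured one-way latency
    loss_pct: float     # measured packet loss percentage
    jitter_ms: float    # inter-packet delay variation

# Per-application-class SLA thresholds (illustrative values)
POLICY = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30},
    "bulk":  {"latency_ms": 500, "loss_pct": 5.0, "jitter_ms": 100},
}

def meets_sla(path: PathMetrics, limits: dict) -> bool:
    return (path.latency_ms <= limits["latency_ms"]
            and path.loss_pct <= limits["loss_pct"]
            and path.jitter_ms <= limits["jitter_ms"])

def select_path(app_class: str, paths: list[PathMetrics]) -> PathMetrics:
    """Pick the lowest-latency path that meets the class SLA; if none
    comply, fall back to the least-latent path rather than drop traffic."""
    eligible = [p for p in paths if meets_sla(p, POLICY[app_class])]
    return min(eligible or paths, key=lambda p: p.latency_ms)

paths = [
    PathMetrics("mpls", latency_ms=40, loss_pct=0.1, jitter_ms=5),
    PathMetrics("broadband", latency_ms=25, loss_pct=2.5, jitter_ms=20),
]
print(select_path("voice", paths).name)  # mpls: broadband breaches the voice loss limit
print(select_path("bulk", paths).name)   # broadband: both comply, it has lower latency
```

Re-running this selection as fresh metrics arrive is what allows the platform to move voice off a degrading broadband link while bulk traffic continues to use it.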
Transport diversity and failover
A single broadband circuit, however fast, is a single point of failure. Managed SD-WAN typically deploys at least two transport paths per site — often MPLS plus broadband, or dual broadband from different providers — and shifts traffic automatically when one path degrades or fails. Sub-second failover is achievable on mature platforms; failovers that users notice are a configuration problem, not an inherent limitation.
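One way sub-second failover is achieved is probe-driven detection with hysteresis: a path is declared down after several consecutive missed probes, and restored only after a longer run of successes, so one lost probe does not flap traffic. The thresholds and probe cadence below are illustrative assumptions, not a specific platform's defaults.

```python
# Sketch of probe-driven path health tracking with hysteresis.
class PathMonitor:
    def __init__(self, down_after: int = 3, up_after: int = 5):
        self.down_after = down_after  # consecutive failures to mark down
        self.up_after = up_after      # consecutive successes to restore
        self.fail_streak = 0
        self.ok_streak = 0
        self.up = True

    def record_probe(self, success: bool) -> bool:
        if success:
            self.ok_streak += 1
            self.fail_streak = 0
            if not self.up and self.ok_streak >= self.up_after:
                self.up = True
        else:
            self.fail_streak += 1
            self.ok_streak = 0
            if self.up and self.fail_streak >= self.down_after:
                self.up = False
        return self.up

mon = PathMonitor()
for probe in [True, False, False, False]:  # three consecutive losses
    mon.record_probe(probe)
print(mon.up)  # False: path marked down, traffic shifts to the backup
```

With a probe interval of, say, 100 ms, three missed probes means degradation is detected in roughly 300 ms — comfortably inside a sub-second failover budget.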
Quality of Service in hybrid environments
When MPLS sits alongside internet transport, QoS enforcement becomes more complex. MPLS carries its own QoS markings; internet transport does not. The SD-WAN overlay has to manage application prioritization across both, ensuring that sensitive workloads receive appropriate treatment regardless of which path carries them at any given moment.
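A common way to keep treatment consistent across transports is for the overlay to stamp the same per-class DSCP marking on every packet, whichever path carries it. The class names below are illustrative; the codepoints follow common RFC 4594 conventions (EF for voice, AF41 for video), but this is a sketch, not a vendor's marking scheme.

```python
# Illustrative per-class DSCP remarking table, applied uniformly by the
# overlay regardless of whether the packet rides MPLS or internet transport.
DSCP = {"voice": 46, "video": 34, "transactional": 26, "default": 0}

def mark(app_class: str) -> int:
    """Return the DSCP codepoint to stamp on packets of this class."""
    return DSCP.get(app_class, DSCP["default"])

def tos_byte(dscp: int) -> int:
    # DSCP occupies the upper six bits of the IP ToS / Traffic Class byte.
    return dscp << 2

print(mark("voice"), tos_byte(mark("voice")))  # 46 184 (EF)
print(mark("unknown-app"))                     # 0 (best effort by default)
```

The caveat remains: markings on internet transport are advisory once the packet leaves the edge, so the overlay still has to pair marking with path selection and local queueing to protect sensitive traffic.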
Voice-specific considerations
For enterprises running unified communications — Microsoft Teams Phone, Webex Calling, or similar platforms — SD-WAN performance directly affects call quality. Mean Opinion Score (MOS) is the standard measure. The parameters that affect it are jitter (variation in packet arrival time), packet loss, and one-way latency. A managed SD-WAN with proper QoS policies and direct internet breakout to Microsoft or Cisco network edges will consistently outperform an MPLS-backhauled architecture for cloud-delivered voice.
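The relationship between those parameters and MOS can be illustrated with a heavily simplified E-model in the spirit of ITU-T G.107: estimate an R-factor from latency, jitter, and loss, then map R to MOS. The coefficients below are rough approximations commonly used in monitoring tools, not a compliant G.107 implementation.

```python
# Simplified, non-normative E-model sketch: R-factor from path metrics,
# then the standard R-to-MOS mapping. Coefficients are approximations.
def r_factor(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    effective_latency = latency_ms + 2 * jitter_ms + 10  # common heuristic
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r -= 2.5 * loss_pct  # each 1% loss costs ~2.5 R points (approximation)
    return max(r, 0.0)

def mos(r: float) -> float:
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

healthy = mos(r_factor(latency_ms=40, jitter_ms=5, loss_pct=0.2))
congested = mos(r_factor(latency_ms=300, jitter_ms=40, loss_pct=5.0))
print(round(healthy, 2), round(congested, 2))
```

Even this crude model makes the design point concrete: jitter is weighted more heavily than raw latency, which is why a slightly slower but stable path often sounds better than a faster, jittery one.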
Design considerations
Underlay design
The SD-WAN overlay only performs as well as the underlay it runs on. Circuit selection, provider diversity, and local loop quality at each site have a direct bearing on what the overlay can achieve. Deploying a capable SD-WAN platform over a poorly designed underlay produces poor results. Part of a managed SD-WAN engagement should be underlay assessment and, where necessary, underlay remediation.
Security integration
SD-WAN is not a security product, but security decisions are inseparable from WAN architecture decisions. Local internet breakout requires a security response at the edge. Options include integrated next-generation firewalls within the SD-WAN platform, cloud-delivered security services (Secure Web Gateway, CASB, ZTNA) as part of a SASE model, or a combination. Organizations moving to SD-WAN should define their security model before finalizing their architecture — not after.
Zero Trust principles are increasingly standard in enterprise SD-WAN deployments. Identity-based policy enforcement means a user's access rights travel with them regardless of which site or network they connect from, rather than being determined by their physical location on the network.
Topology selection
Hub-and-spoke, full mesh, and partial mesh each have different implications for latency, redundancy, and management overhead. Hub-and-spoke is simpler to manage but creates latency for branch-to-cloud and branch-to-branch traffic. Full mesh eliminates transit latency but increases complexity at scale. Most enterprise deployments with 50+ sites use a regional hub model with local breakout for internet-bound traffic — capturing the simplicity of hub-and-spoke for data center traffic while avoiding the latency penalty for cloud applications.
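The scaling trade-off above is easy to quantify: a full mesh needs n(n-1)/2 tunnels, hub-and-spoke needs n-1, and a regional hub model sits between the two. The sketch below assumes, for illustration, sites spread evenly across regions with one hub per region and a small full mesh between hubs.

```python
# Tunnel counts per topology, as a function of site count.
def full_mesh_tunnels(sites: int) -> int:
    return sites * (sites - 1) // 2  # every site pairs with every other

def hub_spoke_tunnels(sites: int) -> int:
    return sites - 1  # each branch peers only with the hub

def regional_hub_tunnels(sites: int, regions: int) -> int:
    # Branches peer with their regional hub; hubs form a small full mesh.
    per_region = sites // regions
    return regions * (per_region - 1) + full_mesh_tunnels(regions)

for n in (10, 50, 200):
    print(n, full_mesh_tunnels(n), hub_spoke_tunnels(n),
          regional_hub_tunnels(n, regions=4))
```

At 200 sites, a full mesh means nearly 20,000 tunnels to build, monitor, and troubleshoot, versus roughly 200 in either hub model — which is why large estates rarely run full mesh end to end.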
Scale and zero-touch provisioning
For organizations with large branch estates, the operational cost of bringing up new sites matters as much as the architecture itself. Zero-touch provisioning — where a new edge device arrives at a branch, connects to the network, and self-configures based on centrally defined templates — is the difference between deploying ten sites a week and deploying ten sites a month. Evaluate and test this capability before committing to a platform or provider.
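The essence of zero-touch provisioning is matching a device identity to a pre-staged site record and rendering its configuration from a central template. Everything in this sketch — the template fields, inventory format, and controller name — is hypothetical, intended only to show the flow.

```python
# Illustrative ZTP flow: a new edge device reports its serial number; the
# controller matches it against pre-staged inventory and renders a config.
TEMPLATE = (
    "hostname {hostname}\n"
    "wan1 dhcp\n"
    "tunnel controller {controller}\n"
    "site-id {site_id}\n"
)

INVENTORY = {  # serial -> site record, staged centrally before shipment
    "SN12345": {"hostname": "branch-ldn-01", "site_id": 101},
}

def render_config(serial: str, controller: str = "ctrl.example.net") -> str:
    site = INVENTORY[serial]  # unknown serials would be quarantined instead
    return TEMPLATE.format(controller=controller, **site)

print(render_config("SN12345"))
```

Because the only site-specific inputs are the serial number and the staged record, a branch technician's job reduces to plugging the device in — the property that makes ten sites a week feasible.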
Observability and SLA management
A managed SD-WAN that does not provide clear, accessible reporting on link quality, application performance, and SLA compliance is difficult to hold accountable. Define what reporting you need before selecting a provider. Application-level performance data — not just interface statistics — should be part of the service. The ability to drill down from a reported performance issue to the specific site, transport path, and time period is what separates operational monitoring from reporting theater.
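Defining reporting up front also means defining how compliance is computed. A minimal sketch, assuming a per-sample latency threshold and a 99.5% monthly target (both illustrative figures, not any provider's contract terms):

```python
# Illustrative SLA compliance check over per-minute latency samples.
def sla_compliance(samples_ms: list[float], threshold_ms: float = 100.0) -> float:
    """Return the percentage of samples at or under the latency threshold."""
    within = sum(1 for s in samples_ms if s <= threshold_ms)
    return 100.0 * within / len(samples_ms)

samples = [42.0] * 995 + [180.0] * 5  # 1000 samples, five breaches
pct = sla_compliance(samples)
print(f"{pct:.1f}% compliant, SLA met: {pct >= 99.5}")
```

The drill-down capability described above amounts to keeping the raw samples, tagged by site and transport path, so a failing month can be traced to the specific link and time window responsible rather than reported as a single aggregate number.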
SASE readiness
SASE (Secure Access Service Edge) converges SD-WAN with cloud-delivered security services under a single architecture. Most organizations are not deploying SASE from day one, but architectural decisions made today should not foreclose the option. SD-WAN platform choices, security vendor relationships, and cloud connectivity design all affect how straightforwardly an organization can move toward SASE when ready.
Managed vs. co-managed vs. DIY
Most enterprises choose a hybrid operations model: a managed service provider handles infrastructure, monitoring, and operations; the IT team retains visibility and control over policy. According to EMA research, approximately 58% of organizations prefer this hybrid approach over full outsourcing or fully in-house management.
Full outsourcing suits organizations whose IT teams lack WAN expertise or capacity. It delivers simplicity but requires high trust in the provider's operational quality. DIY suits organizations with large, expert networking teams and non-standard requirements that a managed service cannot accommodate.
The co-managed model requires clearly defined boundaries: what the provider owns, what the IT team owns, and how incidents are escalated and resolved. Ambiguity here creates gaps. The most common failure mode in co-managed arrangements is an incident that falls between provider and customer responsibility, where each assumes the other is handling it.
How Pure IP approaches managed WAN
Pure IP manages the underlay as well as the overlay — circuit selection, provider relationships, and fault management sit with us, not split across your team and a separate carrier. One contract, one support relationship, one SLA.
For organizations running Microsoft Teams Phone or other cloud-delivered voice platforms, WAN performance and voice quality are directly connected. We design and manage networks with that dependency in mind, ensuring the transport layer supports the application performance that cloud voice requires.
If you are evaluating a WAN refresh or considering a move to managed SD-WAN, talk to our team. We can assess your current environment, identify where your underlay or architecture is creating application performance problems, and design a solution that addresses them.
Talk to the Pure IP team about managed SD-WAN