When thinking about the network, we need to assess against value, not just cost.
FinOps doesn’t, at this point, look at the network as a target for optimisation in the Public Cloud. But should it?
Given the benefits to be gained, the answer is clearly ‘yes!’
I’ll reference Azure, as that’s the Public Cloud of choice for most of the clients I work with. Taking the Well-Architected Framework and an Enterprise Landing Zone, the network architecture is hub and spoke, where all traffic to or from a spoke traverses the hub. It’s simple, scalable and effective. Yet you can end up paying for more network traffic flows than necessary.
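To make the charging mechanics concrete, here’s a minimal sketch of that baseline using the azure-mgmt-network Python SDK (Bicep or Terraform would do the same job). The resource names, region and 10.0.0.4 firewall address are all illustrative assumptions, not a client configuration:

```python
# Minimal sketch: baseline hub-and-spoke, where a user-defined route (UDR)
# on each spoke subnet sends all traffic via the hub appliance.
# Assumes the azure-identity and azure-mgmt-network packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.route_tables.begin_create_or_update(
    "rg-spoke-workload",         # resource group (illustrative)
    "rt-spoke-default-via-hub",  # route table name (illustrative)
    {
        "location": "uksouth",
        "routes": [{
            "name": "default-via-hub",
            "address_prefix": "0.0.0.0/0",
            "next_hop_type": "VirtualAppliance",
            "next_hop_ip_address": "10.0.0.4",  # hub firewall, illustrative
        }],
    },
).result()
# Once associated with each spoke subnet, every inter-spoke flow pays four
# charged legs: spoke egress, hub ingress, hub egress, destination ingress.
```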
For example, all management, monitoring and logging traffic is charged four times: egress from the originating spoke, ingress to the hub, egress from the hub, and ingress to the Management spoke. Alternatively, create a pattern where all such traffic (and only this traffic) goes directly from the originating spoke to the Management spoke, as sketched below. A single pattern applied to all spokes, and the cost of this traffic is halved: two charged legs instead of four.
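A minimal sketch of that direct pattern, again with the azure-mgmt-network SDK; all names are illustrative, and a mirror peering must also be created on the Management VNet side for traffic to flow:

```python
# Minimal sketch: peer a workload spoke directly to the Management spoke so
# management/monitoring/logging traffic takes one hop instead of transiting
# the hub. Assumes azure-identity and azure-mgmt-network; names illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), sub)

mgmt_vnet_id = (
    f"/subscriptions/{sub}/resourceGroups/rg-management"
    "/providers/Microsoft.Network/virtualNetworks/vnet-management"
)

client.virtual_network_peerings.begin_create_or_update(
    "rg-spoke-workload",      # resource group of the workload spoke
    "vnet-spoke-workload",    # workload spoke VNet
    "peer-workload-to-mgmt",  # peering name
    {
        "remote_virtual_network": {"id": mgmt_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": False,  # carry only direct traffic
        "use_remote_gateways": False,
    },
).result()
```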
A second example – Azure Virtual WAN (vWAN). A recent exercise we conducted for a client reduced network charges by just shy of 30% by moving to the vWAN service.
A final example is a client that was backing up Private Cloud data to Azure. In the hub and spoke architecture, all those terabytes of backup traffic would incur hub egress and spoke ingress charges. By terminating the backup traffic directly in the relevant spoke, however, the client avoided those unnecessary network charges.
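As a rough illustration of what that avoids, using assumed per-GB rates (peering charges vary by region; these figures are illustrative, not quoted Azure prices or client data):

```python
# Illustrative arithmetic only: rates and volumes are assumptions.
backup_tb_per_month = 50
gb_per_month = backup_tb_per_month * 1024

hub_egress_per_gb = 0.01     # assumed charge for traffic leaving the hub
spoke_ingress_per_gb = 0.01  # assumed charge for traffic entering the spoke

# Landing backups in the hub first adds a hub-egress + spoke-ingress leg.
avoidable = gb_per_month * (hub_egress_per_gb + spoke_ingress_per_gb)
print(f"Avoided by terminating directly in the spoke: ${avoidable:,.2f}/month")
```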
A hub and spoke network is the best architecture for Public Cloud. But it can be optimised to minimise traffic costs by adopting the right services, together with repeatable, predictable patterns for specific traffic types.
Within a Private Cloud, the network is driven primarily by required bandwidth. With high-density compute, the number of servers shrinks, but server uplinks now operate at 100Gbps or higher. The network is therefore optimised from a port-count perspective: fewer ports are required, albeit at higher speeds.
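A quick worked example makes the trade visible; all figures here are illustrative assumptions, not client data:

```python
# A high-density refresh replaces many low-speed uplinks with a few
# high-speed ones at the same aggregate bandwidth. Figures illustrative.
legacy_servers, legacy_uplinks, legacy_gbps = 200, 4, 10
dense_servers, dense_uplinks, dense_gbps = 40, 2, 100

legacy_ports = legacy_servers * legacy_uplinks  # 800 switch ports
dense_ports = dense_servers * dense_uplinks     # 80 switch ports

assert legacy_ports * legacy_gbps == dense_ports * dense_gbps  # 8 Tbps either way
print(f"Ports needed: {legacy_ports} -> {dense_ports} at the same aggregate bandwidth")
```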
Optimisation can also be achieved by consolidating SAN and LAN onto the same infrastructure using iSCSI. However, NVMe over Fabrics (NVMe-oF) provides tangible performance benefits that should be assessed. Depending on end-to-end support for the available NVMe-oF transports, a dedicated SAN could be the best option, as it will provide application performance benefits.
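The consolidation saving is easy to see in port counts; again, the numbers below are purely illustrative:

```python
# Illustrative only: converging storage onto the LAN removes a parallel fabric.
servers = 32
ethernet_ports = 2 * servers  # redundant LAN uplinks per server
fc_ports = 2 * servers        # redundant FC ports per server on a dedicated SAN

dedicated = ethernet_ports + fc_ports  # two fabrics to buy, power and operate
converged = ethernet_ports             # iSCSI (or NVMe/TCP) shares the LAN uplinks

print(f"Dedicated SAN + LAN: {dedicated} ports; converged: {converged} ports")
```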
Then there’s AI. Of course! AI architectures have very specific network requirements and, depending on scale, will require a dedicated network.
Further optimisation comes from adopting VXLAN fabrics, which have become the de facto architecture for data centre LANs. With a standard architecture, operations, support and visibility are streamlined; talent becomes easier to find; and vendors can provide enhanced features that simplify management, operations and visibility.
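Part of why VXLAN has won out is scale: its encapsulation header (RFC 7348) carries a 24-bit VXLAN Network Identifier (VNI), giving roughly 16 million segments against the 4,094 of classic VLANs. A minimal sketch of that 8-byte header:

```python
import struct

VXLAN_I_FLAG = 0x08  # 'I' bit set: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit VXLAN Network Identifier."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!I", VXLAN_I_FLAG << 24) + struct.pack("!I", vni << 8)

print(vxlan_header(5001).hex())  # '0800000000138900'
print(f"Segments available: {2**24:,} vs 4,094 classic VLANs")
```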
Adopting industry-standard technologies, in this case VXLAN, gives the business the power of choice and the ease to move vendors if required. However, the devil is in the detail. For example, the intelligent buffering in Cisco Nexus switches has been proven to provide a tangible application performance benefit, which is a key reason trading houses we’re familiar with use them.
A key takeaway from any optimisation programme is to keep ‘value’ top of mind.
Yes, you can create patterns for all logging traffic to go directly from spoke networks to the Management spoke, but is the overall cost saving worth the effort required to achieve it?
Yes, you can consolidate Private Cloud SAN and LAN networks using iSCSI, but would the performance of a separate NVMe-oF SAN, running over Fibre Channel for example, provide substantial benefits?
Always assess against value and optimise for value, not just cost.
Want to understand more about how Natilik can help you on your FinOps and optimised cloud journey? Contact us today.
Nigel Pyne
Principal Architect, Natilik