Cloud Right, not Cloud First: Avoiding the pitfalls

Cloud – we’ve all heard of it, and most of us are already using it in some form. Office 365, Salesforce and MYOB are just some of the many Software as a Service (SaaS) products that are “in the cloud” today.

However, there is still some ambiguity over what cloud computing is, how Public Cloud, Private Cloud and Hybrid Cloud differ, and how secure it is.

We define cloud computing as having the following five attributes: (1) on-demand self-service, (2) broad network access, (3) resource pooling, (4) rapid elasticity, and (5) measured service (pay as you go) – across compute, network and storage.

Options in cloud compute have extended to include customer-dedicated solutions, such as HCI (Hyper-Converged Infrastructure), that are modular, elastic and relatively quick to set up – all characteristics of cloud compute – while also being dedicated and “in house”. However, customer-dedicated solutions may lack some other features of cloud computing, such as self-service portals. HCI is therefore categorised under the Private Cloud umbrella.

Looking at the abundance of solutions available to businesses today, choosing the right one can be complicated and daunting – but the alternative is worse: stagnation.

Cloud Right, not Cloud First, is therefore imperative. The days of jumping feet first into Public Cloud, whatever the circumstances, are behind us.

Cloud-First is flexing to Cloud Right. Instead of deploying solutions by blindly following a Cloud-First corporate policy, we now select solutions based on what offers the best overall outcome: Cloud Right. 

When you are considering your journey to the cloud, you need to look at your company’s short-, medium- and long-term requirements. Asking yourself the following questions is a great starting point:

  • Is your current platform urgently in need of a refresh? 
  • Do you develop or redevelop solutions in-house? 
  • Do you utilise SaaS offerings? 
  • Do you need or want to have dedicated hardware? 
  • Do you want managed infrastructure or do you want to do it yourself? 
  • How much effort do you want to put into managing your compute? 
  • Are you growing IT? 
  • Is a Capex or Opex billing model preferred? 
  • Are there any regulatory or compliance requirements that need to be met? 
  • What Disaster Recovery requirements exist? How long are backups required to be retained? How quick is it to recover from a backup?
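To show how answers to questions like these might steer a decision, here is a toy sketch that weighs a few of them towards a public, private or hybrid starting point. The specific signals, weights and thresholds are illustrative assumptions for demonstration only, not Oreta's actual assessment methodology.

```python
# Illustrative sketch only: scores answers to readiness questions like those
# above to suggest a starting cloud model. The signals and thresholds are
# assumptions for demonstration, not a real assessment tool.

def suggest_cloud_model(answers: dict) -> str:
    """Return a rough starting point: 'public', 'private' or 'hybrid'."""
    public_signals = 0
    private_signals = 0

    if answers.get("urgent_refresh"):      # ageing platform favours a quick public-cloud start
        public_signals += 1
    if answers.get("heavy_saas_use"):      # existing SaaS use eases public-cloud adoption
        public_signals += 1
    if answers.get("prefers_opex"):        # pay-as-you-go billing suits public cloud
        public_signals += 1
    if answers.get("dedicated_hardware"):  # dedicated kit points at private cloud / HCI
        private_signals += 1
    if answers.get("strict_compliance"):   # data-residency rules may require private control
        private_signals += 1

    if public_signals and private_signals:
        return "hybrid"
    return "public" if public_signals >= private_signals else "private"


print(suggest_cloud_model({
    "urgent_refresh": True,
    "heavy_saas_use": True,
    "dedicated_hardware": True,
}))  # mixed signals suggest a hybrid starting point
```

In practice the answers interact in far less linear ways, which is exactly why a structured workshop beats a scorecard.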

Sometimes finding the answers to these questions can be difficult. Too often, people have their blinkers on and cannot see the forest for the trees. Oreta can help. Oreta can assist your organisation before, during, and after your journey to the cloud.

During Phase 1, the discovery and assessment stage, Oreta conducts a ’Journey to Cloud’ workshop, where Oreta’s cloud architects use specialised analytical tools to help answer the questions above and tailor a plan that best meets your organisation’s needs.

A plan to proceed to the right cloud can then lead to a Cloud Blueprint – a build and first-mover process that gives you a quick step into the cloud with a low-risk application.

However, during Phase 2 you should consider the risks you may encounter, particularly if you have opted for a public cloud. A public cloud risks being open to the Internet – not just to you, but to the rest of the world. Two critical pillars stand together to protect your public cloud: Security and Network.

Security of your data in the cloud is paramount and complex. Companies run services from different clouds (e.g. Salesforce, Office 365 and IaaS compute), and different services have different security responsibility thresholds – for example, securing a SaaS service is different to securing a cloud SQL PaaS or IaaS compute, let alone when these come from different cloud service providers. To secure your cloud, you need to ensure your security policies are enforced consistently and corporate-wide – a big challenge. CASB (Cloud Access Security Broker) tools protect your SaaS services, and CSPM (Cloud Security Posture Management) tools protect your IaaS and PaaS public cloud services. It is vital to configure your organisation’s CASB and CSPM correctly from the start to ensure your compute services are as secure as you believe they are. Incorrect setup can give you a false sense of security and lead to catastrophic outcomes.
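To make the CSPM idea concrete, here is a toy illustration of the kind of misconfiguration check such tools automate: flagging firewall rules that expose sensitive ports to the whole internet. The rule format and port list are assumptions for demonstration, not any vendor's actual API.

```python
# A toy illustration of the kind of check a CSPM tool automates: flag
# firewall rules that open a sensitive port to the entire internet.
# The rule format and port list are illustrative assumptions only.

SENSITIVE_PORTS = {22, 3389, 1433, 3306}  # SSH, RDP, SQL Server, MySQL

def risky_rules(rules):
    """Return rules that open a sensitive port to 0.0.0.0/0."""
    return [
        r for r in rules
        if r["source"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS
    ]

rules = [
    {"name": "web",  "port": 443,  "source": "0.0.0.0/0"},   # fine: public HTTPS
    {"name": "rdp",  "port": 3389, "source": "0.0.0.0/0"},   # risky: RDP open to the world
    {"name": "mgmt", "port": 22,   "source": "10.0.0.0/8"},  # fine: internal only
]

for r in risky_rules(rules):
    print(f"ALERT: rule '{r['name']}' exposes port {r['port']} to the internet")
```

A real CSPM runs hundreds of such checks continuously across all your cloud accounts, which is why correct initial configuration matters so much.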

In recent years, the way networks are deployed has changed to meet companies’ new cloud compute requirements. Flexible, cheaper internet-based links that are software-defined (SD-WAN) have replaced traditional private Multiprotocol Label Switching (MPLS) networks. Whether the underlying links are provided by Telstra or the NBN, a secure network is still required to ensure only authorised people can connect to your systems – and SD-WAN delivers on these requirements. Oreta specialises in networks and SD-WAN.

So Oreta is the logical choice when you are seeking a service provider that can cater for all your future compute needs – cloud, network and security.

Oreta specialises in ensuring you adopt the cloud that is right for your organisation, connecting your organisation’s internal users to its compute over secure networks, and securing your cloud provided services from possible breaches.

Oreta’s top priority is always to do what is right for the customer.

Cloud Right, not Cloud First, with Oreta.


Accelerate your journey with HCI technologies

Written by Andrew Jones, Head of Delivery. ORETA.

Congratulations! You’ve decided to accelerate your cloud journey by adopting hyperconverged technologies. Understandably, the decision forms part of your hybrid cloud strategy and may have been a difficult one to make. But you did it. Now it’s time to hit the ground running.

In this blog, I will provide you with several insights into what to do now and recommendations on what you should consider when deploying HCI technologies. For reference purposes, I will refer to Cisco HyperFlex HCI, but please note the problem statements are similar across most HCI architectures.

Before you travel too far, make sure you engage a partner or similar organisation that has travelled the road before, taken the necessary detours and knows where the speed bumps are – they will tell you it’s not all smooth sailing.

Foundation Platform

  1. Selection
    Select a data centre provider that supports your SLA and business requirements. Consider appointing a broker of cloud services so that you can innovate and connect quickly to the ecosystem of cloud service providers. It is essential to have low latency secure connections. Equinix with its EQ Cloud Exchange has a great offering.
  2. Design and Sizing
    Traditionally, you were able to scale out storage and compute separately. With HCI, the storage, network and compute are tied to a node. Cisco offers a compute-only node, but storage always comes with compute – so how do you cater for growth? Decide on the node size required to support your requirements now and for the next 12 months, while retaining the flexibility to add RAM and storage modules when or if required. Note: for HCI, this must be balanced across all nodes.
  3. Licensing Requirements
    Consider your Hypervisor, OS, Application and Database licensing requirements which may be tied to your processors or nodes. The cost analysis becomes a balancing act between nodes, CPU, RAM, storage and licencing, which should not be underestimated.
  4. Replication Factor
    With storage, you have a new paradigm: resiliency is managed by replication across nodes. The recommended replication factor is 3 (RF3), which gives you the flexibility to drop nodes for patching while retaining resiliency. You may elect for a DR site to run RF2 as a cost saving. Also consider the deduplication and compression efficiencies you will get.
  5. Solid State Device (SSD)
    With the cost of SSD falling, you may prefer to store all your data on SSD, or run a hybrid with spinning disk. If you have VDI or demanding databases, then SSD is the answer.
  6. Rack size
    Ensure your rack size supports scaling out additional nodes. When finalising the sizing, remember that HCI has management overheads you need to take into account when determining usable CPU, RAM and storage.
  7. Storage options
    Consider partnering with a service provider for burst capacity or long-term archive storage. Telstra Vintage Storage is an excellent product for this, or leverage public cloud storage.
  8. Management Tools
    Will you use a traditional approach, with Hyper-V or VMware, or a cloud-native approach that can support Cisco’s HyperFlex Application Platform or Google’s Anthos? When deciding on the hypervisor and management tools, consider the skill sets of those responsible for managing the environment and what applications will need to be deployed on the platform.
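The sizing, replication-factor and overhead points above interact, so a back-of-envelope capacity model helps. The sketch below estimates usable cluster storage from raw capacity per node, a management-overhead reserve, the replication factor and a dedupe/compression ratio. All figures are illustrative assumptions, not Cisco sizing data.

```python
# Rough HCI capacity sketch tying together the sizing points above:
# raw storage per node, a management-overhead reserve, the replication
# factor, and dedupe/compression gains. Figures are illustrative only.

def usable_tb(nodes, raw_tb_per_node, replication_factor=3,
              overhead_fraction=0.10, dedupe_ratio=1.0):
    """Estimate usable TB across a balanced HCI cluster."""
    raw = nodes * raw_tb_per_node
    after_overhead = raw * (1 - overhead_fraction)   # platform/management reserve
    after_rf = after_overhead / replication_factor   # each block is stored RF times
    return after_rf * dedupe_ratio                   # dedupe/compression gain

# 4 nodes x 20 TB raw, RF3, 10% overhead, 1.5:1 dedupe/compression
print(round(usable_tb(4, 20, dedupe_ratio=1.5), 1))  # 36.0
```

Note how RF3 alone cuts usable capacity to roughly a third of raw, which is why the dedupe and compression efficiencies mentioned above matter so much to the cost analysis.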

Other foundational considerations:

The other foundational design requirements to be considered depend on where you are moving from. The following items may already be part of your criteria or may need to be added:

Network Connectivity

How will your users connect to the platform? Do you need additional core switches, copper or fibre cabling, or networking components such as MPLS, SD-WAN, Dark Fibre, Cloud Exchange, QoS, WAN optimisation, load balancers, DNS or public IP addresses?

Do you require physical or virtual appliances? What firewall services do you require – e.g. Palo Alto, Check Point, etc.?

Service Provider

Do you require DDoS protection or IPS/IDS? If so, you need to consider a service provider – e.g. Telstra.

DR solution

Do you require a DR solution? If so, what are your RPO (Recovery Point Objective) and RTO (Recovery Time Objective) requirements? These may affect your DR architecture.

Architecture

What replication technology and orchestration tools do you require – e.g. is it a mirrored HCI or replication to Azure? Your application and database architecture will also drive the solution. You may want to leverage the native HCI replication tools with a 5-minute RPO, or high-availability groups with active database architectures for near real-time RPO and RTO.

Alternatively, you may want to leverage the public cloud with near real-time replication using tools such as Veeam and ASR.
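Matching a replication option to an RPO target is simple arithmetic: the worst-case data loss of each option must fit inside the target. The sketch below follows the 5-minute native-HCI figure mentioned above; the other options and numbers are illustrative assumptions.

```python
# Quick sketch of matching replication options to an RPO target. The
# 5-minute native-HCI figure follows the text above; the other options
# and their worst-case numbers are illustrative assumptions.

REPLICATION_OPTIONS = {
    "native_hci_snapshots": 5 * 60,   # ~5 min worst-case data loss
    "availability_groups": 1,         # near real-time with active databases
    "nightly_backup": 24 * 3600,      # up to 24 h of data loss
}

def options_meeting_rpo(rpo_seconds):
    """Return replication options whose worst-case data loss fits the RPO."""
    return sorted(
        name for name, worst_case in REPLICATION_OPTIONS.items()
        if worst_case <= rpo_seconds
    )

print(options_meeting_rpo(10 * 60))  # options that lose at most 10 min of data
```

The same filter applies to RTO: each option also carries a recovery time, and the cheapest option that fits both targets usually wins.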

Back up

Do you need a backup solution – e.g. Cohesity or Veeam? You will need to consider the size of the data being backed up. The rate of change will determine whether a dedupe appliance, such as Cohesity, is necessary. These days, tools allow you to archive to low-cost blob storage in public clouds, such as Google and Azure, rather than to traditional tape. Alternatively, you can buy ‘Backup as a Service’ (BaaS) – e.g. Telstra BaaS has a 7-year backup product. However, if you change your backup service, it is important to consider how you will restore from previous archives.
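To see how data size and rate of change drive the backup decision, here is a back-of-envelope sizing sketch for a common scheme of weekly fulls plus daily incrementals over a retention window. The scheme and all figures are illustrative assumptions.

```python
# Back-of-envelope backup sizing sketch: weekly fulls plus six daily
# incrementals driven by the rate of change, kept for a retention window.
# The scheme and all figures are illustrative assumptions.

def backup_storage_tb(protected_tb, daily_change_rate,
                      retention_weeks, dedupe_ratio=1.0):
    """Estimate stored TB for weekly fulls + 6 daily incrementals per week."""
    full_per_week = protected_tb
    incrementals_per_week = protected_tb * daily_change_rate * 6
    raw = (full_per_week + incrementals_per_week) * retention_weeks
    return raw / dedupe_ratio

# 50 TB protected, 2% daily change, 4-week retention, 5:1 dedupe appliance
print(round(backup_storage_tb(50, 0.02, 4, dedupe_ratio=5), 1))  # 44.8
```

Run the same numbers without dedupe and the requirement is five times larger, which is exactly the trade-off that makes a dedupe appliance worth evaluating.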

Bill of Materials

Too often, the delivery lead time for the bill of materials is underestimated. You need to factor in the ‘random uncontrollable impacts’ that may occur. For example, COVID-19 delayed manufacturing in China: average manufacturing times for certain products increased from several days to 2-6 weeks.

Many organisations are looking to move up the stack to focus on high-value activities that innovate for the business. HCI simplifies the stack for management and enables this to happen. However, before you deploy HCI, make sure you ask yourself:

  • What support do you require?
  • Who will manage the foundational platform?
  • Do you need 24 x 7 or 8 x 5 support?

Oreta, in partnership with Cisco, can provide organisations with a managed service that wraps over their platform and provides a highly resilient private cloud.

Without leveraging a cloud provider, it can take a further 4-6 weeks to move up the stack: once the HCI kit arrives, you need to rack and stack, build the platform, configure and install the hypervisor, and harden and test before you can hand it over to support.

Next step – Migrating your workloads

Now that you have completed your HCI foundational design, and ordered and installed the platform, you are ready to migrate your workloads onto it.

Finally, if you are coming from on-premises infrastructure, you may have to consider:

  • Legacy architectures that may need to be transformed – e.g. RDM (Raw Device Mapping) for your databases
  • Windows OS versions or DB versions that need upgrading to support new licencing models for the platform, etc.
  • Public IP addresses, if you are utilising them.
  • Acceptable outage windows for your business so you can determine what migration approaches and tooling are the best fit.

Migration approaches and tooling deserve a blog of their own. Stay tuned.