Network architecture in the WFH world

Flexible working is touted as the way of the future, with managers who were once unsure of the benefits of working from home now experiencing for the first time just how well they and their teams are managing.

This has given many businesses reason to consider allowing staff to work flexibly (mixing up the work week with time at home and in the office), while others plan to do away with offices altogether, given the increased productivity they’ve experienced since early March, paired with the cost savings and the improved wellbeing of their staff.

However, in the rush to get everyone WFH-ready when Covid-19 hit, some businesses didn’t stop to consider their current and future networking needs.

Is your current network fit for purpose?

Many businesses had to improvise with their networking at the beginning of Covid-19 in Australia, so if you felt the strain in the early days of full-time WFH, you’re not alone.

And with a considerable proportion of the workforce expecting to work from home some (or all) of the time in the future, it makes sense to check in and ensure your network is suited to the increase in remotely located staff while still serving the business’ needs at any physical locations.

Optimising bandwidth, performance and application usage in all locations will be key, and while stopgap solutions may have done the job during the worst of the pandemic, scalability and security are also important considerations for the long term.

Need to find out whether your network architecture suits your current and future needs? We can help!

Including remote users in network monitoring

Monitoring is key, first in determining your networking needs and then, once the architecture has been built out, in ensuring it’s working optimally.

The vast number of people working outside the office has tested IT teams, especially those who have traditionally focussed on on-premises networks and have been in control of all aspects of the technology. Troubleshooting issues with an employee’s WiFi and personal devices is a new challenge, so being able to view the entire network, including employee endpoints, allows IT teams to troubleshoot issues before they become larger problems.

To do this, endpoint agents can be added to devices – be they company-provided or BYOD – to monitor how the device performs and how the network it is connected to is behaving.
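By way of a hedged illustration, the sketch below shows roughly the kind of thing such an agent measures: it samples TCP connection latency to a couple of endpoints and emits the result as JSON. It’s a minimal Python sketch using only the standard library, not any particular vendor’s agent, and the hostnames are hypothetical.

```python
import json
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time how long it takes to open a TCP connection, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

# Hypothetical targets an agent might probe: a SaaS app and a VPN gateway.
targets = ["example.com", "vpn.example.com"]

for host in targets:
    try:
        sample = {"host": host, "latency_ms": round(tcp_latency_ms(host), 1)}
    except OSError as exc:
        sample = {"host": host, "error": str(exc)}
    # A real agent would ship this to a central collector; we just print it.
    print(json.dumps(sample))
```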

Security-conscious users will be pleased to know that this form of monitoring looks only at things like WiFi speed and application connections, not the actual data contained within programs. But this does raise an interesting point about how privacy may be traded (in some cases) in return for more flexible working arrangements.

Balancing internal networking and external facing systems

For employees who are able to work at the business’s physical location, access to wired networks won’t need to change. But those who choose to work from other locations will need another way to access company networks.

Users on internal networks will be able to use company intranets and shared drives as usual, while remote workers are best served by a VPN (virtual private network) to connect to company networks, as it allows the user’s device to behave as it would in the office.

VPNs allow only trusted users to communicate through them, increasing your security even when some elements are out of your control, and they allow remote access for your IT team, which helps with solving technical issues from a distance.
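As a small, hedged example of what that remote access enables, the Python sketch below simply tests whether a host that exists only on the company network is reachable, which is a quick way to confirm the tunnel is up; the internal hostname and port are placeholders.

```python
import socket

# Hypothetical internal-only host: reachable only when the VPN tunnel is up.
INTERNAL_HOST = "intranet.corp.example"
PORT = 443

try:
    with socket.create_connection((INTERNAL_HOST, PORT), timeout=3):
        print("VPN tunnel appears to be up: internal host reachable.")
except OSError:
    print("Internal host unreachable: check that the VPN is connected.")
```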

If you’ve never set up a VPN before and need some advice, we’re here to help.

If you’re looking for a solution beyond VPNs, there are options such as VDI (virtual desktop infrastructure) that may suit your business. Users see a virtual desktop (which sits on a centralised server) with an array of applications they can use. The benefits include allowing users to customise their desktop, and as each virtual machine still acts separately, there are additional security benefits for businesses or individuals who deal with confidential information on a regular basis.

And of course, the prevalence of SaaS products such as Office 365, Salesforce and many others means that users can log in via a browser, so if you can ensure users have access to stable internet at home, they can work in much the same way as they would in the office.

Need a network strategy? Contact us today to discuss your individual needs.


Is an MSP necessary for SD-WAN?

Businesses that invest in SD-WAN reap many benefits – easy set-up and management, rapid rollout, cost optimisation, improved connectivity. However, there is one difficult decision businesses need to make when choosing an SD-WAN solution: who will manage and monitor the network infrastructure? Will the solution be self-managed or fully managed? This decision can be relatively easy for some businesses; for others, it will require more consideration and planning.

The decision should be made up front. Businesses need to ask themselves whether they have the internal capability to meet service level agreements (SLAs), handle hardware and software patching and updates, installation and configuration, and support the SD-WAN and underlay network connections. If a business does not have the required resources and skills in-house, it is strongly advised to have its network fully managed by a managed SD-WAN service provider.

Why should a business consider a managed SD-WAN service provider?

If your business has multiple branches and you would like an SD-WAN solution rolled out seamlessly – with guaranteed service level agreements (SLAs), no compatibility issues, reduced and controlled management overheads, and regular updates to infrastructure – then your business should consider a managed SD-WAN service.

Implementing and managing your SD-WAN solution internally often requires increasing your resources and lengthens the time before you start to see the benefits. It can be very costly, and there is a risk of staff turnover during the rollout of the project.

With managed SD-WAN services, the provider will supply all the hardware, software and networking infrastructure needed to deliver the right level of service – for example, connectivity for X number of branches – with appropriate service level agreements (SLAs) for uptime and performance. This will certainly help you keep implementation and management costs under control while achieving great outcomes.

Many service providers focus on delivering an end-to-end service – installing, troubleshooting, monitoring and optimising the SD-WAN units across each of your business’s workplaces – which in turn frees your IT team to focus on the applications that will generate business growth.

What if your business decides to implement SD-WAN internally?

If your business has a highly skilled IT team that is guaranteed to stay with you for the entire life of the project, can make informed decisions on architecture, and has a flexible installation timeframe and a budget that can absorb unforeseen costs, then completing the project internally might be the most appropriate way forward. However, to take full advantage of the technology and capitalise on the solution, you may still need to develop new skill sets.

A well-structured vendor-selection process and a clearly defined pilot are critical when choosing which SD-WAN solution (e.g. VeloCloud, Cisco Meraki, Cisco Viptela) best fits the business’s specific needs, and for continuing to educate the IT team. During the pilot, businesses should use the time to identify operational challenges, how the organisation will best adapt to the changes, and how the solution will address the real pain points (e.g. improving application performance).

Warning – if your business chooses to self-manage its SD-WAN and doesn’t have a strong internal networking capability, here’s a health warning. The benefits of SD-WAN are widely publicised: zero-touch set-up; centralised control and rapid reconfiguration; reduced engineering effort; easy optimisation of application traffic management, enabled by smart technology and a repository of ready-made rules and application policies; all supported by strong performance, visibility and analytics. While much of this may be true, businesses should still assess very carefully whether they have the skills and knowledge to self-manage end to end. It’s just not that simple, many things can still go wrong, and mistakes come at a cost. Is it worth the risk?

Be rewarded. Choose Oreta as your MSP

Value-added services

In addition to the benefits above, a managed SD-WAN service can offer businesses value-added conveniences beyond an end-to-end service, including advisory, assessment, design and implementation of an SD-WAN solution.

These services can help businesses manage every stage of the solution cycle: developing the strategy, vendor and solution selection and evaluation, architecture and design, and implementation. Each of these steps demands a highly skilled team and intensive effort, often beyond the capacity of a business’s internal IT team. Yet each step is necessary to ensure the right solution is implemented and that it delivers the outcome that best aligns with your business strategy.

Yes, we want to work with a managed SD-WAN service provider. How do we choose the right one?

  • Select a capable managed service provider (MSP) with whom you can work – is it the right fit?
  • Consider any gaps in the offer, which could influence the success of the solution.
  • Identify the key objectives of your SD-WAN project to guide your decisions on budget and cost control.
  • Define the responsibilities between your IT team and the MSP so that nothing falls through the cracks. Ensure that both parties have a clear understanding of the service, operational and commercial impact of these responsibilities.
  • Develop a view of your end-state network architecture – what are your business’s medium to long-term goals?
  • Maintain some competitive tension but leave things open for ongoing collaboration.

Be rewarded. Choose Oreta as your MSP


What’s important in a true hybrid cloud economy?

We’ll be taking the opportunity to shed some light on why Containers are the exciting new technology many enterprises see as their future at our upcoming Oreta x Google Cloud event in Melbourne on Tuesday 7th of May. You can read more information here.

But, in the meantime, here’s what we have to say about our experience with Containers, Kubernetes and Service Meshes, and how it all fits together. Some common questions we’ll address at our event, in this post and in follow-up posts include:

  • Are Virtual Machines and Containers the same thing?
  • What are Containers and why are they important?
  • Are Containers just another form of virtualisation?
  • Are Containers and Docker the same thing?
  • Kubernetes: what it is and what it is not
  • How Kubernetes can help you deploy and scale your containers and applications as container adoption evolves.
  • How Microservices and Service Mesh (Istio) pair together for Cloud-Native apps.

Nearly every cloud vendor has plans to evolve to a Container ecosystem.

At its Next ‘19 event in San Francisco, Google Cloud dedicated 46 sessions solely to Hybrid Cloud, Kubernetes, Containers and Service Meshes. Nearly every hardware, software or cloud vendor – including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, VMware, Red Hat, Cisco and IBM – is heavily invested, has tailored offerings, or has a road map for how its products will evolve to cater to the Container ecosystem.

If we look at the Cloud Native Computing Foundation (CNCF) landscape and follow the trends, Kubernetes, Containers and Service Meshes form the key ingredients of a true Hybrid Cloud solution. There are big bets and predictions that, as the Container ecosystem matures, Kubernetes will be widely adopted as “the de facto abstraction layer”, mainly due to its strength in abstracting away the hardware, making it easy to run applications (workloads) in a seamless, standardised way on any supported public cloud or on-prem infrastructure.

Why is all this important to you?

It’s no secret that infrastructure modernisation has entered the mainstream and that the adoption of Containers and Kubernetes is on the rise for Cloud-Native applications. While most organisations have heard about Cloud-Native and Containers or understand their importance, most struggle with how to get started.

VMware (founded in 1998) was slowly reshaping the IT infrastructure landscape, and in 2004 its ESXi Type 1 hypervisor opened up possibilities that had never been imagined. VM virtualisation was in full swing, and at that time many dismissed the virtualisation trend as a fad – but we all know how that turned out.

In 2006, AWS built on this VM virtualisation technology (Xen, and later KVM) to change how infrastructure was delivered and consumed, and that shift gave rise to true cloud computing; there were many doubters of this trend too.

In 2013, Docker Inc popularised the concept of Containers by taking the lightweight Container runtime and packaging it in a way that made adopting Containerisation easy. Even today there are many naysayers.

In 2014, Kubernetes was open-sourced by Google, and it remains one of the fastest-growing open-source projects.

In 2018, Istio emerged to provide “traffic management, service identity and security, policy enforcement and telemetry services” out of the box, delivering another level of abstraction. At Google Cloud Next ‘19, many sessions showed how Istio’s adoption is increasing developer productivity and observability.

All these advancements in hardware abstraction have led to a paradigm shift in the way IT infrastructure is delivered and consumed. These improvements in infrastructure delivery and modern forms of abstraction – from the early days of cloud computing (infrastructure as code) to Containers (Docker), Container orchestration (Kubernetes) and Service Meshes (Istio) – have fundamentally changed the way organisations, including cloud vendors, build and operate systems.

What are Containers and why are they important?

Since kicking off this series on Containers, Google Next ‘19 has come and gone, and boy, was it packed with some fantastic announcements, most supporting the move of Containers into the enterprise. We’ll be putting up a post-Google Next ‘19 summary shortly, going into detail on some of these announcements.

For now, let’s get back to helping you understand Containers at a fundamental level and preparing you to understand how and why they may benefit your organisation.

As always, if you want further information or have any questions, please reach out to us.

What are containers and why are they important?

These days when we hear the word Containers, most people tend to think Docker and picture the famous blue whale, while others imagine actual shipping containers.

Either way, you’re not wrong. Speaking of shipping containers, they’re the perfect analogy for what Containers are in a technology context.

The common analogy goes like this: shipping containers solved global trade issues by providing a standard unit of packaging that allows goods to be transported securely regardless of the cargo inside, whatever the mode of transport. Containers in IT do something very similar; they package applications into standard units that can be deployed anywhere.

Containers solve the “works on my machine” issues.

Shipping containers didn’t just cut costs; they changed the entire global trade landscape, allowing new players in the trade industry to emerge. The same can be said for Containers in IT, and more specifically Docker Containers. Containers don’t just cut costs by allowing efficient use of hardware – they change the entire landscape of how software is packaged and shipped.

While this may put things in perspective, it still leaves many open questions such as:

  • Are Virtual Machines and Containers the same thing?
  • Are Containers just another form of virtualisation?
  • What are ‘cgroups’ and ‘namespaces’?
  • How are Containers different from Virtual Machines?
  • Are Containers and Docker the same thing?
  • What is a Container suitable for?

Are Virtual Machines and Containers the same thing?

The term “virtual machine” was coined by Popek and Goldberg, and according to their original definition:

“a virtual machine is an efficient, isolated duplicate of a real computer machine.”

This means a physical computer (the host) can run several virtual computers (guests). These guests, also known as virtual machines, are duplicates or emulations of the host, and each can imitate different operating systems and hardware platforms. This is depicted in the diagram below, where you can see multiple guests on the same physical hardware.

Diagram: What is a VM

Virtual machines can be either Process Virtual Machines or System Virtual Machines.

Process Virtual Machines

Often referred to as Application Virtual Machines, these are built to provide an ideal platform for running an intermediate language. A good example is the Java Virtual Machine (JVM), which offers a mechanism to run Java programs as well as programs written in other languages that compile to Java bytecode.

System Virtual Machines

These represent the typical VM as we know it in the infrastructure world. System Virtual Machines (known colloquially as “VMs”) emulate separate guest operating systems.

A VM makes it possible to run many separate ‘computers’ on hardware that, in reality, is a single computer. The hypervisor, or VM manager, takes over CPU ring 0 (or “root mode” in newer CPUs) and intercepts all privileged calls made by a guest OS, creating the illusion that the guest OS has its own hardware.

A visual depiction is shown in the diagram below. At the base is the host computer, and immediately above it is the hypervisor. The hypervisor creates all the necessary virtualised hardware, such as virtual CPU, virtual memory, disk, network interfaces and other IO devices. The virtual machine is then packaged with all the relevant virtualised hardware, a guest kernel that enables communication with the virtual hardware, and a guest operating system that hosts the application.

Diagram: Layers of a VM

Each guest OS goes through the full process of bootstrapping, loading a kernel and so on, and security can be very tight: a guest OS can’t get full access to the host OS or other guests and mess things up.

The question then arises: are Containers just another form of virtualisation?

Yes – Containers are another form of virtualisation: OS-level virtualisation, also known as kernel virtualisation, whereby the kernel allows the existence of multiple isolated user-space instances called Containers. These Containers can look like real computers from the point of view of the programs running in them.

Containers make use of Linux kernel features called control groups (cgroups) and namespaces, which allow isolation and control of resource usage.

Diagram: Kernel virtualisation and containers

What are ‘cgroups’ and ‘namespaces’?

cgroups is a Linux kernel feature that makes it possible to control a group, or collection, of processes. It allows you to set resource usage limits on a group of processes – for example, how much CPU, memory or file system cache a group of processes can use.
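As a rough sketch of that interface (assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges; the group name demo is arbitrary), creating a group and capping its memory looks like this:

```python
import os

# Assumes a cgroup v2 hierarchy at /sys/fs/cgroup and root privileges.
CGROUP = "/sys/fs/cgroup/demo"
os.makedirs(CGROUP, exist_ok=True)  # making the directory creates the cgroup

# Cap memory usage for every process in the group at 256 MiB.
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))

# Move the current process into the group; the limit now applies to it
# and to any children it spawns.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```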

Diagram: LXC and Linux kernel internals

Namespaces are another Linux kernel feature and a fundamental aspect of Linux containers. While not technically part of cgroups, namespace isolation is a crucial concept: groups of processes are separated such that they cannot “see” resources in other groups.
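You can see this from user space by inspecting /proc. The short Python sketch below (Linux only) lists the namespaces the current process belongs to; run it on the host and again inside a container and the inode numbers will differ, which is the isolation at work.

```python
import os

# Each entry in /proc/<pid>/ns is a symlink such as "pid:[4026531836]".
# The number is the namespace's inode: processes showing equal numbers
# share that namespace.
for ns_type in ("pid", "net", "mnt", "uts", "ipc", "user"):
    print(ns_type, "->", os.readlink(f"/proc/self/ns/{ns_type}"))
```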

Now if Containers are just another form of virtualisation, how are Containers different from Virtual Machines?

Let’s elaborate a bit more on namespaces using a house as an example: imagine a house with many rooms, where each room represents a namespace.

A Linux system starts with a single namespace used by all processes. This is similar to a house with no rooms, where all the space is available to the people living in the house. Processes can create additional namespaces and be attached to different namespaces. Once a group of processes is wrapped in a namespace and controlled with cgroups, it is invisible to processes running in another namespace. Similarly, people can add new rooms and live in them – with the caveat that once you are in a room, you have no visibility of what takes place in the other rooms.

By way of example, if we mount a disk in namespace A, processes running in namespace B can’t see or access the contents of that disk. Similarly, processes in namespace A can’t access anything in memory allocated to namespace B. This provides isolation: processes in namespace A can’t see or talk to processes in namespace B.

This is how Containers work; each Container runs in its own namespace but uses exactly the same kernel as all the other Containers.

The isolation happens because the kernel knows which namespace a process was assigned, and during API calls it makes sure the process can only access resources in its own namespace.
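For a hands-on, hedged illustration: Python 3.12 added os.unshare(), a wrapper for the Linux unshare(2) syscall. The sketch below moves the current process into a new UTS namespace and changes the hostname there; it needs Linux, Python 3.12+ and root (or CAP_SYS_ADMIN), and the change is visible only inside the new namespace.

```python
import os
import socket

print("hostname before:", socket.gethostname())

# Move this process into a fresh UTS namespace (Linux, Python 3.12+, root).
os.unshare(os.CLONE_NEWUTS)

# The rename below is confined to the new namespace; the host's hostname
# is untouched, which is exactly the isolation described above.
socket.sethostname("inside-a-namespace")
print("hostname inside:", socket.gethostname())
```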

By now it should be clear that you don’t run a full-fledged OS in a Container the way you do in a VM. You can, however, run different distros of an OS, because Containers share the host’s kernel. Since all Containers share the same kernel, they are lightweight. Also, unlike with VMs, you don’t have to pre-allocate a significant chunk of resources (memory, CPU, etc.) to a Container, because you’re not running a new copy of an OS. This makes it possible to spin up significantly more Containers than VMs on one OS.

Diagram: Containers vs VMs

As we have seen, Virtual Machines package virtual hardware, application code and an entire operating system, whereas Containers package only the code and the essential libraries required to run it.

Virtual Machines also provide an abstraction layer above the hardware so that a running instance can execute on any server. By contrast, a Container is abstracted away from the underlying operating environment, enabling it to run just about anywhere: servers, laptops, private clouds, public clouds, etc.

These two characteristics of Containers free developers to focus on application development, knowing their apps will behave consistently regardless of where they are deployed.

While we are on the topic of Containers, another common question we hear is: are Containers and Docker the same thing?

No, they are not. Docker can mean three different things:

  • Docker Inc, the company that created Docker;
  • the container runtime and orchestration engine; or
  • the open-source Docker (Moby) project.

While Docker Inc popularised the concept of Containers by taking the lightweight Container runtime and packaging it in a way that made adopting Containerisation easy, you can run Containers without Docker. There are several alternatives; the most popular is LXC, which has subtle differences we won’t cover in this blog.

Considering how popular Docker is, here is a quick overview. The initial release of Docker consisted of two major components: the docker daemon and LXC. LXC provided the fundamental building blocks of containers that existed within the Linux kernel, including things like namespaces and cgroups. LXC was later replaced by libcontainer, which made Docker platform-agnostic. Docker then became more modular, broken down into smaller, more specialised tools, and the result was a pluggable architecture. The main components of Docker are:

runc: the reference implementation of the OCI container-runtime specification. It’s a lightweight wrapper for libcontainer, and its sole purpose in life is to create containers.

containerd: as part of the refactor, everything responsible for managing the lifecycle of containers was ripped out of the docker daemon and put into containerd. containerd manages the lifecycle of containers, including starting, stopping, pausing and deleting them, and sits between the docker daemon and runc at the OCI layer. It is also responsible for the management of images (push and pull).

shim: the implementation of daemonless containers, used to decouple running containers from the daemon. When a new container is created, containerd forks an instance of runc for that container and then hands it over to the shim.
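To see that chain from the client side, here’s a hedged sketch using the Docker SDK for Python (pip install docker; it assumes a running local Docker daemon). The SDK talks to the daemon, the daemon delegates to containerd, and containerd forks runc to create the container:

```python
import docker

client = docker.from_env()  # connect to the local docker daemon

# The daemon asks containerd to create the container; containerd forks
# runc to set up the namespaces and cgroups, then hands off to a shim.
output = client.containers.run(
    "alpine:3.19", ["echo", "hello from a container"], remove=True
)
print(output.decode())  # -> hello from a container
```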

Beyond the mechanics, Containers matter for several reasons:

  • Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they run. This decoupling allows container-based applications to be deployed quickly and consistently, regardless of whether the target environment is a private data centre, the public cloud, or even a developer’s laptop.
  • Containerisation provides a clean separation of concerns: developers focus on their application logic and dependencies, while IT operations teams focus on deployment and management without bothering with application details such as specific software versions and configurations.
  • The Container serves as a self-isolated unit that can run anywhere that supports it, and in each of these instances the container itself will be identical. You can be sure the container you built on your laptop will also run on the company’s servers.
  • The Container also acts as a standardised unit of work or compute. A common paradigm is for each container to run a single web server, a single shard of a database, or a single Spark worker. To scale an application, you simply scale the number of containers.
  • In this paradigm, each Container is given a fixed resource configuration (CPU, RAM, number of threads, etc.), and scaling the application requires scaling just the number of containers rather than the individual resource primitives. This provides a much easier abstraction for engineers when applications need to be scaled up or down (see the sketch after this list).
  • Containers also serve as a great tool for implementing the microservice architecture, where each microservice is just a set of co-operating containers. For example, a Redis microservice can be implemented with a single master container and multiple slave containers.
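Here is that fixed-resource paradigm as a sketch, again using the Docker SDK for Python with placeholder limits: every replica gets identical CPU and memory caps, and scaling the service is just a matter of how many replicas you run.

```python
import docker

client = docker.from_env()

# Each replica gets the same fixed resource configuration; to scale the
# service, we change only the number of replicas.
replicas = [
    client.containers.run(
        "redis:7",
        detach=True,
        mem_limit="256m",       # hard memory cap per container
        nano_cpus=500_000_000,  # half a CPU per container
    )
    for _ in range(3)
]

print("running replicas:", [c.short_id for c in replicas])

# Clean up the demo containers.
for c in replicas:
    c.remove(force=True)
```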
In the next post, we will go into detail on hybrid container deployments and how you can run across both on-premise and public cloud(s) to meet your end-user and customer demands. We will cover how to simplify management in these hybrid and multi-cloud deployments, even when your internal skill sets may still be developing.

Thanks for reading, we appreciate any feedback you may have on this topic and the content level in general. So if you have something you’d like to say, please feel free to drop us a line here.

Journey to Cloud; Assess, Migrate, Modernise

On September 18th – 19th, the Google Cloud Summit 2019 will be held at the International Convention Centre (ICC), Sydney. Will you be there? In 2018, Oreta showcased hybrid container deployment models using Google Kubernetes Engine, Istio and the Cisco Container Platform. In 2019, we are taking it one step further.

At this year’s Summit, we will take you on a Journey to Cloud using live demonstrations and real-life scenarios. You will learn about the power of CloudPhysics – a tool that provides extraordinary visibility into your environment and helps remove the blockers to successfully adopting Google Cloud.

You will be able to see first-hand the simplicity and flexibility of using various migration tools, and how virtual machine workloads can move from another cloud, or on-premise VMware based infrastructure, to the Google Cloud Platform (GCP).

Furthermore, you’ll have the opportunity to observe how services, such as Migrate for Compute Engine and Migrate for Anthos, work to move and convert your workloads from your on-premise environment directly into containers in Google Kubernetes Engine (GKE).

Diagram 1: Journey to Cloud – Assessing, Migrating and Modernising

In this blog, we provide a brief overview of what our focus points will be at the Summit and answer several important questions about migration, including:

  • Why do we need to understand the reasons for migration?
  • Why do we need to quantify the cost of a new cloud model?
  • What tools can we use to review the current and future state of every workload within our business, and to ensure we make the most suitable and cost-effective decisions about the models and features we require?
  • How can we continue to modernise our IT landscape?

Why do we need to migrate?

Google Cloud Platform (GCP) offers low costs and unique features that make migration very compelling. Unfortunately, many of the processes, calculations and how-tos are beyond the experience of most organisations. It is often not as simple as a “lift-and-shift” effort. To truly succeed at migration, we need to step back and clearly define and evaluate the purpose and process.


Diagram 2: Migration is not as simple as just a ‘Lift and Shift’.

Assessing: Discover / Assess / Plan / Design

Before beginning the assessment process, we need to understand ‘Why’ we are migrating and ask ourselves the following questions:

1. What do we think cloud computing can offer the business, that we do not have today?

2. Do we want to improve our flexibility, including the ability to expand and contract instantaneously without incurring increased capital expenditure for new resources?

3. Do we need particular services that cannot be implemented on-site, such as disaster recovery, security or application mobility?

4. Is the goal to differentiate the business to gain a competitive advantage, or to focus more on collaborative integrated solutions with preferred partners?

Before deciding which workloads should move to the cloud, we need to determine the purpose of the migration and what we want to achieve by this transformation. In fact, defining the purpose of the migration can be as critical as designing the actual platform.


Diagram 3: Defining the purpose of the migration can be as critical as designing the platform.

Migrating: Quantify the cost of the new cloud model.

After you have defined the purpose of the migration, you will typically want to quantify the cost implications of the new cloud model. The three main factors you will need to consider are:

1. Differing configurations, commitments, and choices of which workloads to move,

2. The process by which we select and exclude workloads in cloud migration,

3. Which workloads will migrate to the new cloud. Selecting all workloads in an environment is typically not a wise choice, especially as the effort to quantify costs per VM can be daunting.

Without a solid tool to review the current and future state of every workload, most organisations are not equipped to make the most cost-effective decisions on what models and features are best to use.

At Oreta, we use CloudPhysics to select and exclude workloads. The tool provides visibility into all workloads and includes tagging and exclusion functionality. CloudPhysics also gives you the ability to conduct rightsizing, which can add further savings to the process.


Diagram 4: CloudPhysics is a solid tool to review the current and future state of every workload.

Modernising: A top priority in the IT landscape

For businesses across the world, the ability to modernise their IT landscape is a top priority. As a Google premier partner, Oreta has been working with Google’s migration tools to support customers during their ‘Journey to Cloud’, and achieve their objectives in modernising.

At the Summit, we will demonstrate several of Google’s latest offerings, including:

1. “Migrate for Anthos”, which enables you to take VMs from on-prem, or from Google Compute Engine, and move them directly into containers running in Google Kubernetes Engine (GKE);


Diagram 5: ‘Migrate for Anthos’ is one of Google’s latest migration tools Oreta will be showcasing.

2. “Migrate for Compute Engine”, which allows you to take your applications running in VMs from on-prem to Google Compute Engine, and caters for:

– Fast and easy validation before you migrate,
– A safety net by fast on-premises rollback,
– Migration waves to execute bulk migration.


Diagram 6: ‘Migrate for Compute Engine’ is one of Google’s latest migration tools Oreta will be demonstrating.

The Google Cloud Summit in 2019 is set to be better than ever. We hope this short glimpse into what Oreta will be showcasing has inspired you to come along, learn about some of Google’s latest offerings and enjoy our live demonstrations.

If you haven’t registered to attend the Summit yet, it’s not too late. Simply register here.

If you are unable to attend the Summit but would like more information on the above, or any other service Oreta provides, please phone 13000 Oreta or contact us here.