
Sky Computing

Last Updated : 19 Feb, 2024

Sky computing is an emerging computing model in which resources from multiple cloud providers are leveraged to create large-scale distributed infrastructures. The name is a metaphor for a layer above cloud computing, since such dynamically provisioned distributed domains are built over several clouds.

It can be described as a management layer on top of an environment of clouds, offering variable computing capacity and storage resources with dynamic support for real-time demands. By laying a virtual site over distributed resources, and by combining the ability to trust remote sites with a trusted networking environment, it produces a highly elastic response to incoming requests from a seemingly infinite pool of accessible resources.

Architecture Of Sky Computing

The main idea is to create a model that enables intensive computing on cloud networks. This is achieved by enlarging the set of available resources in a way that overcomes problems such as elevated latency between nodes. The model must also span multiple cloud providers in order to combine their resources. To achieve this, there must be a structure capable of receiving instructions, processing them, and returning results across all the different underlying cloud systems. The architecture of sky computing is outlined below.

Each cloud provider exposes a specific API for interacting with its own resources. All of these can be aggregated by a middleware layer that controls and manages resources by translating every command into the corresponding provider API. Abstraction, from bottom to top, is the key to building a consistent system. The upper layer, sky computing, sits between the Infrastructure as a Service level below it and the Software as a Service layer above it. This is a critical layer, as it must be as comprehensive as possible in features and capabilities. The main focus here is HPC, but it must be possible to handle other applications too. Management features such as accounting and billing should be well developed, along with monitoring and job submission. The command-translation idea is sketched just below, and the individual components follow.
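
The following is a minimal Python sketch of that translation layer. The adapter classes and the launch_instance call are hypothetical names used only to illustrate how one generic command can be mapped onto each provider's own API.

from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Uniform interface the sky layer talks to."""
    @abstractmethod
    def launch_instance(self, image: str, size: str) -> str: ...

class AmazonAdapter(CloudAdapter):
    def launch_instance(self, image, size):
        # would translate into the EC2 call for launching an instance
        return f"ec2-instance({image},{size})"

class OpenNebulaAdapter(CloudAdapter):
    def launch_instance(self, image, size):
        # would translate into an OpenNebula template instantiation
        return f"one-vm({image},{size})"

def sky_launch(adapters, image, size):
    # the sky layer issues one command; each adapter speaks its provider's API
    return [adapter.launch_instance(image, size) for adapter in adapters]

print(sky_launch([AmazonAdapter(), OpenNebulaAdapter()], "ubuntu-22.04", "small"))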

(i) Accounting and Billing – When providing users with a complex structure like sky computing, it is crucial that resource usage is properly accounted for and billed. Accurate accounting enables monthly analysis of usage history and sound planning for future use. The saved data also makes it possible to bill registered users for the resources they consumed, private and public combined.
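
A minimal accounting-and-billing sketch, with invented hourly rates, could look like this:

from collections import defaultdict

RATES = {"private": 0.02, "public": 0.09}   # assumed $/CPU-hour, illustrative only

usage = defaultdict(float)                   # (user, pool) -> CPU-hours

def record(user, pool, cpu_hours):
    usage[(user, pool)] += cpu_hours

def monthly_bill(user):
    return sum(hours * RATES[pool]
               for (u, pool), hours in usage.items() if u == user)

record("alice", "private", 120)
record("alice", "public", 40)
print(f"alice owes ${monthly_bill('alice'):.2f}")   # 120*0.02 + 40*0.09 = 6.00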

(ii) Monitoring Software – Monitoring is also very important in cloud management. Probing the resources makes it possible to register and control resource usage and keep the system running healthily, for instance by detecting problems (out of memory, power off, overheated CPU, etc.) early enough for a quick resolution. Nagios is a monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes. It delivers awareness of the IT infrastructure's status and allows problems to be detected, repaired, and mitigated before they affect users.
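
As an illustration, a check written in the Nagios plugin style reports its result through exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL). The sketch below assumes the third-party psutil package is available on the probed host and uses arbitrary thresholds.

import sys
import psutil   # assumption: psutil is installed on the probed host

def check_memory(warn_pct=80, crit_pct=95):
    used = psutil.virtual_memory().percent
    if used >= crit_pct:
        print(f"CRITICAL - memory usage {used:.0f}%")
        return 2
    if used >= warn_pct:
        print(f"WARNING - memory usage {used:.0f}%")
        return 1
    print(f"OK - memory usage {used:.0f}%")
    return 0

if __name__ == "__main__":
    sys.exit(check_memory())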

(iii) Customizable Scheduler – A scheduler is a running daemon that coordinates virtual-machine requests and the available resources using different scheduling policies. It basically assigns each virtual machine (VM) a physical host and a storage area depending on resource availability, obeying pre-defined policies. Neither Deltacloud nor Aeolus has a scheduler; they perform the deployment and rely on the destination cloud's management. Some open-source projects available are Haizea and Cloud Scheduler.
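
The sketch below is not Haizea or Cloud Scheduler; it is a toy first-fit policy showing how a scheduler can assign each VM request to the first host with enough free capacity.

hosts = [
    {"name": "host-1", "free_cpu": 8, "free_ram_gb": 32},
    {"name": "host-2", "free_cpu": 4, "free_ram_gb": 16},
]

def schedule(vm_requests):
    placement = {}
    for vm in vm_requests:
        for host in hosts:
            if host["free_cpu"] >= vm["cpu"] and host["free_ram_gb"] >= vm["ram_gb"]:
                # first host with enough capacity wins; reserve the resources
                host["free_cpu"] -= vm["cpu"]
                host["free_ram_gb"] -= vm["ram_gb"]
                placement[vm["name"]] = host["name"]
                break
        else:
            placement[vm["name"]] = None   # no capacity: queue or reject
    return placement

print(schedule([{"name": "vm-a", "cpu": 6, "ram_gb": 24},
                {"name": "vm-b", "cpu": 4, "ram_gb": 8}]))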

(iv) Cloud Computing Middleware – Middleware is a very important and useful part of the value chain. It provides an abstraction that allows applications to be developed without being tied to a specific cloud vendor. The drawback is that the API operations are limited (each provider's own operation set is larger), which can translate into a loss of performance. The sky computing management layer relies on the lower layer's resources and interface, so it should be extremely stable and dependable. There are several ongoing middleware projects, such as the open-source Libcloud, Deltacloud, jclouds, or fog, while others, like Abiquo, Kaavo, or Enstratius, offer a more professional customized service and support in exchange for a monthly fee.
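
A hedged example using Apache Libcloud, the open-source middleware mentioned above: the same list_nodes() call works against different providers once a driver is constructed. The credentials, region, and OpenStack authentication arguments shown are placeholders, and exact constructor arguments vary by driver and Libcloud version.

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_all_nodes(accounts):
    """accounts: list of (Provider constant, positional args, keyword args)."""
    nodes = []
    for provider, args, kwargs in accounts:
        driver_cls = get_driver(provider)
        driver = driver_cls(*args, **kwargs)
        # list_nodes() is the same call regardless of the underlying provider API
        nodes.extend(driver.list_nodes())
    return nodes

if __name__ == "__main__":
    accounts = [
        (Provider.EC2, ("ACCESS_KEY_ID", "SECRET_KEY"), {"region": "us-east-1"}),
        (Provider.OPENSTACK, ("user", "password"),
         {"ex_force_auth_url": "https://keystone.example.org:5000",
          "ex_force_auth_version": "3.x_password"}),
    ]
    print([node.name for node in list_all_nodes(accounts)])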

(v) System Assembling – The hardest part is connecting all the pieces of the puzzle so that the whole works as intended. We managed to get Aeolus working with a hybrid infrastructure featuring Amazon and OpenNebula, with Haizea as a custom scheduler and Ganglia for monitoring. The structure was functional and stable; however, the lack of some important pieces reduced its flexibility and agility, despite occasional improvements from tweaks in fresh software updates. Assembling the system also calls for more unified decisions on the open-source cloud platform.

Characteristics Of Sky Computing

(i) Flexibility and Scalability – The sky can quickly scale up to thousands of servers or services to make resources available as they are needed. Most cloud providers are extremely reliable in providing their services, with many maintaining 99.99% uptime. The connection is always on, and as long as workers have an internet connection they can access the applications they need from practically anywhere. Some applications even work offline.

(ii) Security and Trust – In the past, site owners could not trust a remote resource because they had no control over its configuration. Now that clouds let users control remote resources, this concern is no longer an issue. By combining the ability to trust remote sites with a trusted networking environment, a virtual site can now exist over distributed resources.

(iii) Efficiency – Advances in processing, communication, and systems/middleware technologies have given rise to new paradigms and platforms for computing, which sky computing draws on to use the combined resources efficiently.

(iv) Flexible Costs – The costs of sky computing are much more flexible than with traditional methods. Companies only need to commission, and thus only pay for, server and infrastructure capacity as and when it is needed. More capacity can be provisioned for peak times and then de-provisioned when it is no longer needed, whereas traditional computing requires buying enough capacity for peak times and letting it sit idle the rest of the time.
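
A back-of-the-envelope comparison, with invented prices and instance counts, makes the difference concrete:

# All numbers are assumed for illustration: $0.10 per instance-hour, 100 instances
# needed for 4 peak hours a day, 20 instances for the remaining 20 hours, over 30 days.

hourly_rate = 0.10
peak_instances, baseline_instances = 100, 20
hours_peak, hours_off = 4 * 30, 20 * 30

always_peak = peak_instances * (hours_peak + hours_off) * hourly_rate
elastic = (peak_instances * hours_peak + baseline_instances * hours_off) * hourly_rate

print(f"provisioned for peak: ${always_peak:,.0f}")   # $7,200
print(f"elastic provisioning: ${elastic:,.0f}")       # $2,400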

(v) Resource Management – Sky computing facilitates the implementation of emerging technologies to deliver a better customer experience, with improved real-time interaction across business operations. This maximizes value for consumers and stakeholders, and sustainability can be achieved alongside increased profitability and competitiveness.

Sky Computing Providers

(i) Appliance Providers – Appliances can integrate the information using any configuration method from any appliance provider. The information in the templates is application specific and potentially differs from appliance to appliance, but the templates themselves are uniform, and any context broker can process them. Example – Amazon was the first major cloud provider, with Amazon Simple Storage Service (Amazon S3); others include Apple, Cisco, Citrix, IBM, Joyent, Google, Microsoft, Rackspace, and Salesforce.

(ii) Cloud Broker – An entity that manages the use, performance, and delivery of cloud services and intermediates the relationships between cloud providers and cloud consumers, with negotiation and configuration often done manually. Examples – AWS Marketplace from Amazon, Blue Wolf, CloudCompare, and CloudMore, which offers cloud services aggregation and activation through partners; the company serves the UK, Sweden, Finland, Denmark, Ireland, and more, with key partners including IBM, Microsoft, HP Autonomy, VMware, and Cryptozone.

(iii) SaaS (Software as a Service) – SaaS represents the largest cloud market and is still growing quickly. SaaS uses the web to deliver applications that are managed by a third-party vendor and whose interface is accessed on the client side. Examples – Google Apps, Salesforce, Workday, Concur, Citrix GoToMeeting, Cisco WebEx.

(iv) PaaS (Platform as a Service) – These services are used for application development while providing cloud components to software. PaaS makes the development, testing, and deployment of applications quick, simple, and cost-effective. With this technology, enterprise operations, or a third-party provider, can manage operating systems, servers, storage, networking, and the PaaS software itself. Examples – Engine Yard, Red Hat OpenShift, Google App Engine, Heroku, AppFog, Windows Azure, Amazon Web Services (AWS).

(v) IaaS (Infrastructure as a Service) – These are self-service models for accessing, monitoring, and managing remote data center infrastructure, such as compute (virtualized or bare metal), storage, networking, and network services (e.g. firewalls). Instead of having to purchase hardware outright, users can purchase IaaS based on consumption. Examples – Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google Compute Engine (GCE).

Benefits Of Sky Computing

(i) Single Networking Context – All-to-all connectivity is supported in sky computing: all the available resources, servers, and VMs in use are connected in a single network for flexible and fast communication.

(ii) Single Security Context – Trust between all entities across the sky is maintained. The information shared between client and host machines is secured by a security layer, and data is encrypted in transit to prevent leaks.
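
As a minimal sketch of encryption in transit, using the cryptography package's Fernet recipe (assumed to be installed); key distribution and trust establishment are handled by the sky security context and are not shown.

from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()      # in practice, exchanged over the trusted context
cipher = Fernet(shared_key)

payload = b"job results from cloud A"
token = cipher.encrypt(payload)         # what actually crosses the network
assert cipher.decrypt(token) == payload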

(iii) Equivalent to Local Cluster – The platform is compatible with legacy code, that is, code that used to run on local machines without a remote server. It is easy to deploy such applications or code to the cloud.

(iv) Controlled Resources – Sky computing allows users to control resources on their own, so trust relationships within sky computing are the same as those within a traditional non-distributed site, simplifying how remote resources interact.

(v) Scalability – It is dynamically scalable, as resources are distributed over several clouds; sky computing therefore provides more scalability than cloud computing on a single provider.

Challenges of Sky Computing

(i) Intercloud resource creation and management – Sky computing leverages different cloud providers, whose infrastructures and architectures differ, so it becomes complex to create and customize resources across the various cloud platforms. For example, an organization may use Azure for data storage and Google Cloud Platform for computational tasks over that data; it then has to deal with the platform-specific features involved in storing and computing on data across the two different clouds.

(ii) Efficient intercloud communication – In sky computing, data is transferred between various cloud providers, which can be called intercloud communication, and the lack of standardized communication protocols is a challenge. For example, suppose an organization uses different clouds for high availability and performance of its product; transferring data across multiple clouds requires a standard protocol that suits the environment of every cloud involved.

(iii) Efficient distribution of tasks – Load balancing in sky computing is a big challenge; tracking resource availability and distributing tasks requires real-time monitoring. For example, each cloud platform has real-time monitoring for load balancing within its own regions and zones, but if an organization decides to use multiple clouds for load balancing during peak hours, it is challenging to monitor resources across the different clouds.
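
A simple illustration of cross-cloud dispatch: send each task to the cloud currently reporting the lowest load. The cloud names and load figures are placeholders; a real deployment would pull them from each provider's monitoring API.

import heapq

def distribute(tasks, cloud_load):
    """cloud_load: dict of cloud name -> current load (e.g., queued tasks)."""
    heap = [(load, cloud) for cloud, load in cloud_load.items()]
    heapq.heapify(heap)
    assignment = {}
    for task in tasks:
        load, cloud = heapq.heappop(heap)      # least-loaded cloud right now
        assignment[task] = cloud
        heapq.heappush(heap, (load + 1, cloud))  # account for the newly assigned task
    return assignment

print(distribute(["t1", "t2", "t3", "t4"],
                 {"aws": 2, "gcp": 0, "azure": 1}))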

(iv) Fault Tolerance – Each cloud provider is responsible for fault tolerance on its own platform; in the sky, intercloud communication needs to be monitored through failure detection and recovery mechanisms. For example, suppose an application deals with important data such as health reports, which must be fault tolerant and always available, and therefore maintains redundant copies on different cloud platforms. The challenge is to have a failure detection mechanism that recovers the data from another source.
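
The failover idea can be sketched as follows; the fetch functions are hypothetical stand-ins for provider-specific storage clients.

def read_with_failover(replicas, fetchers):
    """replicas: ordered list of cloud names; fetchers: cloud name -> callable."""
    errors = {}
    for cloud in replicas:
        try:
            return fetchers[cloud]()          # e.g., download the health record
        except Exception as exc:              # detected failure: try the next copy
            errors[cloud] = exc
    raise RuntimeError(f"all replicas failed: {errors}")

def failing_fetch():
    raise TimeoutError("primary-cloud unreachable")

fetchers = {
    "primary-cloud": failing_fetch,
    "backup-cloud": lambda: b"patient record 42",
}
print(read_with_failover(["primary-cloud", "backup-cloud"], fetchers))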

(v) Adaptability to resource dynamicity – Workloads in the sky need to adapt to continuous changes in resource allocation and regional prices. Intelligent automation algorithms need to be developed in order to maintain resource availability and allocation across different cloud providers. For example, suppose an application runs a complex ML algorithm; it needs intelligent automation to dynamically allocate GPUs from different providers. Such algorithms assess the changing workload and optimize resource usage, ensuring cost-effectiveness and efficient project execution.
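
A sketch of one such adaptation step, choosing the cheapest GPU pool that still meets the workload's memory requirement (prices and pool names are invented):

def pick_gpu_pool(pools, min_gpu_mem_gb):
    # keep only pools that satisfy the memory need, then take the cheapest
    candidates = [p for p in pools if p["gpu_mem_gb"] >= min_gpu_mem_gb]
    return min(candidates, key=lambda p: p["price_per_hour"]) if candidates else None

pools = [
    {"provider": "cloud-A", "gpu_mem_gb": 16, "price_per_hour": 0.90},
    {"provider": "cloud-B", "gpu_mem_gb": 24, "price_per_hour": 0.70},
    {"provider": "cloud-C", "gpu_mem_gb": 40, "price_per_hour": 1.80},
]
print(pick_gpu_pool(pools, min_gpu_mem_gb=20))   # cloud-B wins on price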

Sky Computing – FAQs

What is the role of load balancing in Sky computing?

Sky computing intelligently connects different cloud servers and leverages load balancing to do so. It achieves uniform processing times for each device, ensuring consistent completion of computational tasks and optimizing the overall training time. To attain this effective allocation, it is essential to first gather information about both the model and the device, and then execute the appropriate allocation operation.
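
A toy version of that allocation idea splits a model's layers across devices in proportion to each device's throughput, so that every device finishes at roughly the same time; the numbers are invented for illustration.

def split_layers(total_layers, device_speed):
    total_speed = sum(device_speed.values())
    shares = {d: round(total_layers * s / total_speed) for d, s in device_speed.items()}
    # fix rounding drift so the shares still sum to total_layers
    drift = total_layers - sum(shares.values())
    if drift:
        fastest = max(device_speed, key=device_speed.get)
        shares[fastest] += drift
    return shares

print(split_layers(24, {"phone": 1.0, "laptop": 3.0, "edge-server": 8.0}))
# -> {'phone': 2, 'laptop': 6, 'edge-server': 16}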

Why is Sky computing preferred over traditional federated learning?

Sky computing achieves a performance improvement of up to 55% over current allocation methods and traditional federated learning. This is why it is regarded as a revolutionary introduction in cloud computing.

From which two dimensions does sky computing collect data?

Sky computing needs to collect data from two dimensions: the model and the device.

Model: It collects the required memory footprint and computation for each layer of the model.

Device: It collects data pertaining to communication latency, computational power, and memory availability.
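
A small sketch of these two dimensions as records; the field names are an assumption about what such profiles could contain, not a fixed schema.

from dataclasses import dataclass

@dataclass
class LayerProfile:            # "model" dimension, one entry per layer
    memory_mb: float
    flops: float

@dataclass
class DeviceProfile:           # "device" dimension, one entry per device
    latency_ms: float
    flops_per_sec: float
    free_memory_mb: float

model = [LayerProfile(memory_mb=48, flops=2.1e9), LayerProfile(memory_mb=96, flops=4.4e9)]
device = DeviceProfile(latency_ms=12, flops_per_sec=5.0e11, free_memory_mb=4096)
print(sum(layer.memory_mb for layer in model), device.flops_per_sec)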


