
The Foundation of Cloud Computing Infrastructures: Virtualization

Research Challenges in Cloud Infrastructures for 2009

Ignacio Martín Llorente's Blog

One of the relevant contributions of cloud computing is the Infrastructure as a Service (IaaS) model. There are a number of research challenges in cloud infrastructures that, in my opinion, will need to be addressed in 2009. The open research issues are mainly related to new virtualization technologies to enable efficient, dynamic and scalable Cloud operation and interoperation.

Cloud computing enables the deployment of an entire IT infrastructure without the associated capital costs, paying only for the capacity actually used. The new “Infrastructure as a Service” paradigm has been introduced to better respond to changing computing demands, allowing capacity to be added and removed in order to meet peak or fluctuating service demands. Amazon Elastic Compute Cloud (Amazon EC2), GoGrid and FlexiScale are examples of cloud providers of elastic capacity, offering an interface for remote management of virtualized server instances within their proprietary infrastructure. These commercial clouds do not provide any detail about the internal management of the virtual machines or the physical infrastructure.
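The economic argument for elastic capacity can be made concrete with a toy calculation. The demand profile and hourly price below are illustrative assumptions, not figures from any real provider:

```python
# Toy comparison of fixed provisioning vs pay-per-use elastic capacity.
# The hourly demand profile and the price are illustrative assumptions.

HOURLY_PRICE = 0.10                  # price per server-hour (assumed)
demand = [2, 2, 3, 8, 12, 9, 4, 2]   # servers needed each hour (peak = 12)

# Fixed provisioning: capacity sized for the peak, paid for in every hour.
fixed_cost = max(demand) * len(demand) * HOURLY_PRICE

# Elastic (IaaS) provisioning: pay only for the capacity actually used.
elastic_cost = sum(demand) * HOURLY_PRICE

print(f"fixed:   ${fixed_cost:.2f}")    # peak-sized static infrastructure
print(f"elastic: ${elastic_cost:.2f}")  # pay-per-use cloud capacity
```

The wider the gap between peak and average demand, the larger the saving, which is exactly the workload profile that motivates the IaaS model.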

Open source cloud computing tools, such as Eucalyptus and Globus Nimbus, let organizations build and customize their own cloud infrastructure. These tools focus on the client perspective: they are fully functional with respect to cloud-compatible interfaces and provide higher-level functionality for security, contextualization and image management. However, they do not support the dynamic allocation and balancing of computing resources among virtual machines needed to meet the scalable and dynamic computing requirements of enterprise datacenters, such as flexible support for dynamic virtual machine placement and infrastructure management.

The RESERVOIR Project

RESERVOIR is the main European research initiative in virtualized infrastructures and Cloud Computing. RESERVOIR is a joint research programme coordinated by IBM Haifa with 13 European partners: Elsag Datamat, CETIC, OGF.eeig, SAP Research, Sun Microsystems, Telefonica I+D, Thales, Umea University, University College London, DSA-Research at Universidad Complutense de Madrid, University of Lugano and University of Messina. The aim of this project is to develop open-source technology to enable the deployment and management of complex IT services across different administrative domains. Its open-source approach will support the definition of open standards for cloud computing, breaking the lock-in imposed by vendors today and allowing any organization to build its own local or public cloud infrastructure. The first-class management entity is a complex service: a group of interconnected virtual machines with placement constraints that can run across different cloud sites, with federation of cloud providers being one of its main research challenges.

The cloud infrastructure layer in RESERVOIR is the VEE Management layer, which provides execution of groups of interconnected virtual machines as a service. Its other two main research activities complement this layer: one provides service management functionality on top of infrastructure clouds (Service Management Activity, coordinated by Telefonica I+D) and the other provides virtualization platforms with advanced functionality for performance and reallocation optimization (VEE Infrastructure Enablement Activity, coordinated by IBM Haifa).

In the context of the VEE Management Activity, coordinated by DSA-Research at UCM, the project is conducting research in cloud infrastructures to meet the main challenges in the dynamic and scalable management of virtual machines in datacenters, such as the efficient management of groups of virtual machines within and across sites, elasticity support to meet variations in service workload, dynamic placement algorithms, architectures and placement heuristics for federation of sites, and enhanced Cloud interfaces.

Private Cloud Infrastructures

A key component in a cloud infrastructure backend is the distributed virtual infrastructure manager (also called internal cloud or distributed VM Manager), which allows the dynamic placement of virtual machines on a pool of physical resources according to business needs. There is a growing interest in the community in these tools for leasing compute capacity from the local infrastructure (see for example the conclusions of the Cisco Cloud Computing Research Symposium by Ruben S. Montero, co-leader of the OpenNebula project at DSA-research, and the cloud computing predictions for the new year by Randy Bias, VP Technology Strategy at GoGrid). The aim of these deployments is not to expose to the world a cloud interface to sell capacity over the Internet, but to provide a dynamic and flexible private infrastructure to run service workloads.
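The core job of such a distributed VM manager, mapping virtual machines onto a pool of physical hosts, can be sketched with a minimal first-fit placement heuristic. The host capacities, VM sizes and single-resource (CPU) model below are illustrative assumptions; real managers consider memory, networks and business-driven policies as well:

```python
# Minimal first-fit placement heuristic of the kind a distributed VM
# manager applies when mapping VMs onto physical hosts. CPU-only model;
# host and VM sizes are illustrative assumptions.

def place_vms(vms, hosts):
    """Assign each VM (name, cpus) to the first host with free capacity.

    Returns a dict mapping VM name -> host name; raises if a VM fits nowhere.
    """
    free = dict(hosts)                    # host name -> free CPUs
    placement = {}
    for name, cpus in vms:
        for host, avail in free.items():
            if avail >= cpus:
                free[host] = avail - cpus  # reserve capacity on this host
                placement[name] = host
                break
        else:
            raise RuntimeError(f"no capacity for VM {name}")
    return placement

hosts = {"host1": 8, "host2": 8}          # physical pool (CPUs per host)
vms = [("web", 4), ("db", 6), ("cache", 2)]
print(place_vms(vms, hosts))
```

First-fit is only the simplest of the placement heuristics mentioned above; replacing the inner loop with a cost function opens the door to consolidation, load balancing or energy-aware policies.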

The OpenNebula VM Manager is a core component in the RESERVOIR VEE Management layer that is being enhanced to meet the demanding requirements of the business use cases in the project. This open-source alternative to commercial tools for VM management provides efficient, dynamic and scalable management of VMs within datacenters (private clouds) involving large numbers of virtual and physical servers. OpenNebula can interface with a remote cloud site, and is the only such tool able to access Amazon EC2 on demand to dynamically scale the local infrastructure based on actual usage. Furthermore, the integration of OpenNebula and Haizea provides the only distributed virtual infrastructure management solution offering advance reservation of capacity.
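The cloud-bursting behaviour described above, running VMs locally while capacity lasts and leasing remote instances only for the overflow, can be sketched as a simple scheduling decision. The slot count and VM names are illustrative assumptions, not OpenNebula's actual scheduling logic:

```python
# Sketch of a cloud-bursting decision: fill the local pool first, then
# lease remote (EC2-like) capacity for the overflow. The slot count is
# an illustrative assumption, not OpenNebula's real scheduler.

LOCAL_SLOTS = 10   # VMs the local infrastructure can host (assumed)

def schedule(pending_vms):
    """Split pending VMs between the local pool and a remote cloud."""
    local = pending_vms[:LOCAL_SLOTS]    # placed on local hosts
    remote = pending_vms[LOCAL_SLOTS:]   # burst to the remote provider
    return local, remote

local, remote = schedule([f"vm{i}" for i in range(14)])
print(len(local), "local,", len(remote), "remote")
```

The interesting research questions start where this sketch ends: deciding *which* VMs to burst, when to bring them back, and how to honour placement constraints across administrative domains.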

Further Research in Cloud Infrastructures

There are many other topics for further research in cloud infrastructures that will be addressed in 2009:

  • Concerning the application of cloud computing, relevant topics are performance and reliability running scientific and business applications in Clouds; content distribution systems using Clouds; and Grid, HPC and data-intensive computing in Clouds.
  • Concerning technologies to enable Cloud Computing, interesting topics are new architectures for Cloud infrastructures; Cloud interfaces, programming models and tools; integration with infrastructures for Grid Computing; SLAs, privacy, security and pricing; management of network capacity; heuristics for energy efficiency and high availability; and advance reservation of capacity.
  • Concerning federation of Cloud Providers, research topics are interoperability and portability between Cloud providers; open business policies framework for relationships between infrastructure providers; and higher value self-awareness, self-knowledge, and self-management capabilities.

Although there exist several commercial clouds selling computing power, there are many open research issues to build the next generation of cloud infrastructures. These topics are mainly related to new technologies to enable efficient, dynamic and scalable Cloud operation and interoperation.


More Stories By Ignacio M. Llorente

Dr. Llorente is Director of the OpenNebula Project and CEO & co-founder at C12G Labs. He is an entrepreneur and researcher in the field of cloud and distributed computing, having managed several international projects and initiatives on Cloud Computing, and authored many articles in leading journals and conference proceedings. Dr. Llorente is one of the pioneers and world's leading authorities on Cloud Computing. He has held several appointments as independent expert and consultant for the European Commission, several companies and national governments. He has given many keynotes and invited talks at major international events in cloud computing, has served on several Groups of Experts on Cloud Computing convened by international organizations such as the European Commission and the World Economic Forum, and has contributed to several Cloud Computing panels and roadmaps. He founded and co-chaired the Open Grid Forum Working Group on the Open Cloud Computing Interface, and has participated in the main European projects in Cloud Computing. Llorente holds a Ph.D. in Computer Science (UCM) and an Executive MBA (IE Business School), and is a Full Professor (Catedratico) and the Head of the Distributed Systems Architecture Group at UCM.
