
RHEV 3.0 Sets Stage for VMware Challenge

Originally posted on eWEEK by Cameron Sturdevant: http://www.eweek.com/c/a/Virtualization/RHEV-30-Sets-Stage-for-VMware-Challenge-885501/

Red Hat Enterprise Virtualization for Servers significantly updates management capabilities and supports giant virtual machines.

The release of Red Hat Enterprise Virtualization 3.0 signals that 2012 will be the year that IT managers at organizations of all sizes have real choices to make when it comes to virtualizing workloads in the data center. RHEV 3.0 and the Microsoft Windows Server 8 release candidate both offer a challenge to the currently unrivaled data center virtualization lead position held by VMware vSphere 5.

Until now, IT virtualization managers could use VMware vSphere without much question to run workloads of all types. RHEV 3.0 successfully challenged this operating assumption in tests at eWEEK Labs. The revamped Red Hat Enterprise Virtualization Manager, with sizeable increases in virtual machine (VM) resource allocations and tighter integration with Red Hat Enterprise Linux 6.2 (RHEL 6.2), means that IT managers can begin to consider RHEV 3.0 a viable competitor to other virtualization platforms, including VMware.

RHEV 3.0 became available Jan. 18. RHEV for Servers Standard (business hours support) costs $499/socket/year. RHEV Premium (24/7 support) lists for $749/socket/year.

RHEV 3.0 is built on the Kernel-based Virtual Machine (KVM) hypervisor, the open-source project that Red Hat has backed heavily since acquiring KVM’s original developer, Qumranet, in 2008.

HOW WE TESTED

I tested RHEV 3.0 at eWEEK’s San Francisco lab by installing the RHEV Manager on an Acer AR 380 F1 server. It’s equipped with two six-core Intel Xeon X5675 CPUs, 24GB of DDR3 (double data rate type 3) RAM, eight 146GB 15K rpm serial-attached SCSI (SAS) hard-disk drives and four 1Gbit on-board LAN ports. This powerhouse server provided more than enough compute power to run the RHEV Manager.

After setting up the RHEV Manager and integrating it with our iSCSI shared-storage system, I installed the RHEV Hypervisor on two other physical hosts: one an Intel-based Lenovo RD210 and the other an AMD-based whitebox server. After registering all the components with the Red Hat Network and ensuring that my test systems were correctly subscribed to the required channels, I was ready to fire up the RHEV Manager and start my evaluation.

RHEV 3.0 made significant infrastructure management improvements. The biggest news here for IT managers is the new REST API, which provides full programmatic access to the RHEV Manager. While this feature wasn’t tested in depth in my evaluation of RHEV, it does set the stage for third-party management tool integration. IT managers who are making strategic decisions now about possible contenders for production-level data center projects should take note of this RESTful API access.
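To give a feel for what that RESTful access looks like in practice, here is a minimal sketch (not from the original review) of listing virtual machines with Python’s requests library. The manager address, port and credentials are placeholders, and it assumes the manager returns oVirt-style XML collections such as /api/vms.

```python
# Minimal sketch: list VMs through the RHEV Manager REST API with Python's
# requests library. The manager URL, port, and credentials are placeholders,
# and the XML layout (a <vms> collection of <vm> elements) follows the
# oVirt-style API that RHEV exposes.
import requests
import xml.etree.ElementTree as ET

MANAGER = "https://rhevm.example.com:8443/api"  # hypothetical manager address
AUTH = ("admin@internal", "password")           # built-in admin account

def list_vms():
    """Fetch the VM collection and print each VM's name and power state."""
    resp = requests.get(f"{MANAGER}/vms",
                        auth=AUTH,
                        headers={"Accept": "application/xml"},
                        verify=False)           # lab manager with a self-signed cert
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for vm in root.findall("vm"):
        print(vm.findtext("name"), vm.findtext("status/state"))

if __name__ == "__main__":
    list_vms()
```

Run against a test manager, the script should print one line per VM with its name and power state.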

Making the interface available is only half the battle for Red Hat. IT managers should watch to see how quickly management vendors move to use the API to provide management tools. Because RHEV will likely be joining a data center environment that already has VMware installed, it will be particularly interesting to see if vendors that make VMware-centric tools add support for RHEV. If they do, then IT managers will have even more reason to add Red Hat to their evaluation list for enterprise virtualization projects.

Aside from the addition of the REST API, Red Hat added a number of important convenience features. For example, after installing my two RHEV Hypervisor physical hosts, I was able to approve them as members of the RHEV environment with a single click of the new “approve” button.

The administrative portal interface now includes links to the administration and user portals, along with other changes that made it much easier to track my environment. A tree view is now used to show data centers, clusters, storage domains, physical hosts and virtual machines. IT managers who have experience with VMware’s vCenter interface will quickly see the similarity between the two management system layouts.

There is now more granularity in user administration roles. I was able to use the revised User Portal to provide restricted access to administrative functions. After first integrating my RHEV 3.0 environment with the Labs’ Microsoft Active Directory services, I was able to assign roles to the users in the directory.

In my tests, I used the default roles that RHEV provided. In one case, I used the Network Admin role to restrict access to the networks in the eWEEK data center. I was easily able to clone user roles and then change permission levels for those roles. However, in most cases, IT managers will find that Red Hat has provided sufficiently differentiated roles in the default installation.
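Once the REST API is in the picture, this kind of role assignment could also be scripted. The following is a hedged sketch, assuming permissions are granted by POSTing a role/user pair to an object’s permissions sub-collection as in oVirt; every ID here is a placeholder that would come from earlier GET calls against /api/roles, /api/users and /api/datacenters.

```python
# Sketch: grant a directory user a role on a data center through the REST
# API, assuming permissions are created by POSTing a role/user pair to the
# object's permissions sub-collection, as in oVirt. IDs are placeholders.
import requests

MANAGER = "https://rhevm.example.com:8443/api"  # hypothetical manager address
AUTH = ("admin@internal", "password")

def grant_role(datacenter_id, user_id, role_id):
    """Attach a single role-to-user permission to one data center."""
    body = (
        "<permission>"
        f"<role id=\"{role_id}\"/>"
        f"<user id=\"{user_id}\"/>"
        "</permission>"
    )
    resp = requests.post(f"{MANAGER}/datacenters/{datacenter_id}/permissions",
                         data=body,
                         auth=AUTH,
                         headers={"Content-Type": "application/xml"},
                         verify=False)
    resp.raise_for_status()
    return resp.status_code
```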

RHEV has joined VMware in supporting giant virtual machines. Although the eWEEK Labs test infrastructure isn’t equipped to create machines of this size, there is little doubt that RHEV 3.0 can deliver its new maximum VM sizes. In this version, Red Hat supports up to 64 virtual CPUs and up to 2TB of RAM per virtual machine. This matches the VM sizes currently supported by VMware and announced as supported in Microsoft Windows Server 8 Hyper-V.
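To put those ceilings in concrete terms, here is a speculative sketch of defining such a VM through the REST API, assuming an oVirt-style XML body in which memory is expressed in bytes and the vCPU count is the product of sockets and cores; the cluster and template names are default-install assumptions.

```python
# Speculative sketch: define a maximally sized VM through the REST API,
# assuming an oVirt-style XML body where memory is given in bytes and the
# vCPU count comes from the sockets x cores topology. Cluster and template
# names are assumed defaults, not values from the original review.
import requests

MANAGER = "https://rhevm.example.com:8443/api"  # hypothetical manager address
AUTH = ("admin@internal", "password")

TWO_TB = 2 * 1024 ** 4  # 2TB of RAM, the documented per-VM ceiling

vm_body = f"""
<vm>
  <name>giant-vm</name>
  <cluster><name>Default</name></cluster>
  <template><name>Blank</name></template>
  <memory>{TWO_TB}</memory>
  <cpu><topology sockets="16" cores="4"/></cpu>  <!-- 16 x 4 = 64 vCPUs -->
</vm>
"""

resp = requests.post(f"{MANAGER}/vms",
                     data=vm_body,
                     auth=AUTH,
                     headers={"Content-Type": "application/xml"},
                     verify=False)
resp.raise_for_status()
print("VM creation request returned", resp.status_code)
```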


