
RHEV 3.0 Sets Stage for VMware Challenge

Originally posted on eWEEK by Cameron Sturdevant: http://www.eweek.com/c/a/Virtualization/RHEV-30-Sets-Stage-for-VMware-Challenge-885501/

Red Hat Enterprise Virtualization for Servers significantly updates management capabilities and supports giant virtual machines.

The release of Red Hat Enterprise Virtualization 3.0 signals that 2012 will be the year that IT managers at organizations of all sizes have real choices to make when it comes to virtualizing workloads in the data center. RHEV 3.0 and the Microsoft Windows Server 8 release candidate both challenge the so-far unrivaled lead that VMware vSphere 5 holds in data center virtualization.

Until now, IT virtualization managers could use VMware vSphere without much question to run workloads of all types. RHEV 3.0 successfully challenged this operating assumption in tests at eWEEK Labs. The revamped RHEV Manager for Servers, with sizable increases in virtual machine (VM) resource allocations and tighter integration with Red Hat Enterprise Linux 6.2 (RHEL 6.2), means that IT managers can begin to consider RHEV 3.0 a viable competitor to other virtualization platforms, including VMware.

RHEV 3.0 became available Jan. 18. RHEV for Servers Standard (business-hours support) costs $499 per socket per year; RHEV for Servers Premium (24/7 support) lists for $749 per socket per year.

RHEV 3.0 is built on the Kernel-based Virtual Machine (KVM) hypervisor, the open-source project that Red Hat has backed heavily since acquiring its originator, Qumranet, and that now ships in the mainline Linux kernel.

HOW WE TESTED

I tested RHEV 3.0 at eWEEK’s San Francisco lab by installing the RHEV Manager on an Acer AR 380 F1 server equipped with two six-core Intel Xeon X5675 CPUs, 24GB of DDR3 (double data rate type 3) RAM, eight 146GB 15K-rpm serial-attached SCSI (SAS) hard-disk drives and four 1Gbit on-board LAN ports. This powerhouse server provided more than enough compute power to run the RHEV Manager.

After setting up the RHEV Manager and integrating it with our iSCSI shared-storage system, I installed the RHEV Hypervisor on two other physical hosts: one an Intel-based Lenovo RD210 and the other an AMD-based whitebox server. After registering all the components with the Red Hat Network and ensuring that my test systems were correctly subscribed to the required channels, I was ready to fire up the RHEV Manager and start my evaluation.

RHEV 3.0 makes significant infrastructure management improvements. The biggest news here for IT managers is the new REST API, which exposes the full functionality of the RHEV Manager. While this feature wasn’t exercised heavily in my evaluation, it sets the stage for third-party management tool integration. IT managers who are weighing contenders for production-level data center projects should take note of this RESTful access.
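To give a feel for that interface, here is a minimal sketch that enumerates virtual machines through the manager's XML-based REST entry point. The hostname, port and admin@internal credentials are illustrative placeholders, not values from the test environment.

```python
# Minimal sketch: enumerate VMs through the RHEV Manager REST API.
# Hostname, port and credentials are illustrative placeholders.
import requests
import xml.etree.ElementTree as ET

BASE = "https://rhevm.example.com:8443/api"   # assumed manager address
AUTH = ("admin@internal", "password")         # assumed admin account

# The API answers in XML; verify=False tolerates the self-signed
# certificate a default install presents (a trusted CA is better).
resp = requests.get(BASE + "/vms", auth=AUTH, verify=False)
resp.raise_for_status()

for vm in ET.fromstring(resp.content).findall("vm"):
    print(vm.findtext("name"), vm.findtext("status/state"))
```

Third-party tools would build on the same resource tree, which also exposes hosts, clusters and storage domains.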

Making the interface available is only half the battle for Red Hat. IT managers should watch to see how quickly management vendors move to use the API to provide management tools. Because RHEV will likely be joining a data center environment that already has VMware installed, it will be particularly interesting to see if vendors that make VMware-centric tools add support for RHEV. If they do, then IT managers will have even more reason to add Red Hat to their evaluation list for enterprise virtualization projects.

Aside from the addition of the REST API, Red Hat added a number of important convenience features. For example, after installing my two RHEV Hypervisor physical hosts, I was able to approve them as members of the RHEV environment with a single click of the new “approve” button.
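In principle, the same approval step can be driven through the REST API rather than the button. The sketch below is speculative in its details (the manager address, credentials and host name are assumptions, and the endpoint follows the API's general action pattern), but it shows the shape of such an automation.

```python
# Hypothetical sketch: approve a pending RHEV Hypervisor host via the API.
import requests
import xml.etree.ElementTree as ET

BASE = "https://rhevm.example.com:8443/api"   # assumed manager address
AUTH = ("admin@internal", "password")         # assumed admin account

# Look up the host that just registered itself with the manager;
# "rhevh1" is a placeholder host name.
hosts = ET.fromstring(
    requests.get(BASE + "/hosts", params={"search": "name=rhevh1"},
                 auth=AUTH, verify=False).content
)
host_id = hosts.find("host").get("id")

# Actions in this API are POSTs of an <action/> document to a sub-resource.
resp = requests.post(
    f"{BASE}/hosts/{host_id}/approve",
    data="<action/>",
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,
)
print(resp.status_code)
```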

The administrative portal interface now includes links to the administration and user portals, along with other changes that made it much easier to track my environment. A tree view is now used to show data centers, clusters, storage domains, physical hosts and virtual machines. IT managers who have experience with VMware’s vCenter interface will quickly see the similarity between the two management system layouts.

There is now more granularity in user administration roles. I was able to use the revised User Portal to provide restricted access to administrative functions. After first integrating my RHEV 3.0 environment with the Labs’ Microsoft Active Directory service, I was able to assign roles to users in the directory.

In my tests, I used the default roles that RHEV provided. In one case, I used the Network Admin role to restrict access to the networks in my eWEEK test data center. I was easily able to clone user roles and then change the permission levels of those roles. In most cases, however, IT managers will find that Red Hat has provided sufficiently differentiated roles in the default installation.
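The default roles and the individual permits each one bundles are also visible through the same API, which is one way to review them before deciding whether cloning is even necessary. This read-only sketch reuses the placeholder manager address and credentials from the earlier examples; the roles and permits resources are assumptions based on the API's resource layout.

```python
# Sketch: list RHEV's default roles and the permits each one bundles.
import requests
import xml.etree.ElementTree as ET

BASE = "https://rhevm.example.com:8443/api"   # assumed manager address
AUTH = ("admin@internal", "password")         # assumed admin account

roles = ET.fromstring(
    requests.get(BASE + "/roles", auth=AUTH, verify=False).content
)
for role in roles.findall("role"):
    permits = ET.fromstring(
        requests.get(f"{BASE}/roles/{role.get('id')}/permits",
                     auth=AUTH, verify=False).content
    )
    names = [p.findtext("name") for p in permits.findall("permit")]
    print(role.findtext("name"), "->", ", ".join(names))
```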

RHEV has joined VMware in supporting giant virtual machines. Although the eWEEK Labs test infrastructure isn’t equipped to create machines this large, there is little doubt that RHEV 3.0 can deliver its new maximum VM sizes. In this version, Red Hat supports up to 64 virtual CPUs and up to 2TB of RAM per virtual machine. This matches the VM sizes currently supported by VMware and announced as supported in Microsoft Windows Server 8 Hyper-V.
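To make those ceilings concrete, here is a hedged sketch of defining such a VM through the REST API discussed earlier: memory is given in bytes, and the vCPU count is the product of the socket and core counts in the CPU topology. The manager address, credentials, cluster and template names are illustrative placeholders.

```python
# Sketch: define a maximum-size VM (64 vCPUs, 2TB RAM) via the REST API.
import requests

BASE = "https://rhevm.example.com:8443/api"   # assumed manager address
AUTH = ("admin@internal", "password")         # assumed admin account

TWO_TB = 2 * 1024**4          # memory is expressed in bytes
body = f"""
<vm>
  <name>giant-vm</name>
  <cluster><name>Default</name></cluster>
  <template><name>Blank</name></template>
  <memory>{TWO_TB}</memory>
  <cpu>
    <topology sockets="16" cores="4"/>  <!-- 16 x 4 = 64 vCPUs -->
  </cpu>
</vm>
"""

resp = requests.post(
    BASE + "/vms",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,
)
print(resp.status_code, resp.reason)
```

A success status with the new VM's XML description in the response body would indicate the definition was accepted.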

Learn more about RHEV 3


