
vCloud Automation Center – vCAC 5.1 – Configuring Multi-Machine Services

A Multi-Machine service is a blueprint configured to deploy multiple blueprints/machines from a single request. For example, say you have one blueprint that deploys a Windows 2008 Server with SQL, and another that deploys a Windows 2008 Server with an application such as vCAC installed. You can create a Multi-Machine service blueprint that contains both. Your Multi-Machine service might be called "vCAC Server", and when it is requested it will deploy both blueprints, each with its own configuration, along with an overall multi-machine configuration that is layered on top.
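As a mental model, a Multi-Machine service blueprint simply wraps other blueprints, so one request fans out into several machine deployments. The sketch below is illustrative Python only; the class and field names are mine, not vCAC object names or its API:

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """A single-machine blueprint, e.g. Windows 2008 with SQL."""
    name: str

@dataclass
class MultiMachineService:
    """A blueprint that deploys other blueprints in one request."""
    name: str
    components: list = field(default_factory=list)

    def request(self) -> list:
        # One request expands into one deployment per component blueprint.
        return [f"deploying '{bp.name}' as part of '{self.name}'"
                for bp in self.components]

svc = MultiMachineService("vCAC Server", [
    Blueprint("Windows 2008 + SQL"),
    Blueprint("Windows 2008 + vCAC"),
])
for line in svc.request():
    print(line)
```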

If you think about it, vCAC can manage different hypervisors such as vSphere (through vCenter or vCD), Hyper-V, and XenServer; it can also manage physical servers and external Amazon EC2 resources. So you can have individual blueprints configured to deploy to each of these infrastructure types, which gives you incredible flexibility. You could have a Multi-Machine service with one blueprint that provisions an application server to a vSphere environment, another that deploys a database server to a physical server, and others that deploy multiple web servers to Amazon EC2. So let's see how to configure a basic Multi-Machine service.
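To make the fan-out across infrastructure types concrete, here is a hypothetical dispatch sketch. None of these function or platform names come from vCAC; they stand in for whatever provisioning back end each endpoint type uses:

```python
# Illustrative only: the provisioning functions and platform keys are
# hypothetical, not vCAC's real provisioning API.
def provision_vsphere(name: str) -> str:
    return f"{name}: cloned from a vSphere template"

def provision_physical(name: str) -> str:
    return f"{name}: imaged on bare metal"

def provision_ec2(name: str) -> str:
    return f"{name}: launched from an EC2 AMI"

PROVIDERS = {
    "vSphere": provision_vsphere,
    "Physical": provision_physical,
    "EC2": provision_ec2,
}

def provision_service(components):
    # One Multi-Machine request, many machines, potentially
    # many different infrastructures.
    return [PROVIDERS[platform](name) for name, platform in components]

results = provision_service([
    ("app-01", "vSphere"),
    ("db-01", "Physical"),
    ("web-01", "EC2"),
    ("web-02", "EC2"),
])
for line in results:
    print(line)
```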

Be sure that you have completed the steps in the posts below before configuring a Multi-Machine service:

First things first: we need at least two blueprints to create a Multi-Machine service, so let's make a second blueprint. This can be done very easily by making an existing blueprint copyable. To do this, perform the following:

Making a Blueprint Copyable

1. Go to "Enterprise Administrator", select "Global Blueprints", choose a blueprint, and select "Edit".
2. On the "Blueprint Information" tab, check "Master (copyable)" and click "Ok".

Create a New Blueprint

3. On the "Global Blueprints" page, hover over "New Blueprint" in the upper right corner and select "Virtual".
4. On the "New Blueprint – Virtual" page, click the drop-down box at the top of the page labeled "Copy from existing blueprint" and select the blueprint you want to copy.
5. Next, give your new blueprint a "Name", assign it to a "Group", and click "Ok". (You can make additional changes to your blueprint if you like; these are just the minimal settings you need to set when copying a blueprint.)
Now let's create our Multi-Machine service blueprint by completing the following steps.

Creating a Multi-Machine Service Blueprint

6. Go to “Enterprise Administrator“, select “Global Blueprints“, and then hover over “New Blueprint” in the upper right corner and select “Multi-Machine“.
7. On the "Blueprint Information" tab, give your blueprint a "Name", add it to a "Group", and then select the "Build Information" tab.
8. On the “Build Information” tab select “Add Blueprints” next to “Component machines“.
9. When the "Add Blueprints" dialog box appears, select two blueprints from the list and click "Ok".
10. When the dialog closes you will see your selected blueprints in the "Component Machines" section. You can now click the "Pencil" next to a blueprint to configure additional settings such as the display "Name", the "Max" number of machines that can be added to a request, and the "Startup" and "Shutdown" order.
11. Under “Machine Resources” you can configure a “Lease” period for the “Multi-Machine” app that will override the individual blueprint “lease” periods. Next click the “Scripting” tab.
12. Here you can define scripts that execute during the "Provisioning Process", "Startup Process", and "Shutdown Process". This section allows you to select a workflow to execute for each; this will be covered in more detail in a separate article. Next, click the "Security" tab.
13. Here you can choose “Security” options that apply to the “Multi-Machine” service as a whole. Click “Ok” to create your Multi-Machine Service Blueprint.
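Steps 10 and 11 above boil down to two service-level behaviors: component machines power on in ascending startup order (and power off in their shutdown order), and a lease set on the Multi-Machine blueprint overrides each component blueprint's own lease. A hedged sketch of that logic (the names here are mine, not vCAC's):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    startup_order: int
    shutdown_order: int
    lease_days: int

def startup_sequence(components):
    # Machines power on in ascending startup order.
    return [c.name for c in sorted(components, key=lambda c: c.startup_order)]

def shutdown_sequence(components):
    # ...and power off in ascending shutdown order.
    return [c.name for c in sorted(components, key=lambda c: c.shutdown_order)]

def effective_lease(components, service_lease=None):
    # A lease configured on the Multi-Machine blueprint overrides
    # the individual component blueprints' lease periods.
    return {c.name: (service_lease if service_lease is not None else c.lease_days)
            for c in components}

parts = [
    Component("db", startup_order=1, shutdown_order=2, lease_days=30),
    Component("app", startup_order=2, shutdown_order=1, lease_days=14),
]
print(startup_sequence(parts))                   # database first
print(shutdown_sequence(parts))                  # app first
print(effective_lease(parts, service_lease=7))   # service lease wins
```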
Finally let’s request a Multi-Machine Service.

Requesting a Multi-Machine Service

14. Navigate to "Self-Service" and select "Request Machine" from the standard portal (http://host/dcac), then select the "Multi-Machine" blueprint you just created. Alternatively, from the Self-Service portal if installed (http://host/dcacselfservice), you can select "New Request" and choose the "Multi-Machine" blueprint you just created. On the initial request screen you can set the "Lease" duration for the request.

15. In the standard portal you can select the component machines listed and configure additional settings, including adding additional storage to your machines. The "Request Information" tab also includes "CPU" and "Memory" selections if the blueprint was configured with min and max values for those resources. Click "Ok" to submit your request. Alternatively, in the Self-Service portal you can customize the request by clicking the "Customize" button.

16. In the Self-Service portal you will click "Next" and be taken to the "Custom Properties" step. (It's important to note that standard users will not see the "Add Property" dialog; they will only be able to modify custom fields that are defined in the blueprint. This will be covered in more detail in a future article.) Then click "Next".

17. Next, in the Self-Service portal you will be brought to the "Confirm and Finish" step, where you will see a "Daily Cost" breakout. Click "Finish" to submit your request.

18. Now that you have submitted your request, it will show up under your "My Machines" section, where you can monitor its status. In the Self-Service portal you can expand the details to see the machines that are part of the request.

19. In the standard portal you can hover over the "Multi-Machine Service" and select "View Components" to get more details on the progress of each machine. In the Self-Service portal, click "View Machines" to get more details.

20. Here you can see the detailed status for each machine that is part of the service.
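The "Daily Cost" breakout from step 17 is, conceptually, just a per-machine cost plus a total across the request. The rates and formula below are invented placeholders for illustration; they are not vCAC's actual cost model:

```python
def daily_cost(machines):
    # machines: list of (name, cpu_count, memory_gb, storage_gb).
    # Made-up per-day rates, purely for illustration.
    CPU_RATE, MEM_RATE, DISK_RATE = 0.50, 0.10, 0.01
    breakdown = {
        name: cpus * CPU_RATE + mem * MEM_RATE + disk * DISK_RATE
        for name, cpus, mem, disk in machines
    }
    # The request's daily cost is the sum over its component machines.
    return breakdown, sum(breakdown.values())

breakdown, total = daily_cost([
    ("db-01", 4, 16, 200),
    ("web-01", 2, 8, 40),
])
print(breakdown)
print(f"total: ${total:.2f}/day")
```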


More Stories By Sidney Smith

Sid Smith, founder of DailyHypervisor, is considered a cloud expert in the IT field, with over 10 years of experience in virtualization, automation, and cloud technologies. He started in the industry designing and implementing large-scale enterprise server and desktop virtualization environments for Fortune 100 and 500 companies. He later became a key employee at DynamicOps, the well-known creators of Cloud Automation Center. In July 2012 DynamicOps was acquired by VMware, which has adopted Cloud Automation Center as a centerpiece of its vCloud Suite of products. Sid has helped dozens of Fortune 100 and 500 enterprises successfully adopt both private and public cloud strategies as part of their IT offerings, resulting in large operational and capital savings for his customers. He continues to help large enterprise customers realize their hybrid cloud strategies at VMware. On DailyHypervisor you will find exclusive content that will help you learn how to adopt a successful cloud strategy through the use of VMware Cloud Automation Center, OpenStack, and other industry-recognized cloud solutions.
