By JP Morgenthal
October 24, 2014 08:00 PM EDT
Regardless of whether you have migrated multiple applications before or this is your first migration to a public Infrastructure-as-a-Service (IaaS) offering, you will want to run a small proof-of-concept to make sure that the basic elements of data flow operate as expected and that your components will run in the IaaS environment. This week I spent some time experimenting with the three top IaaS offerings: Amazon AWS, Google Compute Engine and Microsoft Azure. The architecture was relatively simple: three Docker containers, one hosting a LAMP (Linux, Apache, MySQL and PHP) stack running WordPress, one hosting a Postfix mail server forwarding all mail, and one hosting CVS. The results of the testing were informative.
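For reference, that topology amounts to three daemonized containers along these lines (a minimal sketch; the image names and host-port choices are my illustrations, not the exact builds used in the test):

    # Illustrative only: image names and host ports are assumptions.
    docker run -d -p 80:80 -p 2222:22 --name lamp my/lamp-wordpress   # LAMP + WordPress; SSH remapped to a spare host port
    docker run -d -p 25:25 --name mail my/postfix-forwarder           # Postfix forwarding relay on SMTP
    docker run -d -p 2401:2401 --name cvs my/cvs-server               # CVS pserver on its default port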
Google offers only a limited number of Linux distributions by default, and Ubuntu is not one of them, so I was forced to use Red Hat Enterprise Linux. Luckily, the only thing that had to change was how Docker was installed. Once Docker was installed, I created an Ubuntu layer and was able to run my container builds. Google clearly had the best network performance of the three vendors, evident in how quickly the containers were able to pull from the various repositories. However, the issues with deploying these containers in this environment soon became apparent. The LAMP stack included a Secure Shell (SSH) interface, but the host machine would not allow me to bind my Docker container to it, and since that binding was required to finish the install, the problem could not be overcome without significant rework. I put that aside and continued on to the Postfix container. That's when it got really frustrating, as I learned I could not bind the container to port 25 (SMTP). That's right, folks: the Gmail people don't want you using their platform to build a mail server, go figure.
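The install split and the two failed bindings looked roughly like this (a sketch; the package names reflect the 2014-era repositories and the image names are my placeholders):

    # Assumption: on RHEL the package was docker-io (EPEL) or docker
    # (RHEL 7 extras); on Ubuntu it was docker.io.
    sudo yum install -y docker && sudo service docker start
    # sudo apt-get install -y docker.io   # the Ubuntu equivalent

    # The two bindings the Google hosts rejected; port 22 was presumably
    # already held by the host's own sshd, and SMTP is restricted on the platform.
    docker run -d -p 22:22 my/lamp-wordpress
    docker run -d -p 25:25 my/postfix-forwarder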
There's a reason AWS is the leader in cloud services: their user interface was the most elegant for building out the server environment I designed. Moreover, their t1.micro instance type was perfect for doing the early testing work without incurring a lot of charges, and when I was done I was able to create a snapshot of that server and use it as the foundation of an m3.medium. I selected the Ubuntu 14.04 64-bit EBS-backed machine image to start with, which greatly reduced the amount of data that Docker had to pull in order to build the base images. Since I already hosted my existing WordPress site and CVS repository on Amazon, I set up an Elastic IP and just moved the association back and forth to test against jpmorgenthal.com. This greatly reduced the headache of setting up WordPress and Postfix, since both required the domain name.
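The Elastic IP swap is a one-liner in each direction (a sketch using the AWS CLI in EC2-Classic style; in a VPC you would pass --allocation-id instead of --public-ip, and the address and instance IDs here are placeholders):

    # Point jpmorgenthal.com's Elastic IP at the test instance...
    aws ec2 associate-address --public-ip 203.0.113.10 --instance-id i-0aaaaaaa
    # ...and back at the production instance when testing is done.
    aws ec2 associate-address --public-ip 203.0.113.10 --instance-id i-0bbbbbbb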
Of note, I was having a heck of a time getting my Postfix container running. I could connect to the server using localhost, but could not connect from my home development machine. I removed all firewalls and confirmed that the security rules allowed port 25. I checked the issue on the Interwebs and found others having the same problem, along with a common belief that AWS was blocking port 25. In truth it is not, but I did find out that AWS limits outbound port 25 traffic in an attempt to make sure that customers don't shoot themselves in the foot and get identified as spammers. The real issue was that Comcast blocks outbound port 25, which I discovered when a telnet session from another t1.micro instance connected just fine. The bigger issue here is: what is the responsibility of the cloud service provider to protect the credibility of the whole, as AWS is doing with outbound email? Is it really that they are trying to protect their clients, or is it that they have an outbound SMTP mailing service they want customers to use?
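That triangulation is worth repeating on any SMTP deployment before blaming the provider (here <server> is a placeholder for the container host's public address):

    telnet localhost 25   # on the host itself: Postfix answers, so the daemon is fine
    telnet <server> 25    # from a second t1.micro: connects, so AWS and the security group are fine
    telnet <server> 25    # from the home machine on Comcast: times out, so the ISP blocks outbound 25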
Total cost for using a mix of t1.micro and m3.medium instances, 15 GB of EBS storage, an Elastic IP and multiple snapshots over a period of nine hours: a whopping $1.65.
Microsoft, like Google, has excellent network performance, and their smallest class of virtual server also completed the build of the LAMP container in a reasonable amount of time. Their portal interface was very intuitive for creating the Ubuntu server, and they offered the option of using a password in addition to an X.509 certificate, a handy option not offered by Google or AWS. As with Amazon, once the containers were instantiated they performed well and were accessible across all ports that were exposed on the network interface.
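If you take the certificate route, the classic Azure portal of this era accepted a self-signed X.509 certificate generated roughly like so (the filenames are my placeholders):

    # Generate a self-signed cert/key pair to upload when creating the VM.
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout azure-vm.key -out azure-vm.pem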
Where Azure falls short today is in networking. It has no equivalent of Amazon's Elastic IP service, which made it very difficult to switch between the current server and the test environment. That does not bode well for dev/test scenarios, where it is useful to keep a single DNS entry for the testing scripts and simply point that entry at whichever environment is under test. Based on some limited web searching, it seems customers have been asking for remappable public IP addressing on Azure for some time, and Microsoft has not yet responded to the requirement.
Some quick notes on Docker. I had the most success when starting with a Dockerfile and doing my own builds to bootstrap an environment. This approach ensures that all the ports that need to be exposed are set up appropriately, and it's easier to inject a foreground script that will keep the container alive after it is started. This latter point is key: a daemonized container requires that something run continually in the foreground, or the container exits. This can be done with a while loop handed to /bin/sh, but it's far more effective to use a startup script that ensures all the necessary services have started and then goes into a wait loop. Also, if you make changes to your container once it's started, e.g., via SSH, remember to commit the changes when you exit the session or you will be repeating those steps the next time you run the container.
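Concretely, the pattern looks something like this (a minimal sketch on an Ubuntu 14.04 base; the package list and script contents are illustrative, not the exact build used in the test):

    # Dockerfile -- build with: docker build -t my/lamp-wordpress .
    FROM ubuntu:14.04
    ENV DEBIAN_FRONTEND noninteractive
    RUN apt-get update && apt-get install -y apache2 mysql-server php5 openssh-server
    COPY start.sh /start.sh
    # Declare every port a service inside will listen on.
    EXPOSE 80 22
    # Hand PID 1 to the foreground script so the container stays alive.
    CMD ["/bin/sh", "/start.sh"]

    #!/bin/sh
    # start.sh -- start each service, then block so the container stays up.
    service mysql start
    service apache2 start
    service ssh start
    while true; do sleep 60; done   # the wait loop that keeps PID 1 alive

And after any interactive session that changed the container, capture the result before it stops, e.g., docker commit <container-id> my/lamp-wordpress:updated, where <container-id> comes from docker ps; otherwise those changes evaporate the next time you run from the image.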