
Nextbit Robin: A Smartphone with Cloud Storage

In just two days on Kickstarter, Nextbit raised the funds needed to release an Android smartphone called Robin; the campaign will run for 28 days in total. The success is easy to explain: the company was created by well-known people who previously worked at Google, Apple, Amazon, and HTC, and beyond that, Robin differs from its market neighbors in being a "cloud" phone.

Nextbit was founded by Tom Moss and Mike Chan, formerly members of Google's Android team, together with Scott Croyle, the former head of design at HTC. These people clearly know how to make smartphones, and the influence of HTC's chief designer has obviously benefited the device. The creators deliberately rejected metal and black body designs so that Robin would be bright and noticeable.

"Under the hood" everything is modern and fully in line with the latest technology trends. The device is powered by a Qualcomm Snapdragon 808 processor and is equipped with 3 GB of RAM, a 5.2-inch Full HD screen, a 13-megapixel main camera, a 5-megapixel front camera, a fingerprint scanner, a USB Type-C port, and a 2680 mAh battery.

Hot Topic: Amazon Offers to Transfer Data in Suitcases

But Robin's distinctive feature is its "cloud". Out of the box, Robin comes with 32 GB of storage. Modern gadgets usually let you extend storage by installing microSD cards, but the Nextbit developers decided to offload the task to the cloud instead. Each Robin user is allocated 100 GB of cloud space, and when that space runs out, Nextbit representatives promise to add more. Nextbit does not plan to charge for this; only additional space beyond a certain limit will be sold for a fee.

The smartphone will automatically move to the cloud not only photos and videos, but also applications and games the user accesses only rarely. The icons of such apps turn gray. When the application is needed again, one tap on the icon restores it from the cloud with all settings and personal data intact. A dedicated LED on the case blinks while the device is working with the cloud.

Of course, the user can customize Robin: lift all the limits or exclude specific applications from the cloud.

The Kickstarter campaign continues. For the first thousand backers the device costs $299, versus $399 at retail later.

Amazon Web Services Cloud - Amazon Offers to Transfer Data in Suitcases

Paradoxically, these days it is often faster to send information by courier on physical media than to wait several days for an upload when moving data from one data center to another, or simply when uploading it to the cloud. Amazon is familiar with this problem and offers a solution.

Amazon has for many years offered its clients its own AWS Import/Export service: if a company has problems with its Internet connection, data can be delivered to the Amazon cloud physically rather than over the network.

Skype Infrastructure to Windows Azure Cloud

Now the company has introduced Snowball, a strange-looking but very handy case for transporting up to 50 TB of data. The device is aimed at moving information into the Amazon Web Services cloud, so only Amazon customers, albeit large ones, will appreciate its benefits.

Snowball is a secure case weighing 22.6 kilograms that holds up to 50 TB of data. The odd-looking "suitcase" withstands impacts, bad weather, and 6G shocks, resists tampering, and is equipped with everything needed, including a 10-gigabit network card and an E-Ink display. If you wish, you can even send a Snowball through the regular mail: hardly anything will happen to it on the way.

Customers can order a Snowball through the AWS Import/Export service; each job costs $200. Ten days are allotted for transferring the data; beyond that period, $15 per day is charged. Amazon does not charge its customers for loading data into the cloud, but exporting information back out costs $0.03 per GB.
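As a rough illustration of the pricing above, here is a small shell sketch that estimates the cost of a hypothetical export job (the job size and the number of days on site are made-up values, not figures from Amazon):

```shell
# Estimated cost of a hypothetical Snowball export job, using the
# figures from the article: $200 per job, $15/day after 10 free days,
# $0.03 per GB exported.
base=200            # $ per job
days=13             # hypothetical: device kept 13 days
free_days=10
overage=$(( (days - free_days) * 15 ))
gb=50000            # hypothetical: a full 50 TB exported
export_cents=$(( gb * 3 ))              # $0.03/GB, computed in cents
total=$(( base + overage + export_cents / 100 ))
echo "estimated total: \$${total}"      # base 200 + overage 45 + export 1500
```

With these made-up numbers the estimate comes to $1,745; the point is simply that for a full device the per-gigabyte export fee dominates the fixed charges.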

Skype Infrastructure to Windows Azure Cloud

Microsoft continues to migrate its services to the Windows Azure platform. Next in the queue of "migrants" are the VoIP service Skype and the SkyDrive file hosting. And if corporate vice president Scott Guthrie is to be believed, the move of Skype is already a fait accompli.

Scott Guthrie said that Microsoft is trying hard to compete with Amazon. To that end it is building new data centers housing more than a million servers in total; capacity is expanded and functionality added every month. For example, just a few days ago the company announced the opening of a new Azure data center in São Paulo, Brazil. Ballmer said over the summer that Microsoft had managed to beat Amazon in server count, but in the cloud-computing market Amazon Web Services still has no equal. The Redmond company, however, is not giving up.

Scott Guthrie

"We have more regions than Amazon, and we have a presence in countries like China where they do not," said Scott Guthrie. "We have globally replicated storage: if your data is stored in Northern Europe, you can configure automatic backup to a data center in Western Europe." The latter, though, is a paid option rather than a default setting; reserved instances of this type are called "read-only secondaries".

Microsoft itself is the main user of its own cloud service, and the transfer of Skype's infrastructure confirms it. As is known, part of the supernodes have been hosted on Microsoft servers since 2012. In September of this year Microsoft announced that 50 billion minutes of Skype-to-Skype calls pass through Azure every day.

Read More: iPhone Quietly Stores All Call Logs to iCloud

Company representatives declined to clarify which specific parts of the Skype infrastructure have been moved to the cloud, or in what environment, i.e. on what software platform, the supernodes run. Earlier, information circulated that they were installed on servers running a Linux system hardened with grsecurity.
Skype-to-Skype Calls Pass Through the Azure Cloud

So far, the main problem with Azure is stability. Over the course of 2013 there have been two major outages of the cloud service, including one during the launch of the Xbox One. The SkyDrive hosting and mail services have also gone offline this year.

iPhone Quietly Stores All Call Logs to iCloud

Elcomsoft specialist and regular "Hacker" contributor Oleg Afonin has drawn public attention to a problem some iPhone users are familiar with. The fact is that iCloud sync can be a double-edged sword, and many users do not even know it: all data about a user's calls is automatically synchronized with iCloud, where, among other things, it can be retrieved at the request of law enforcement. In theory, third parties could also gain access to it.

"On devices with iOS 8 and above, your personal data is protected by your passcode. Apple cannot retrieve data at the request of law-enforcement agencies from any device running iOS 8 or later, since all the files on it are protected by an encryption key tied to the user's passcode, which Apple does not have," explains Afonin.

The same cannot be said about iCloud, however: that level of protection does not extend to the cloud. According to the expert, cloud synchronization is a real gift for forensic experts and law-enforcement agencies, because thanks to iCloud they can reach information that would otherwise simply be unattainable.

Google Lifted the Veil of Secrecy Over Its Infrastructure

"The ability to pull call logs from the cloud, rather than having to deal with the complicated hardware protection of today's iPhones, is a real boon for forensics," says the researcher.

For users, in turn, it can become a nightmare. The problem is not only that data in the cloud is much easier to access, but also that all synchronized data (including call logs, FaceTime calls, and data from VoIP applications such as WhatsApp, Skype, Viber, and Facebook Messenger) is visible on every device signed in to the same Apple ID, which is especially relevant for families. Quite valuable metadata ends up on Apple's servers as a result: phone numbers, call dates and durations, information about missed calls, and so on.
Elcomsoft Phone Viewer

The researcher says the data is synchronized without the user's knowledge, virtually in real time, as soon as the device can reach iCloud. Moreover, the feature cannot be disabled selectively: you cannot opt out of syncing just the call logs while continuing to use iCloud. The only way out is to turn iCloud off entirely (family members can also use separate accounts).

Afonin writes that, despite Apple's defenses, the Elcomsoft Phone Breaker tool can extract this data from the cloud: knowing the user's Apple ID and iCloud password is enough. An authentication token taken from the computer of the victim or suspect will also do, which suffices for law enforcement and hackers alike. Elcomsoft representatives told Forbes that they had first managed to extract this information from the cloud more than four months ago.

Apple reacted calmly to the researchers' warnings and said there is nothing to worry about. According to Apple representatives, data in the cloud is protected no worse than on the device, since access to it still requires the Apple ID and password. The company advised users to create strong passwords and enable two-factor authentication.

Google Lifted the Veil of Secrecy Over Its Infrastructure

Google Security System - Google Lifted the Veil of Secrecy Over Its Infrastructure

Companies usually prefer to keep the security features of their infrastructure secret, including how their data centers are protected, believing that disclosing such information could give an attacker an advantage. Google's representatives see the issue differently, for two reasons. First, publishing such reports lets potential users of the Google Cloud Platform (GCP) evaluate the safety of its services. Second, Google's experts are confident in their security systems.

The company recently released the document "Infrastructure Security Design Overview", in which Google describes its defense mechanisms in some detail. The infrastructure is presented as six layers, starting with physical safeguards and ending with the deployment of services and user identities.

The first layer of protection is physical security: systems that simply do not let outsiders into the data centers. This part of the report reads like an excerpt from a "Mission: Impossible" script: "We use multiple layers of physical security to protect our data centers. We use technologies such as biometric identification, metal detectors, cameras, vehicle barriers, and laser-based intrusion detection systems".

Interesting Fact: Cradle of the Clouds

The next level of protection is the hardware. According to the document, Google does not allow obsolete equipment in its data centers at all. Moreover, the company uses custom hardware from manufacturers that have been vetted in advance and validated by thorough audits. Google also builds its own hardware security tools: "We also design custom chips, including a hardware security chip that is currently deployed on our servers and peripherals. These chips allow us to identify and authenticate legitimate Google devices at the hardware level."

The third level of protection is cryptography: authentication and authorization systems that protect communication between Google services (whether or not they are located in the same data center, all traffic is encrypted, both internal and external). "Google server machines use a variety of technologies to make sure they are running the correct software stack. We use cryptographic signatures over low-level components such as the BIOS, bootloader, kernel, and base OS image. These signatures are validated during each boot or update. All components are designed and fully controlled by Google."
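The signed-boot idea in that quote can be illustrated generically. The sketch below is not Google's actual tooling; it simply uses openssl to show what "cryptographic signatures validated during each boot" means: a component is signed with a private key at build time, and the boot path refuses anything whose signature does not verify against the trusted public key.

```shell
# Generic illustration (NOT Google's stack): sign a "boot component"
# and verify the signature before trusting it.
openssl genpkey -algorithm RSA -out key.pem 2>/dev/null   # build-time private key
openssl pkey -in key.pem -pubout -out pub.pem             # public key the verifier trusts
echo "pretend this is a kernel image" > kernel.img        # stand-in component
openssl dgst -sha256 -sign key.pem -out kernel.img.sig kernel.img
# At "boot": verify before handing control over; prints "Verified OK" on success.
openssl dgst -sha256 -verify pub.pem -signature kernel.img.sig kernel.img
```

If even one byte of kernel.img changes, the final verification fails, which is exactly the property a verified boot chain relies on.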

Google also pays special attention to protecting its drives: the system is designed to make life as hard as possible for potentially malicious firmware and to keep it away from the data. "We use hardware encryption on our hard drives and SSDs and closely track each drive through its lifecycle. Before a decommissioned encrypted storage device can physically leave our custody, it goes through a multi-step cleaning process that includes two independent verifications. Devices that fail this wiping procedure are physically destroyed (shredded) on the premises."

The document also describes the measures Google uses to protect its source code and find bugs in it. Code reviews are divided into manual and automated checks; manual checks are done by "a team that includes experts in web security, cryptography, and operating system security". Such reviews often give birth to new fuzzers and security libraries that are then used in other products.

The source code itself is also handled with great responsibility: "Google's source code is stored in a central repository where both current and past versions of a service can be audited. The infrastructure can additionally be configured to require that a service's binaries be built from specific reviewed, checked-in, and tested source code. Such code reviews require inspection and approval from at least one engineer other than the author, and the system requires that modifications to any system be approved by that system's owners. These requirements limit the ability of an insider or adversary to make malicious changes to the source code, and also create a forensic trail from a service back to its source."

There is plenty more of interest in the document. For example, it turns out that virtual machines in the Google cloud run on a customized version of the KVM hypervisor. Google's developers even boasted that Google employees have found the most CVEs and bugs in the Linux KVM hypervisor.

Cradle of the Clouds

A Step-by-Step Guide to Deploying an IaaS Service with OpenNebula

Today, as equipment becomes physically and morally obsolete, more and more organizations start thinking about the benefits of virtualization and of maintaining two or three servers instead of a whole fleet. So we had to identify the most suitable solution for moving some of our systems into the virtual world. After research and experimentation, we chose the OpenNebula project.

About OpenNebula

The choice of virtualization technologies and solutions today is very wide, and every hypervisor vendor offers dozens of add-ons extending the basic functionality. Making sense of all this diversity and picking the most convenient and capable software is therefore not easy.
Cradle of the Clouds

After exploring the possibilities of open-source platforms such as Eucalyptus, OpenStack, CloudStack, and OpenNebula, we decided to settle on the last one, and here is why. Eucalyptus settings are not always clear and require constant reading of the documentation, so the probability of making a mistake is quite high. OpenStack, which is gaining momentum and is somewhat easier to set up, only sorted out its licensing by the summer of 2012 (now exclusively the Apache License), but the project did not leave the impression of a cohesive solution, probably because, by design, any element can be replaced by another. The result is a nice puzzle that can be assembled, but you will have to work to make it run as required. The fourth version of CloudStack, developed under the wing of Apache, seemed very interesting: installation is relatively simple and the configuration process quite logical. On closer acquaintance, however, it turned out that some settings are not available in the CloudStack Management Server interface, while changes made with the hypervisor's native tools are not always interpreted correctly. And so we came to OpenNebula, which impressed us as moderately simple, logical, and functional.

Read More About:

Amazon Increases Data Security in The Cloud

The project started in 2005 as research and development; it is supported by the community and has been distributed as open source (Apache License) from the beginning. Development is sponsored by a large number of organizations, including CERN, FermiLab, China Mobile, and the European Space Agency. OpenNebula is an open, extensible data-center automation platform that can be deployed on existing servers to build a public, private, or hybrid IaaS functionally similar to Amazon EC2. Different hypervisors can be used simultaneously on the physical servers of a cluster (currently Xen, KVM, and VMware), and any OS supported by those hypervisors will run as a guest. There is an interface to Amazon EC2, and the supported APIs include EC2 Query, OGF OCCI, vCloud, and OpenNebula's own. The modular architecture makes it possible to integrate OpenNebula with any virtualization platform, storage, or management tool. All the typical cloud technologies and features are supported, including live migration (a virtual machine can easily be moved to another server).

The OpenNebula core is written in C++, and the management tools in Ruby and shell. All releases are named after stellar nebulae.

OpenNebula Infrastructure

An OpenNebula infrastructure is managed from a management server, the so-called frontend, which can run on Linux or OS X. Communication between the frontend and the cloud cluster nodes goes over SSH. OpenNebula stores its settings in a MySQL or SQLite database. It implements disk-image management, hot-plugging, a template repository, management of the full VM lifecycle (creation, cloning, and so on), and accounts (users, groups, roles). The image storage subsystem supports several SAN and NAS systems, and images are accessible from any node in the cluster over SSH, NFS, SFTP, HTTP, GlusterFS, Lustre, or iSCSI/LVM. Virtual networks are created in the Virtual Network Manager, which provides the right level of abstraction and isolation; several technologies are supported: dummy, iptables, ebtables, Open vSwitch, 802.1Q VLAN, and VMware.

Several levels of abstraction make it easier to manage accounts and virtual resources. Physical servers are grouped into clusters, across which load can be distributed and balanced. Several OpenNebula installations can be combined into zones (oZones), accessed through a virtual data center that contains its own set of resources and accounts. There is also the concept of groups, each with individual settings and a set of available resources that do not overlap with others.

An OpenNebula cloud can be shared by several organizations or groups of users, with delegation of authority and quotas. Several types of accounts with different privileges are supported. All this allows building a flexible, manageable infrastructure in which everyone has access only to the resources and management functions allotted to them.

Administration of the physical and virtual systems is done with command-line utilities (all starting with one*: onevm, onehost, oneuser, and so on) and two web interfaces: OpenNebula Sunstone (administration of the cloud environment) and OpenNebula Zones (zone control).

The web console lets you deploy VMs, connect to them over VNC, and manage storage, images, and networking. The developers also offer the OpenNebula Marketplace service, which makes it easy to install pre-configured virtual environments prepared by the project. For server monitoring, the Ganglia system is integrated into Sunstone. OpenNebula's capabilities can also be extended with modules and add-ons; for example, OpenNebulaApps is another layer that lets you build a PaaS (Platform as a Service) on top of OpenNebula.

Ready-made virtual machine images are available in the OpenNebula Marketplace

Regular users manage their systems through the OpenNebula Self-Service web portal or with the console commands (occi-*), which are essentially front-ends to the OCCI (Open Cloud Computing Interface) API. The web console is currently only partially localized, although this causes no problems in use.


The OpenNebula software itself is installed only on the management server.

OpenNebula Installation on Ubuntu 12.04 LTS

At the time of writing, the latest version is 3.8.3 (Twin Jet), released in January 2013. This release brings new VMware drivers, various improvements to the EC2 and OCCI interfaces, VM state management, the KVM hypervisor, and more. The project offers a set of Cloud Sandbox virtual machine images that let you quickly deploy the management server, evaluate it, and get access to a demo cloud. Packages are prepared for the x64 versions of Ubuntu, Debian, openSUSE, and RHEL/CentOS; on other Linux distributions you can install from source.

The required packages are available in the distributions' official repositories, but as a rule their versions lag far behind. For example, this is the current situation in Ubuntu:

$ sudo apt-cache show opennebula | grep -i version
Version: 3.2.1-2
Deploying the management server is not confusing, and with due care there are no surprises. Installing Ubuntu itself and the hypervisors will not be described here; those topics have already been well covered.

OpenNebula stores images in the /var/lib/one/images directory. It is better to give it a separate partition or drive, so you do not have to worry about free space later. The names of all nodes must be resolvable through DNS.
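If you do dedicate a partition to the image store, the mount might look like this (the device name is a hypothetical example; adjust it to your own disk layout):

```
# /etc/fstab -- hypothetical example of a dedicated image-store partition
/dev/sdb1  /var/lib/one/images  ext4  defaults  0  2
```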

For network management on the hosts we need a bridge. Install the bridge-utils package and the OpenSSH server, which is required for remote control:

$ sudo apt-get install bridge-utils openssh-server
Now configure the bridge:

$ sudo nano /etc/network/interfaces
...
auto br0
iface br0 inet static
    address <...>
    netmask <...>
    network <...>
    broadcast <...>
    gateway <...>
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off
If there are multiple network cards, the settings for each are similar. Restart the networking service and check:

$ sudo service networking restart
$ brctl show
br0  8000.000c2959428e  no  eth0
...
Install the packages OpenNebula needs:

$ sudo apt-get install build-essential cgroup-lite cracklib-runtime curl dpkg-dev ebtables g++ g++-4.6 libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl libapparmor1 libcrack2 libdpkg-perl libmysqlclient18 libnuma1 libpq5 libreadline5 libruby1.8 libstdc++6-4.6-dev libvirt-bin libvirt0 libxenstore3.0 libxml2-utils libxmlrpc-c++4 libxmlrpc-core-c3 mysql-common ruby ruby-daemons ruby-eventmachine ruby-json ruby-mysql ruby-nokogiri ruby-password ruby-pg ruby-rack ruby-sequel ruby-sequel-pg ruby-sinatra ruby-sqlite3 ruby-termios ruby-tilt ruby1.8 ruby1.8-dev rubygems thin thin1.8
The project is actively developed, and each new version pulls in new dependencies, so this list is not final and changes constantly. You can also go the other way: install the OpenNebula packages first, and then run "apt-get install -f".

Set up NFS, which is used to distribute images and settings:

$ sudo nano /etc/exports
/var/lib/one <client-subnet>(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)
Do not forget to restart the server:

$ sudo service nfs-kernel-server start
Now we are ready to install OpenNebula itself. Go to the project's download page, select and download the package for Ubuntu 12.04. The archive contains several deb packages (previously it was a single file); install them all:

$ tar xzvf Ubuntu-12.04-opennebula-3.8.3.tar.gz
$ cd opennebula-3.8.3
$ sudo dpkg -i *.deb
We also need a MySQL database:

$ sudo apt-get install mysql-server
$ mysql -u root -p
mysql> CREATE USER 'oneadmin'@'localhost' IDENTIFIED BY 'oneadmin';
mysql> CREATE DATABASE opennebula;
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'oneadmin';
mysql> quit;
The server settings are stored in /etc/one/oned.conf. It contains many options, but for now we are only interested in the database connection:

$ sudo nano /etc/one/oned.conf

DB = [ backend = "mysql",
       server  = "localhost",
       port    = 3306,
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]
Setting up the environment
During installation of the deb packages, the oneadmin account and group are created; the OpenNebula processes run under this account, and some console commands must also be issued as oneadmin. On the management server it is convenient to set a password for it ("sudo passwd oneadmin"); on the hosts you can simply use "sudo -u oneadmin".

A similar account needs to be created on the rest of the hosts, and the IDs must match everywhere. Check:

$ id oneadmin
uid=117(oneadmin) gid=111(cloud) groups=111(cloud),129(libvirtd),130(kvm)
On each remote server, create the group and the oneadmin account, generate a key, and accept the default settings:

$ sudo groupadd -g 111 oneadmin
$ sudo useradd -u 117 -m oneadmin -d /var/lib/one/ -s /bin/bash -g oneadmin
$ sudo -u oneadmin ssh-keygen
So that the management server can connect to the hosts over SSH without a password, we copy the keys and create ~/.ssh/config:

$ su oneadmin
$ cat ~/.ssh/ >> ~/.ssh/authorized_keys
$ nano ~/.ssh/config
Host *
    StrictHostKeyChecking no
Then copy the /var/lib/one/.ssh directory to each node and connect to verify that no password is requested. The authorization files must live in the ~/.one subdirectory, which has to be created and filled in manually:
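The copy step above can be sketched like this (the node names are hypothetical placeholders, and the commands are echoed rather than executed; drop the echo to actually run them):

```shell
# Distribute the oneadmin SSH directory to every node, then check that
# a passwordless login works. Node names below are placeholders.
NODES="node1 node2 node3"
for n in $NODES; do
    echo rsync -a /var/lib/one/.ssh/ "oneadmin@$n:/var/lib/one/.ssh/"
    echo ssh "oneadmin@$n" true   # must succeed without a password prompt
done
```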

$ mkdir ~/.one
The password is stored in clear text in the ~/.one/one_auth file:

$ echo "oneadmin:p@ssw0rd" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth
A password is generated automatically during package installation; it needs to be redefined with the oneuser utility:

$ oneuser passwd 0 p@ssw0rd
The argument "0" specifies the user ID; for oneadmin it equals 0. The settings are done; restart the server:

$ su oneadmin
$ one stop
$ one start
To make sure authentication is configured correctly, run any console command: if the output contains no errors, everything was done right. For example, list the users:

$ oneuser list
  ID NAME         GROUP    AUTH     VMS MEMORY CPU
   0 oneadmin     oneadmin core       -      -   -
   1 serveradmin  oneadmin server_c   -      -   -
In case of problems, look at what the log says: "cat /var/log/one/oned.log" shows the settings loaded from the config and reports which components started. The command "netstat -ant" should also show that port 2633 is open.

Setting Up the Sunstone and Self-Service Web Consoles

As already mentioned, all operations can be performed with the command-line tools or from the web console; the latter is more visual and easier to use. Sunstone is configured in /etc/one/sunstone-server.conf. By default it listens only on localhost, so we fix that:

$ sudo nano /etc/one/sunstone-server.conf

:host:
:port: 9869
The rest can be left untouched for now. Run:

$ su oneadmin
$ sunstone-server start
As the config shows, Sunstone uses port 9869; open it in a browser and log in with the oneadmin credentials.

The Sunstone interface is simple, and the basic settings are easy to understand. Everything is divided into five groups:

  • Dashboard - general statistics;
  • System - setup user accounts, groups, and ACL;
  • Virtual Resources - virtual machine images and templates of images;
  • Infrastructure - setting nodes, a virtual network, storage systems and clusters;
  • Marketplace - access to the prepared virtual machine images.

By default, Sunstone accesses the Marketplace anonymously; if you already have a Marketplace account, its details should be entered in sunstone-server.conf.

Creating a virtual network in OpenNebula Sunstone
User access to VMs is provided by the OCCI service, configured in /etc/one/occi-server.conf. By default it, too, allows access only from localhost; change that:

$ sudo nano /etc/one/occi-server.conf

:host:
:port: 4567
The rest can be left alone. Run:

$ occi-server start
Opening the page in a browser gives access to the OpenNebula Self-Service web interface. Its capabilities are very similar to Sunstone's, just fewer and simpler. You can also manage your VMs with the occi-* toolset; for example, to get the list of instance presets:

$ occi-instance-type list
OpenNebula Self-Service is intended for regular users
The OpenNebula Zones service is activated in the same way; its settings live in /etc/one/ozones-server.conf, and an ozones database must additionally be created. Then run "ozones-server start". By default, connections are accepted on port 6121.

Virtualization vs Cloud

The term "cloud" is fashionable today. Despite the hype, many people see no fundamental difference between virtualization and the cloud and do not understand where one ends and the other begins. In essence, virtualization is a hypervisor plus a set of management tools; each vendor prefers to support only its own hypervisor, and its tools will not manage anyone else's. And that toolset is designed for an engineer, not for the average user.

A cloud-building platform adds another level of abstraction, freeing you from ties to the physical hardware and allowing different hypervisors within a single infrastructure. This approach simplifies the management of large server farms and lets resources be allocated as needed. Most importantly, ordinary users can take part in the process.

VM Placement Policies

When placing a new VM, the OpenNebula scheduler is guided by placement policies (Data Center Placement Policies) specified in /etc/one/sched.conf. The default packing policy (rank variable RUNNING_VMS) uses as few servers as possible, which minimizes fragmentation. The striping policy (-RUNNING_VMS) spreads VMs evenly across the available servers, giving each VM the maximum available resources. The load-aware policy places a VM on the server with the lowest load (FREE_CPU). Finally, the custom policy uses a computed weight (rank) that the administrator can define (by default RUNNING_VMS * 50 + FREE_CPU).
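As a sketch, selecting the custom policy and reproducing the default rank formula in sched.conf might look like this (the DEFAULT_SCHED syntax follows the OpenNebula 3.x documentation; treat the exact keys as an assumption to verify against your version):

```
# /etc/one/sched.conf -- choose the "custom" placement policy (3) and
# set the rank expression mentioned above
DEFAULT_SCHED = [
   policy = 3,
   rank   = "RUNNING_VMS * 50 + FREE_CPU"
]
```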

Setting VM placement policies in sched.conf


Getting acquainted with OpenNebula has shown it to be a simple and convenient tool for deploying an IaaS service, with the ability to delegate powers to different users. Each OpenNebula component is well documented, so if problems arise, finding the cause will not be difficult.
