Archive for the ‘Cloud Hosting’ Category

Know the Right Difference between Cloud & Virtualization

January 24th, 2017

Virtualization and cloud computing go hand in hand, and many IT decision makers confuse the two terms or use them interchangeably. Although the technologies are related, they are not the same thing, and the difference is substantial enough to affect your business decisions. Let's understand the scope of these two technologies and their benefits.

How is cloud computing different from virtualization?

Virtualization is the process of creating a virtual environment. It separates physical infrastructure in order to create multiple dedicated resources, and it enables a user to run several operating systems simultaneously on one computer. Virtualization is the vital technology powering cloud computing, but virtualization is not cloud computing. Cloud computing is the on-demand delivery of shared resources over the internet, assembled on top of a virtualized infrastructure with automated control of the compute, network and storage components.

System virtualization means creating multiple virtual systems on a single physical system. It is usually implemented with a hypervisor, a firmware or software component that can virtualize system resources. Hypervisors such as VMware ESXi, Hyper-V and Xen are used to establish virtualization within the cloud.

The main difference between the two concepts is that virtualization manipulates the hardware, whereas cloud computing is the service derived from that manipulation. The two are often used together to deliver services, especially when building a private cloud infrastructure, although most small enterprises deploy each technology separately to achieve quantifiable benefits. Either way, your capital expenditure on equipment is drastically cut and you get the maximum benefit out of it. In other words, cloud delivers the elasticity, self-service, pay-as-you-go billing and scalability that are not innate in virtualization. Virtualization can exist without cloud computing, but cloud computing cannot happen without virtualization.

Advantages of virtualization and cloud computing

With virtualization, fewer servers are bought and maintained, and each server's capacity is utilized far better than that of non-virtualized servers. Each virtual machine runs its own OS and the enterprise applications your business requires. Cloud computing, by contrast, is accessed via the internet rather than deployed on the organization's own network. One can choose from different cloud-based solutions and providers to meet business requirements, and enterprise-grade applications such as CRM, hosted voice over IP and off-site storage can be deployed in the cloud at a cost that easily fits the budgets of small businesses.

Virtualization makes it possible to deploy less hardware to perform the same amount of work, which improves the efficiency of the physical infrastructure. With cloud, operational or storage capacity can be changed on demand, giving you flexibility that suits your situation. Scalability is part and parcel of cloud deployments: cloud instances are provisioned automatically as and when required, making business cloud hosting a strong fit for enterprises.

Because virtualization consolidates workloads onto fewer physical machines in the data center, a purely virtualized environment ultimately has less redundancy. With cloud, one can have virtually unlimited storage capacity, so you no longer have to deploy extra devices to increase storage space.

Virtualization minimizes downtime during maintenance windows: changes can be made on one server without affecting others, so maintenance can be performed without disruption. Cloud backup and recovery has made backing up and restoring data much easier and simpler, and the cloud offers flexible backup and recovery solutions.

With virtualization, virtual machines can meet an organization's security requirements by replicating the level of device privacy and resource isolation that comes with hard-wired devices. Cloud computing, in turn, makes deployment easy, with an entire system becoming fully functional within a couple of minutes.

Which one is better?

Once you have a complete understanding of both terms, the next step is identifying which one better suits your business needs.

If you wish to outsource IT, cloud will be the best solution for you: it frees internal IT resources to support higher-value work and lets you invest your IT budget in activities that advance your business.

If you want to reduce the number of appliances and servers and prefer one solution for all your needs, go with cloud; such a deployment can even eliminate perpetual software licenses. Likewise, if you are looking for a flexible and scalable option, cloud will be your best friend: IT capacity can be scaled temporarily by off-loading peak compute demand to a third party, so you pay only for what you consume, and only when you need the resources.

Conclusion

Both cloud computing and virtualization operate on a one-to-many model. With virtualization, one computer can perform like many computers; with cloud computing, many enterprises can access one application. The difference lies in how your business deploys them. So, which one is best for you?

Transformation of Commerce with Internet of Things

December 26th, 2016

Even the most mundane technology is becoming smart, from fitness trackers to coffee makers, and this is just the beginning. The Internet of Things (IoT) is changing everything, from the way consumers order coffee filters to the way they manage home security. The very fabric of commerce is being revolutionized.

As mentioned earlier, this is just the beginning. IoT is still in its shell and will take time to emerge fully. Gartner predicts that even by 2018 only a small share of people will be on an IoT network, and in the coming years we can expect a multiplying number of products, devices and approaches to IoT. According to Cisco, there will be some 50 billion connected devices by the end of 2020.

As all this develops, the whole commerce sector is going to reinvent itself around the benefits of IoT, along with the increased adoption of cloud server hosting. But merchants and vendors need to consider certain things. Let's review them one by one.

Supply Replenishment

Supply replenishment is a natural extension of IoT. For example, a connected refrigerator can be programmed to keep at least half a gallon of milk in stock at the start of each week, measuring milk consumption as it goes. If the milk dips below half a gallon, the refrigerator adds a gallon of milk to the normal grocery order for Sunday delivery. This kind of replenishment is only the beginning, and more is yet to come: in the years ahead you will not have to run to the grocery store for more toothpaste when you run out.
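
In practice such a rule is little more than a threshold check. Here is a minimal sketch, where the stock reading and the order structure are made-up placeholders rather than any vendor's API:

    # Threshold-based replenishment rule, like the milk example above.
    # The stock reading and the order dictionary are illustrative placeholders.
    MILK_THRESHOLD_GAL = 0.5   # reorder when stock dips below half a gallon
    MILK_REORDER_GAL = 1.0     # quantity to add to the weekly order

    def check_replenishment(current_stock_gal, weekly_order):
        # Add milk to the Sunday grocery order if stock is below the threshold.
        if current_stock_gal < MILK_THRESHOLD_GAL:
            weekly_order["milk_gal"] = weekly_order.get("milk_gal", 0) + MILK_REORDER_GAL
        return weekly_order

    print(check_replenishment(current_stock_gal=0.3, weekly_order={}))  # {'milk_gal': 1.0}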

Purchasing Development

Purchasing development goes one step beyond supply replenishment. Here, a whole host of choices about consumables can be optimized from the consumer's point of view, from the size of items to the quantities purchased. For example, an internet-connected kitchen device could tell a consumer that, instead of a 16-oz bottle of olive oil, a 32-oz bottle would save money and still be used up before it goes bad.

IoT will carry out the kind of purchase optimization most of us are aware of but can never spare the time to focus on. Smart products will be able to optimize sizing and price and identify the best places to buy products. Moreover, IoT will itself have to keep evolving over time to cope with changing technology and make smart decisions.

Product Procurement

Product procurement will let products make shopping decisions themselves, deciding with minimal human intervention which other products to buy. Not every individual will have the tolerance for products making purchase and service decisions on their behalf, but with IoT it becomes possible.

IoT may be nascent now, but within a few years connected smart products will become more sophisticated, and more and more investment will flow into connected products in professional as well as personal life. To cope with this rapid change, product information systems should evolve to meet the expectations and demands of consumers who want the best products and services for their requirements; otherwise, those consumers will be left with an impaired system and fall behind.

IoT is set to become a revolution, changing consumers' purchasing patterns to the point where a single reminder is enough to optimize energy consumption at home.

Witness the Revolution of Cloud Architecture

December 23rd, 2016

Cloud is no longer the cherry on the cake; it is the whole cake, and more and more organizations want a piece of it. Cloud is not an evolution of existing technology but an actual revolution in the world of technology, and revolutions change perceptions and redefine meanings. Though cloud has been evolving for years, it is only today that organizations understand the real value of deploying it. With the many advancements made in cloud technologies, cloud architecture itself has evolved.

With cloud computing, anyone can develop a service with only a small initial investment, but the cloud architecture must also scale accordingly. As technology and cloud server hosting change, it is necessary to keep pace with those changes and adopt new technology to survive in the race. Let's see how the architecture of the cloud has evolved over time.

Commodity Hardware instead of high-end Hardware

This has changed a lot over time. Some of the largest cloud service providers now use commodity hardware, which is much more prone to breakdowns than the hardware in traditional environments. That does not prevent you from using cloud for company applications with high availability and high performance needs, but it does mean the cloud architecture must be restructured around different resiliency and distribution criteria.

Dynamic Scaling

Scalability is one of the essential characteristics of an excellent architecture. In a traditional environment, scaling an architecture is an altogether different exercise: you need to take into consideration budget allocation, planning, system reconfiguration, hardware purchases and so on. Very static architectures are implemented over a long period and sized for the maximum load the service or application must be able to support.

When we talk about cloud, the resources used are dematerialized and requested on demand. Hardware resources are managed through a high level of abstraction via an Application Programming Interface (API), so that provisioning and de-provisioning of resources can be automated. The architecture deployed should be flexible, adapt to demand and be able to expand as and when required.
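
As a minimal sketch of such automation (the cloud client and its provisioning calls are hypothetical placeholders, not a specific provider's SDK), dynamic scaling often boils down to a control loop like this:

    # Dynamic-scaling sketch driven by a provisioning API.
    # `cloud` and its methods are hypothetical placeholders, not a real SDK.
    import time

    SCALE_UP_CPU = 75.0    # average CPU % that triggers adding a server
    SCALE_DOWN_CPU = 25.0  # average CPU % that triggers removing a server
    MIN_SERVERS, MAX_SERVERS = 2, 20

    def autoscale(cloud, group="web"):
        while True:
            servers = cloud.list_servers(group=group)
            if servers:
                avg_cpu = sum(s.cpu_percent for s in servers) / len(servers)
                if avg_cpu > SCALE_UP_CPU and len(servers) < MAX_SERVERS:
                    cloud.provision_server(group=group)        # scale out
                elif avg_cpu < SCALE_DOWN_CPU and len(servers) > MIN_SERVERS:
                    cloud.deprovision_server(servers[-1].id)   # scale in
            time.sleep(60)  # re-evaluate every minute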

Resiliency in terms of Availability

Traditional architecture rests on the assumption that resources are always available; inaccessibility of resources is an anomaly, neither planned nor desirable. In the cloud, built as it is on commodity hardware, the question is not whether resources will become unavailable but when: transient failures come and go, and the architecture must be able to handle them. In the face of such uncertainty it should be programmed to respond gracefully and still complete the transaction; in short, it should be resilient. Progressive cloud computing architectures even deploy agents that deliberately inject failures into production environments to make sure applications respond properly. To gain resilience, a system of queues should be implemented so that the components of the application are loosely coupled and autonomous.
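
One common way to absorb transient failures is to retry the operation with exponential backoff. Here is a minimal sketch, where remote_call stands in for any flaky dependency:

    # Retry-with-exponential-backoff sketch for transient failures.
    # `remote_call` is a placeholder for any operation that may fail transiently.
    import random
    import time

    def call_with_retries(remote_call, max_attempts=5, base_delay=0.5):
        for attempt in range(1, max_attempts + 1):
            try:
                return remote_call()
            except (ConnectionError, TimeoutError):
                if attempt == max_attempts:
                    raise  # give up after the last attempt
                # exponential backoff with jitter to avoid thundering herds
                delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
                time.sleep(delay)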

Distribution and Decomposition of Performance

Cloud architectures are sensitive to some inherent performance constraints: commodity hardware, resource sharing and the non-proximity of resources. Methods that used to relieve pressure on resources, such as increasing the bandwidth, the number of IOPS, the size of the RAM or the processor speed, are no longer effective; scaling up in the cloud is very limited and sometimes unmanageable. Instead of focusing on strengthening a single unit of the architecture, due importance should be given to decomposing the architecture into multiple modules distributed across various nodes.

In traditional architecture, a system was considered scalable if, and only if, performance remained stable as nodes and requests increased; scalability was treated as a function of performance. Today the ratio is reversed: you scale out in order to achieve better performance. To do so, the computational cost must be subdivided. This approach applies to the application layer as well as the data layer, where sharding distributes the database so that each node manages a specific portion of the data. If cache systems are used wisely at all levels, the performance of the architecture improves further still.
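
As an illustrative sketch of subdividing data-layer load (the shard hosts and the cache are simplified placeholders, not any particular database's mechanism):

    # Hash-based shard routing with a small read-through cache, as a sketch of
    # splitting data-layer load across nodes. Shard names are placeholders.
    import hashlib

    SHARDS = ["db-node-0", "db-node-1", "db-node-2", "db-node-3"]
    cache = {}  # stand-in for a real cache tier such as Redis or Memcached

    def shard_for(key: str) -> str:
        digest = hashlib.sha256(key.encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    def get_record(key: str, fetch_from_db):
        if key in cache:                   # serve hot keys from the cache
            return cache[key]
        node = shard_for(key)              # only one node owns this key
        record = fetch_from_db(node, key)  # placeholder database lookup
        cache[key] = record
        return record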

MTTR instead of MTBF in Reliability

Traditionally, the reliability of an architecture is measured by MTBF, the mean time between failures: the longer the period between failures, the more reliable the system. With commodity servers, long failure-free periods cannot be guaranteed, so the concept has to be revisited by connecting reliability to resiliency. A cloud architecture is reliable when it achieves a low MTTR, the mean time to repair. If MTTR is zero or close to zero, the organization effectively has a reliable architecture, because it is resilient to any failure.
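
The standard availability relation for a repairable system makes the point concrete:

    A = MTBF / (MTBF + MTTR)

    Example: MTBF = 200 hours, MTTR = 0.1 hours
             A = 200 / 200.1 ≈ 99.95%

Driving MTTR toward zero pushes availability toward 100% even when MTBF is modest, which is exactly the trade-off a commodity-hardware cloud makes.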

Capacity Planning in terms of Scale Unit Planning

In traditional architecture, capacity planning estimates the sizing needed to handle the maximum load, so that the system can cope proactively even with worst-case situations; as a result, resources sit wasted for most of the system's life cycle. In cloud architecture, by contrast, capacity planning defines the scale unit, the increment by which the system scales up and down with the load, ideally automatically.

Fundamental architectural concepts, namely capacity planning, scalability, reliability and performance, are being profoundly revised. We have seen how cloud architectures have diverged from everything established in the past, because responding to changing scenarios makes this inevitable. The two golden rules of a properly designed cloud architecture are "Enable Scaling" and "Expect Failure". If the cloud architecture is well designed and implemented, half the battle is already won.

DevOps Evaluation for Your Organization

December 20th, 2016

Today we live in a world surrounded by cloud and DevOps. Implementing DevOps is not a simple task. Before implementing it, an assessment should be carried out that clearly indicates whether the organization will benefit from the change. Even modest incremental improvements from DevOps mean the organization can test new ideas and take on more risk.

DevOps has been successful in speeding up software development and restructuring the involvement of quality assurance and operations teams. Moreover, it imposes demands that every business must be able to address. Go through the steps below to undertake an accurate DevOps evaluation.

Does DevOps add value to the business?

Whatever you implement, make sure the long-term returns are considered. Traditional software development takes considerable time for coding and testing before anything is ready for distribution, and oversights, defects in a release and competitive pressure all adversely affect the return on investment.

DevOps changes the complete picture of software development by adopting shorter, smaller development cycles. Operations, quality assurance staff and developers work continuously on the product's release pipeline, and each release adds value to the product's features and functionality. By sidestepping a long, risky investment period, the product reaches the market faster.

Is IT flexible enough to support DevOps?

The IT organization must deploy every small software release: each new version is installed on one or more servers in the cloud or data center, together with the interlocking supporting storage, performance monitoring, databases and other resources. All these activities are core responsibilities of IT decision makers.

Provisioning and deploying a typical application through a traditional siloed process takes months: the IT organization determines the requisitions, the approval of new servers, the application requirements, acquiring the OS, installing any new system, acquiring software licenses and performing the deployment of the approved system. While all this is carried out, there is very little interaction with the developers.

Such rigid processes work well when IT handles only occasional software releases, but a new development cycle should not be the only reason for a move to DevOps. While assessing DevOps, also consider how the IT team functions if sustained as-is and how it will have to change: with DevOps, IT works closely with QA staff and developers in a different manner, providing computing, networking and storage resources at a much faster pace so that each release can be tested.

Is the enterprise big enough for DevOps?

DevOps functions effectively only when a continuous pipeline of development exists; any gaps in the pipeline leave employees idle. To deploy DevOps effectively, the enterprise should be large enough to support the tools and processes that make DevOps productive. Balancing project demands and staffing can initially be challenging for small enterprises, which often subcontract application development.

Large enterprises, which can hire and adjust staffing levels to meet project timeline pressures, are well placed to deploy and maintain a DevOps pipeline. Organizations with 250 to 1,000 employees are in a good position to adopt a DevOps process, and organizations with more than 1,000 employees have the scale to leverage a full DevOps strategy.

Is the organization clear about its DevOps strategy?

DevOps is not a single task, and you cannot deploy it successfully with a single piece of software; DevOps is an entire fabric of processes, tools and people. Effective DevOps requires skilled developers, persistent testers and expert operations staff, along with tools for collaboration, workflow and automation. The organization must have flexible, dynamic business processes that completely eliminate traditional silos and let multiple teams operate together. With all these elements in place, DevOps radically accelerates deployment cycles and the software development process, bringing in tangible benefits.

The method of deploying DevOps varies from company to company: tools, people and processes are adjusted to achieve the organization's goals. That means appointing developers who know DevOps cycles and workflows well, deploying a suite of tools that enhances collaboration between QA, IT and developers, and adopting business leadership that will drive the DevOps deployment.

Will the company commit to constant change?

Deploying DevOps is not a one-time activity. Organizations must not be resistant to change, because DevOps requires adapting to changes in the business environment, technological advancements and user expectations. After the initial DevOps step there will be more change: adopting a new development language, migrating to another collaboration platform or DevOps workflow, upgrading servers, moving to public cloud hosting or implementing a private cloud, and getting familiar with new business environments.

Is DevOps culture really important?

The three pillars of DevOps are process, technology and culture. Culture receives more and more focus because, once the culture evolves, it naturally supports the technology and processes that operate on top of it; for a successful DevOps implementation, culture is really important. Evolving an organization's culture is difficult, but it is worth it. Many organizations believe that DevOps is not actionable, that it is just automation, or that cultural change is not effective; these are all misconceptions about the term. DevOps is basically collaboration between the operations team and developers. It is not tied to any particular technology, process or culture, yet with DevOps all of these things can be improved.

A DevOps culture contributes significantly to the growth of the organization. It makes a noteworthy contribution to quality assurance in information systems, which links the operations, development and customer support teams with clients. It is also advantageous for the service management framework (SMF), as more and more services rely on collaboration between Dev and Ops members. Information system development is another area where major changes are seen, with the gap between operations, developers and consumers shrinking and problems being detected a little earlier.

In short, DevOps supports a culture of cooperation, encourages automation, enables sharing, promotes optimal use of services, improves quality assurance and helps solve issues related to standards and structures.

DevOps is an ongoing commitment that requires continual adjustment and optimization of resources. After assessing your organization against the steps above, proceed with deploying DevOps.

Four Phases of Cloud Security

November 16th, 2016


IaaS (Infrastructure as a Service) offers a compelling business model, which is why its adoption is increasing in all kinds of organizations. While implementing IaaS, whether as part of a hybrid cloud or a public cloud strategy, one should understand the security challenges that can be encountered; doing so helps overcome the hurdles of protecting cloud-based operations and business.

Along with the advantages that IaaS offers, it also brings a web of challenges, because the organization's resources are stored in shared public data centers that are remotely accessible over unsecured networks. Hackers increasingly use cloud instances to attempt malicious attacks, and a fact that can't be ignored is that without a concrete security infrastructure the cloud environment is vulnerable to any number of threats and attacks.

Many security measures and tools are available in the market, but security needs vary between organizations. There are four stages of cloud security, ranging from simple to complex, that every organization should work through. Let us review the challenges that organizations face at each stage of cloud complexity.

Stage 1 – Security Best Practice in Cloud

Organizations in the first stage of cloud complexity use IaaS on a fairly small scale in a single, simple data center arrangement. They deploy a cloud infrastructure strategy, gaining access to infrastructure that was earlier available only to large companies, in order to reduce computing costs and thereby improve efficiency. But such organizations often lack in-house IT security expertise and cannot address their security requirements on their own. These companies can adopt solutions that bundle security best practices (identity-based access policies, secure remote access, firewalls and so on), ideally offered as a service.

Stage 2 – Automate Security & Scale

This stage fits organizations that have fairly good experience in the cloud as well as good in-house security proficiency. Such organizations adopt an external security solution not because they lack knowledge or skills but because they need more than manual configuration: a manual configuration does not scale in the same stride as cloud computing power. In such situations, automating security helps the on-premise team cope with enormous and dynamic cloud usage.

The company should be able to respond when the number of virtual servers fluctuates. E-commerce portals are the best example: during holidays and seasonal peaks there are traffic spikes, and the company must be able to add servers without worrying about downtime. Doing such tasks manually invites human error, and in the worst case resources fall short at peak times, causing loss of sales and revenue. To protect the organization's network, automated security scaling should be deployed, irrespective of the number of servers in use at any specific time.
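
A minimal sketch of the idea (the cloud client and its calls are hypothetical placeholders, not any particular vendor's API): every instance the autoscaler launches receives the same baseline hardening before it serves traffic.

    # Apply one baseline security policy to every newly launched server,
    # however many the autoscaler adds. `cloud` and its methods are
    # hypothetical placeholders, not a specific vendor's API.
    BASELINE_FIREWALL = [
        {"port": 443, "source": "0.0.0.0/0"},   # public HTTPS only
        {"port": 22,  "source": "10.0.0.0/8"},  # SSH from the admin VPN only
    ]

    def harden_new_servers(cloud, group="web"):
        for server in cloud.list_servers(group=group):
            if not server.tags.get("hardened"):
                cloud.apply_firewall_rules(server.id, BASELINE_FIREWALL)
                cloud.enable_disk_encryption(server.id)
                cloud.tag_server(server.id, hardened="true")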

Secure remote access is another concern that organizations at this level have to deal with. If employees working remotely need access to cloud servers, requirements should be established to ensure that identities are verified, connections are secured and data in motion is not put at risk.

Stage 3 – Multi-location, Multi-Cloud Deployments

This stage of complexity comprises companies that are heavily dependent on the cloud. Such companies deploy several data centers, sometimes on more than one cloud or in a hybrid environment, to maximize the potential of the cloud. Securing multi-location environments while reaping the benefits of the cloud gives rise to another layer of complexity.

How can such enterprises securely connect cloud compute resources across more than one data center? In effective deployments, data inevitably travels across several data centers rather than staying wherever resources happen to be accessible. With data traveling everywhere, it is the organization's responsibility to secure its path. Even when automation is deployed, in scenarios this complex the risk of blunders is quite high.

Organizations must account for multiple locations and diverse infrastructures, each with its own security configuration, as data travels across them. A security layer needs to be implemented that defines and enforces network-wide policies across these different infrastructures. Organizations can then configure security in line with those network-wide policies across all their data centers and deployments, irrespective of the number and physical location of sites.

Stage 4 – Compliance with Security Regulations

The fourth level of complexity covers companies that must comply with external security regulations, such as PCI compliance for enterprises handling credit card information. It is necessary for such organizations to comply with these regulations: failure to do so can attract civil penalties, including exorbitant fines and even imprisonment.

Note that no public cloud service provider delivers a fully compliant infrastructure service on its own. At this stage, companies have to meet government rules and regulations by implementing additional security. Some of the required controls are:

  • File and configuration integrity checks (see the sketch after this list)
  • Encryption of data-in-motion
  • Complete access logs for servers holding sensitive data
  • Identity-based access management and control
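
Here is a minimal sketch of the first item, a file and configuration integrity check that compares SHA-256 hashes against a stored baseline (the watched file list is purely illustrative):

    # File/configuration integrity check: record SHA-256 hashes of sensitive
    # files once, then flag any file whose hash has changed since the baseline.
    import hashlib
    import json

    WATCHED = ["/etc/ssh/sshd_config", "/etc/passwd"]  # illustrative list

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def build_baseline(baseline_path="baseline.json"):
        with open(baseline_path, "w") as f:
            json.dump({p: sha256_of(p) for p in WATCHED}, f)

    def check_integrity(baseline_path="baseline.json"):
        with open(baseline_path) as f:
            baseline = json.load(f)
        return [p for p, digest in baseline.items() if sha256_of(p) != digest]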

Organizations in all four stages have to deploy this extra layer of security on top of the basic infrastructure, bridging the gap between in-house capabilities and network security needs. One important aspect of any security infrastructure is the capability to scale as and when the business expands. The first three stages can be treated as a natural evolution of growth and development, and the right security solution is one that evolves naturally through those three steps without disruption. Moreover, a security solution should allow a company to go beyond a single data center, enabling deployment across multiple data centers, various infrastructures and several clouds. In the end, the security features chosen should be in accordance with the level of data sensitivity and the regulations the company is subject to.

Cloud SLA: Points to Check!

October 14th, 2016


When you are shifting to cloud solutions, the cloud service level agreement (SLA) is very important. A service level agreement is a key component of a complete service level management (SLM) strategy: it bridges the gap between the customer's and the provider's expectations and acts as a good communication driver. An SLA is an agreement, not a contract, made between internal or external customers and a service provider, and it documents the services the provider will deliver; in effect, it is a promise made by the service provider. SLAs are part of most managed IT services, including cloud-based help desk software, cloud services, PaaS, SaaS, IaaS, DBaaS and so on.

Beyond the agreed service level, the SLA outlines what should be done when the cloud service provider cannot deliver the promised availability. A well-defined SLA ensures improved communication between the two parties, but arriving at one requires considering many points. Let's look at some of the checkpoints that go into a well-defined SLA.

Features in Cloud SLA

A good cloud service provider will have a clear, transparent SLA and will promise to provide the services mentioned in it. Not only will the provider promise to provide them, but those promises will be backed with penalties and incentives. Must-have features of an SLA are:

System availability: a commitment to 99% system availability or higher.

Disaster recovery: backups will be taken within 24 hours in case of any data center disaster.

Data ownership and integrity: you should be able to get your data out of the service provider's system if you decide not to continue with that provider.

Response time: the service provider should categorize issues and respond accordingly.

Escalation procedures: you should be given a clear escalation path for issues you feel need to be escalated.

Maintenance: the service provider should schedule maintenance at regular intervals and notify users in advance.

Product notifications: the service provider should give regular updates on new product releases and upgrades.

Level of service availability

Before arriving at a service level agreement, you should take into consideration the guarantees it makes and the level of service availability. Availability describes how frequently the service goes down and is usually expressed in terms of three, four or five nines; the more nines a provider commits to, the higher the cost. Along with availability, the business function being provided should also be considered, because what is covered varies with the cloud model (IaaS, PaaS, SaaS).

In IaaS, the service provider delivers data center infrastructure as a service: a fabric covering memory, processing, networking and so on, and availability means the provider is responsible for keeping that fabric running. In PaaS, the provider delivers the functionality of a platform as a service, and availability means the reachability and usability of the platform. In SaaS, the provider delivers the application and its data, and availability is expressed as uptime: 99% availability of an email service means you can access the email service 99% of the time.

Once you are clear about which service model you will use, the next step is to describe the availability, typically specified between 99% and 99.99%. Make sure that the availability and the price match your business requirements.
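
To make the nines concrete, the allowed downtime follows directly from the percentage; a quick back-of-the-envelope calculation:

    # Allowed downtime per year for common availability targets.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

    for availability in (99.0, 99.9, 99.99, 99.999):
        downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
        print(f"{availability}% -> {downtime_min:,.0f} minutes of downtime per year")
    # 99%    -> about 5,256 minutes (3.65 days)
    # 99.9%  -> about 526 minutes (8.8 hours)
    # 99.99% -> about 53 minutes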

Time frame

An SLA should also state when the agreement comes into force and when it expires; the time frame is an element found in ordinary contracts as well. The start date of the SLA marks the point from which IT performance is tracked, and when new services are introduced or existing services are renewed, enough time should be allowed to communicate the changes to users. Mismatched time frames can be costly: to keep costs low for your users you may sign an 18-month lease on equipment the customer uses, but if the SLA is only valid for 12 months, your customers will not pay an extra penny after 12 months and you will be left funding the remaining equipment lease. SLAs should also state the mean time to respond and the mean time to repair: a severity 1 critical issue should have a shorter response and repair time than a severity 3 issue.

Exceptions and Exemptions

Carefully analyze the limitations and exemptions section of the service level agreement. Here too, the exceptions vary with the cloud deployment model.

The most common exceptions of IaaS are:

  • The service provider is not responsible for what users do or fail to do with the servers.
  • The service provider is not responsible for unsupported operating system connections.
  • The service provider is not responsible for any external networks that may be deployed.

In PaaS, the exceptions are somewhat similar to IaaS: the service provider is responsible for providing the platform, not for what the user runs on top of it.

In SaaS, the service provider is responsible for providing the service as a whole, and hence the cloud service provider is more accountable than in IaaS or PaaS.

Some examples where the service provider will not be responsible are:

  • A hosted email platform is purchased and used to send spam or mass emails.
  • The infrastructure is set up to create or search rainbow tables.
  • The infrastructure is set up and deployed to stage or test attacks.

Reports on the implementation of SLA

Without setting criteria for evaluation, there is no objective means of determining performance. Performance should be evaluated on a monthly basis and poor performance recorded. Within the scope of every agreement, the service provider is required to present a service level agreement implementation report that contains the tangible figures for the activities conducted. Key performance indicators are set in the agreement and must be followed by the service provider during its tenure; the indicators are decided with the mutual consent of both parties. Be careful and cautious when deciding on the indicators or metrics used to measure performance.

It is important to note that an SLA should only be adopted when performance against the chosen metrics can be tangibly measured. Mechanisms should therefore be in place to capture the data needed to detect any breaches of the SLA. Reports can then be generated that act as a platform for discussion between the customer and the service provider. Reports should explain why an SLA was not met rather than focusing only on the solution to the problem, and report generation should be automated wherever possible.

The reporting period

Reports on the achievement and non-achievement of SLAs should be prepared periodically. The reporting period should not be so long that underperformance is averaged away, nor so short that it keeps the service provider and the customer buried in information. Common choices are a rolling eight weeks, a rolling four weeks, each accounting period or each accounting month; of these, rolling reports are preferred by almost everyone.

Reporting is helped by a service database holding all information associated with performance: if every problem and incident that caused a service outage is recorded there, SLA reports are easy to generate. The contents of the report vary with the type of SLA chosen. The simplest report gives information about service availability and overall uptime, split between normal office hours and out-of-hours time, and this much information is often adequate.

Calculations

Once the reports are prepared and the other necessary activities are carried out, the question becomes how to judge or measure performance against the SLA. The simplest way to calculate the service provider's overall achievement of its SLAs is to create a form of availability index.

(Product of actual availabilities for a specific period / Product of target availabilities for that period) X 1000

Eg. (99.3 x 98 x 99.1 x 94.5 / 99 x 99 x 98 x 95) X 1000

≈ 998.8

This data can be used in trend graphs, newsletters etc.
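
For anyone automating this, here is a quick sketch that reproduces the example above (an index of 1000 means the targets were exactly met):

    # Availability index: ratio of the product of actual availabilities to the
    # product of target availabilities, scaled by 1000.
    from math import prod

    actual = [99.3, 98.0, 99.1, 94.5]   # measured availability per period (%)
    target = [99.0, 99.0, 98.0, 95.0]   # SLA target per period (%)

    index = prod(actual) / prod(target) * 1000
    print(round(index, 1))  # ~998.8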

Implications

With the reports prepared and performance measured, what are the implications of an SLA? With an SLA in place you can construct your systems to meet it; if the SLA is not met, data loss and business impact are the obvious consequences. The indicators or metrics used to measure an SLA are critical for long-term success. A good SLA is important because it sets boundaries and expectations for customer commitments and establishes key performance indicators for customer service and for the internal organization.

Conclusion

SLAs are increasingly becoming an important part of the overall IT strategy. They help in measuring performance and give a concrete basis for building your systems and making optimum use of resources.

A Look At The Entirety of Cloud Computing: The Digital Cloud

March 12th, 2015


When an Information Technology specialist hears a customer mention eNlight Cloud Hosting, it certainly puts a smile on the IT expert's face. Obviously, the customer puts a lot of stock in the IT expert's knowledge and ability to understand their concerns. It is also the responsibility of the IT specialist to discuss the different platforms and help the customer make the right decision. Of course, some specialists will always recommend the cloud over anything else, regardless of the customer's specific situation. This is obviously not good for the customer, and it also hurts the credibility of IT experts all around.

The Hype of the Cloud

There is a lot of hype surrounding private cloud computing, since it tends to drive decisions in business meetings. An enterprise run by a single individual might find it easier to do business with a single cloud provider than to make a major investment in building its own IT infrastructure. Of course, business people aren't the only ones: ordinary individuals are also compelled by the overall idea of the cloud. Although it can definitely help businesses, individuals can benefit from this technology as well.

The Cloud Concept

When you really break it down, the idea can be explained as if it were a shuttle bus. A single individual could, of course, prefer to own the bus, hire each and every crew member and plan the bus's schedule. Although some individuals may be able to sustain this, it is obviously not ideal and would be ridiculed, largely because of how cheap the other options are. Wouldn't it be just as easy to hop on a bus with a cheaply priced ticket? The big shuttle bus corporations have made the process simpler for the customer by handling everything on their own, including managing expensive buses and overseeing the activity of their workers. With this possibility, there is really no need for the individual to own the bus, since they can travel more cheaply with a simple ticket.

On the other hand, you could break down cloud orchestration and relate it to the electricity used in our homes each and every day. Massive power plants and generation centers have a lot of different technologies, buildings and employees to manage at once. However, the customer never has to worry about any of this, since it is already taken care of for them. The entire process is simplified for the customer, who only needs to pay their monthly bill in order to keep electricity in their home.

The History

When you really look at the history of hybrid cloud hosting, you will see that the idea began with using software as a type of service, generally referred to as SaaS. In these cases, the service provider sets up its applications and allows the consumer to pay for them on a pay-per-use basis. Although this initially started with the consumer paying to access an individual application, it has since expanded to give the consumer platform-level services (PaaS) and even control of entire servers and infrastructure (IaaS).

When it comes down to it, you should really look at the definition of cloud computing offered by the National Institute of Standards and Technology. NIST describes cloud computing as a model for on-demand access to a shared pool of computing resources that can be provisioned with minimal management effort or service provider interaction. Broken down, the model has five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. The cloud is then delivered in three ways: as software, as a platform or as infrastructure.

Throughout the years, everyone has benefited from the ability to create and share documents online, but cloud computing has made the process much more efficient and straightforward. Many worry about the overall security of this type of infrastructure, but people are able to communicate and collaborate regardless of their geographical differences, which is definitely convenient.

Of course, this is not the only reason that eNlight Cloud Solutions has become so overwhelmingly popular. The overall cost of the service also attracts customers, and the universal access of the cloud has impressed and enticed many individuals to sign up. The ability to easily manage a massive amount of data is certainly enough to draw anyone who is interested in internet technologies.

The Future of The Cloud

With the ability to edit and share documents online through cloud-based applications, the entire process has changed dramatically; the simple act of storing and sharing documents has reached a whole new level. While there is some criticism due to security concerns, it is still very effective and allows individuals to share a large amount of data at once. These individuals can use cloud apps to share text, documents, video and audio, and with cloud hosting the overall size of the documents, which was a problem in the past, is no longer a major concern. Therefore, you will likely see this technology improve and gain popularity in the next few years.

Still, cloud computing has not taken the world by storm overnight. It has gone through adaptations and enhancements over the past few years to become more reliable for users. The service has certainly become much more available and more visible in the IT world, which makes it difficult for businesses, corporations and even individuals to rule it out of their budgets. It is likely that cloud computing services will continue to thrive and grow as they are updated and enhanced on a regular basis.

Cloud Computing: Facts And Truths That You Should Know

March 9th, 2015

In recent months, I've been poking around various clouds. Along the way, I realized that they were not working the way I expected: virtual machines are not as interchangeable or as cheap as they seem, and moving to the cloud is not as simple as it should be. In other words, anyone who has treated the word "cloud" as a synonym for "perfect" or "painless" will be very disappointed.

You cannot say that there is no truth in what companies claim about the cloud, but there is much exaggeration and plenty of complicated detail that is not immediately obvious. In essence, clouds are not miracle workers; the improvements are incremental, not revolutionary.

To keep our expectations at a more realistic level, here's a list of what we should really expect from the clouds.

1: Uneven Performance of Virtual Machines

The cloud has simplified many of the steps involved in buying a server. The promise? Press a button, choose your operating system and get the root password. Everything else is handled by the cloud, which takes care of all the computational tasks behind the curtain.

But benchmarks taught me that virtual machines behave quite differently from one another. Even if you buy instances with the same amount of RAM running the same version of the operating system, you will find surprisingly different performance, because different chips and hypervisors run underneath.

2: Many Choices

A great promise of the cloud? Rent some and see what it can do. Your boss may want to invest in buying a rack or colocating the architecture in a data center, but spending a few hours understanding the cloud can make that decision much easier.

Renting cloud capacity under a pay-per-use model is an ideal way for people interested in trying out features. But as choice increases, so do the complexity of the analysis and the uncertainty about what is really needed.

3: Eternal Instances

After spending time installing software and adjusting settings, many people give up on shutting down a virtual machine that costs so little per hour, leaving it running in the background waiting for work.

Often, auditing the list of virtual machines costs more in effort than simply leaving them running for another month.

I think the cloud companies will make big money from servers that just sit there, waiting for further instructions.

4: Difficulty Dealing with SaaS Pricing

Software as a service is another temptation in the cloud. You do not need to buy a license and install anything; you send your bits to an API and it does everything for you. But calculating the cost of software as a service involves analyzing several variables.

One way for a company to understand the costs is to start running applications on the cloud computing platform and then calculate the actual costs. But this raises a number of additional issues, such as how to account for the fluctuating prices characteristic of an immature market.

Proving the value of the cloud can be a rather nebulous process, even when comparing a private cloud with the traditional IT infrastructure model.

5: Embedded Solutions

When Google announced Google App Engine, it seemed that the service would make cloud computing as simple as it gets. The problem is that you have to build on the product owner's platform, which means you are tied to it until you can rewrite the software. And who has time to do that?

Not surprisingly, the OpenStack standard is gaining momentum. Everyone is frightened by the possibility of being tied to a provider, no matter how good that provider may be; people want more flexibility.

6: Security Is Still A Mystery

At first glance, it seems that you completely control your machine. You and only you set the root password. If the OS is secure and patches are installed, that’s OK, right?

No cloud makes it clear what actually happens beneath the hypervisor, and cloud providers are far from offering an environment as safe as a locked cage in the middle of your own server room.

7: The Cost Estimate Is Not Easy

Should you buy one faster machine at 7 cents per hour or three slower machines at 2.5 cents per hour? Since each vendor has its own way of charging for bandwidth, storage and other resources, expect to spend hours analyzing the use of various sizes of servers, then put all this data into a spreadsheet to determine the cheapest configuration.
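
Here is a toy version of that spreadsheet exercise, with made-up prices and capacity figures purely for illustration:

    # Toy cost comparison for the example above; all numbers are illustrative.
    HOURS_PER_MONTH = 730

    options = {
        "1 x fast machine":  {"count": 1, "price_per_hr": 0.070, "req_per_hr": 90_000},
        "3 x slow machines": {"count": 3, "price_per_hr": 0.025, "req_per_hr": 35_000},
    }

    for name, o in options.items():
        monthly_cost = o["count"] * o["price_per_hr"] * HOURS_PER_MONTH
        capacity = o["count"] * o["req_per_hr"]
        print(f"{name}: ${monthly_cost:,.2f}/month for {capacity:,} requests/hour")
    # What matters is the cheapest option per unit of work, not per hour.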

8: Moving Data Is Not Easy

Do you like the idea of buying computing power by the hour? Often the purchase is the smallest part of the work: getting your data into the cloud can be a substantial task. If you are loaded with log files or large data sets, you may spend far too much time just moving the data you need.

So some suppliers are making it easy to store data locally and then buy computation time when you need it.

9: The Data Is Not Guaranteed Either

Cloud contracts rarely cover disaster recovery. Some vendors are starting to be clearer about their guarantees, with terms of service that explain a little better what they do and do not cover. Others are responding faster and better to questions about the physical location of data, knowing that the answer may be crucial for regulatory compliance or security. Geographical distribution is critical for disaster recovery.

One of the most important items when contracting a public cloud is the service level agreement (SLA). Users need better answers from cloud providers on the finer points of availability management and service vulnerability before signing contracts. It is important for customers to know not just where their data is stored but also who will have access to it.

10: No One Knows What Laws Apply

It is easy to imagine that the cloud lives in a Shangri-La, away from the pesky laws and rules that complicate the lives of companies. We would all like to believe that cyberspace is a beautiful place, full of harmony and mutual respect, with no need for lawyers. That is at best a half-truth, because no one really knows which laws apply.

No single set of laws applies, because the web extends everywhere. Since services and solutions are delivered anywhere in the world, this feature of cloud computing challenges the current legal model, which is based on local laws. As a result, according to experts in digital law, the legal risks are even greater than those of traditional IT outsourcing contracts.

Joining this borderless world requires more caution in drawing up contracts with service providers. It is important that the contract contains clauses on privacy and data availability, and companies should know the risks of hosting information outside of India. In case of a court order, data confidentiality may be broken, depending on the privacy and data protection law applied by the country where the server is installed.

11: The Extras Add Up

The cloud business seems to be following the same billing model as hotel companies and airlines. They will do anything to keep the cost of the main service as low as possible, because they know that price is the determining factor in the purchase, and then try to compensate for that cheap price with add-ons.

The problem for most customers is that it is increasingly difficult to predict how many extra services will be used. Can you estimate the amount of data that will flow between your machines and the server in the cloud? Some cloud companies charge for it, and if a programmer uses a verbose data structure such as XML, that alone can quadruple bandwidth costs.

12: Responsibility For Backup Still Rests on You

It is tempting to buy the marketing hype and think of the cloud as one giant pool of computing resources: when you need to crunch numbers, you cast your spell across the ocean and the answers come back out of the mists.

If you think the cloud relieves you of the responsibility for backing up your data, you are mistaken. Underneath it all, virtual machines are just as fragile as the machines on your desk.

In practice, machines are just machines. If you build a backup plan for your server today, then you should build one for your software in the cloud as well, because it can fail too.

Tips for Safely Storing Files in the Cloud

October 2nd, 2014


Storing files in the cloud has become an attractive way to keep your documents. There are now numerous services offering this functionality, letting you not only store your files but also access them anywhere. However, as with any other service on the internet, it is important to take certain precautions, so here are some tips for storing files in the cloud safely.

The first tip is to use passwords that are genuinely difficult to guess: long sequences that mix numbers with upper- and lower-case letters. After all, it is the password that protects your account from any hackers who would steal your files. At the same time, avoid sharing folders and passwords with others; it is a personal service, so restrict it to yourself.
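
As a small sketch, Python's standard secrets module can generate this kind of mixed letter-and-digit password:

    # Generate a hard-to-guess password of mixed letters and digits using the
    # standard library's cryptographically secure `secrets` module.
    import secrets
    import string

    def make_password(length=16):
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(make_password())  # e.g. 'q7VdT2kPz9WmXb4L'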

Look for reliable and preferably free services. A good option is to store files on Google Drive, which offers 15 GB of free capacity; other companies also offer free versions, such as Apple iCloud, Dropbox, Microsoft SkyDrive and cloud hosting by Host.co.in. These services are reasonably secure, because they come from established companies that already specialize in online storage.

Back up regularly. After all, online storage is subject to server instability, which can cause problems if you want to access a particular file at that moment. Keeping a spreadsheet of what you have saved is a useful form of control.

Finally, do not forget to sync your files and keep what you save constantly up to date. These days, the cloud is an indispensable tool; make good use of it.

Questionnaire for Cloud Security Requirements

May 6th, 2014


Cloud computing now allows companies to outsource their data processing to commercial providers and has become a popular, rapidly growing market. But the nature of such services makes customers think first and foremost about data security.

Specialists at the Cloud Security Alliance, whose members include companies like eBay, Intuit, DuPont and ING, have drawn up a questionnaire for the certification of hosting providers.

Ideally, of course, you would personally inspect the data center and meet the staff, if only to see firsthand that everything specified in your contract exists in reality. In practice, however, a data center is a tightly controlled facility onto whose premises ordinary customers are, in most cases, not allowed.

So if you are not able to visit the provider's data center, much of what you need to know can be determined by asking probing questions. "This set of questions will facilitate the identification of key issues and the development of best practices and controls, and it should help organizations build a security-oriented certification process for cloud providers," the CSA says.

So:

– Does the provider regularly run penetration tests, as well as internal and external security audits, with results that customers can read?

– Do customers have the opportunity to perform their own vulnerability tests?

– Is the data logically separated between clients and encrypted for each client, so that one client's data is not accidentally handed over along with another's, for example at the request of law enforcement?

– Will the provider be able to recover each client's data in case of loss?

– What measures does the service provider take to protect intellectual property?

– Does the provider keep a register of the virtual and physical servers used by each client, and can it guarantee that data is stored only in certain countries if required by the relevant national legislation on data storage?

– What is the provider's policy for responding to requests for client data from government agencies?

– What policy does the provider use to retain customer data, and can it follow a customer policy that requires data to be removed from the provider's network?

– Does the provider keep an inventory of its assets and a history of its relationships with suppliers?

– Does the provider train its staff to use security controls, both its own and the client's, and document such training?

– Are user access rights monitored and controlled?

– What measures are in place to respond to security incidents, and at what scale? And what are the respective responsibilities of the provider and the client?

Although this is not an exhaustive list of questions, it is enough to get an idea of a provider's level of security and to make the right choice of service provider.