Posts tagged ‘Cloud’

Tackling the resource gap in the transition to hybrid IT

Is hybrid IT inevitable? That's a question we ask customers a lot. From our discussions with CIOs and CEOs there is one overriding response, and that is the need for change. It is very clear that across all sectors, CEOs are challenging their IT departments to innovate – to come up with something different.

Established companies are seeing new threats coming into the market. These new players are lean, hungry and driving innovation through their use of IT solutions. Our view is that more than 70 percent of all CEOs are putting a much bigger ask on their IT departments than they did a few years ago.

There has never been so much strategic focus on the CIO or IT department manager. IT directors need to demonstrate how they can improve uptime, the customer experience or the e-commerce proposition, for instance, in a bid to win new business. For them, it is time to step up to the plate. But in reality there's little or no increase in budget to accommodate these new demands.

We call the difference between what the IT department is being asked to do, and what it is able to do, the resource gap. With the rate of change in the IT landscape increasing, the demands the business places on CIOs growing, and little or no increase in IT budgets from one year to the next, that gap is only going to get wider.

But by changing the way they work, companies can free up additional resources, rediscover their innovative zeal and get closer to meeting their business's demands. Embracing hybrid IT as their infrastructure strategy can extend the range of resources available to companies, and their ability to meet business demands, almost overnight.

Innovate your way to growth

A hybrid IT environment combines a company's existing on-premises resources with public and private cloud offerings from a third-party hosting company. Hybrid IT can provide the best of both worlds: sensitive data can be retained in-house, while the cloud, either private or public, provides the resources and computing power needed to scale up (or down) when necessary.

Traditionally, 80 percent of an IT department's budget is spent just 'keeping the lights on': keeping servers running, powering desktop PCs, backing up data, general maintenance and so on.

But with the CEO now raising the bar, more innovation in the cloud is required. Companies need to keep their operation running but reapportion the budget so they can become more agile, adaptable and versatile, keeping up with today's business needs.

This is where Hybrid IT comes in. Companies can mix and match their needs to any type of solution. That can be their existing in-house capability, or they can share the resources and expertise of a managed services provider. The cloud can be private – servers that are the exclusive preserve of one company – or public, sharing utilities with a number of other companies.

Costs are kept to a minimum because companies pay only for what they use. They own the computing power, but not the hardware. Crucially, it can be switched on or off according to need. So, if there is a peak in demand, a busy time of year or a last-minute rush, they can turn this resource on to match the demand – and off again.

This is the journey to the Hybrid cloud and the birth of the agile, innovative market-focused company.

Meeting the market needs

Moving to hybrid IT is a journey. Choosing the right partner to make that journey with is crucial to the success of the business. In the past, businesses could get away with a rigid customer/supplier relationship with their service provider. Now, there needs to be a much greater emphasis on creating a partnership so that the managed services provider can really get to understand the business. Only by truly getting under the skin of a business can the layers be peeled back to reveal a solution to the underlying problem.

The relationship between customer and managed service provider is now also much more strategic and contextual. The end users are looking for outcomes, not just equipment to plug a gap.

As an example, take an airline company operating in a highly competitive environment. They view themselves not as being in the people-transportation sector, but as a retailer providing a full shopping service (with a trip across the Atlantic thrown in). They want to use cloud services to take their customer on a digital experience, so that the journey starts the minute a customer buys a ticket.

When the passenger arrives at the airport, they need to check in, choose their seats, drop their bags and clear security, all using online booking systems. Once in the lounge, they'll access the Wi-Fi, check their Hotmail, browse Facebook, start sharing pictures and so on. They may also make last-minute adjustments to their journey, like changing their booking or choosing to sit in a different part of the aircraft.

Merely saying "we're going to do this using the cloud" is likely to lead to the project misfiring. As a good partner, the service provider should have experience of building and running both traditional infrastructure environments and new ones based on innovative cloud solutions, so that they can bring 'real world' transformation experience to the partnership. Importantly, they must also have the confidence to demonstrate digital leadership and an understanding of the business and its strategy, to add real value to the customer as it undertakes the journey of digital transformation.

Costs can certainly be rationalised along the way. Ultimately, with a hybrid system you only pay for what you use, so peak periods will cost the same as, or less than, off-peak operating expenses. With added security, compute power, speed, cost efficiencies and 'value-added' services, hybrid IT can provide the agility businesses need.

With these solutions, companies have no need to 'mind the gap' between the resources they need and the budget they have. Hybrid IT can bridge that gap and ensure businesses operate with the agility and speed required to compete in the modern world.

Written by Jonathan Barrett, Vice President of Sales, CenturyLink, EMEA


Google signs five deals for green powering its cloud services

Cloud service giant Google has announced five new deals to buy 781MW of renewable energy from suppliers in the US, Sweden and Chile, according to a report on Bloomberg.

The deals add up to the biggest purchase of renewable energy ever made by a company that is not a utility, according to Michael Terrell, Google's principal of energy and global infrastructure.

Google will buy 200 megawatts of power from Renewable Energy Systems Americas' Bluestem wind project in Oklahoma. Another 200 megawatts will come from the Great Western wind project in the same state, run by Electricite de France. In addition, Google will power its cloud services with 225 megawatts of wind power from independent power producer Invenergy.

Google’s data centres and cloud services in South America could become carbon free when the 80 megawatts of solar power that it has ordered from Acciona Energia’s El Romero farm in Chile comes online.

In Scandinavia the cloud service provider has agreed to buy 76 megawatts of wind power from Eolus Vind’s Jenasen wind project to be built in Vasternorrland County, Sweden.

In July, Google committed to tripling its purchases of renewable energy by 2025. At the time, it had contracts to buy 1.1 GW of sustainably sourced power.

Google's first ever green power deal was in 2010, when it agreed to buy power from a wind farm in Iowa. Last week, it announced plans to buy 61 megawatts from a solar farm in North Carolina.

The state of cloud computing in Europe #CloudWF

Guest Blog with IBM

Author: Simon Porter

Cloud computing is the most touted technology in the global business landscape today. Europe is no exception.

There are two main ways we’re seeing businesses take advantage of the cloud in Europe. First, there are the smaller, innovative, and born-on-the-cloud startup companies that use it to help them disrupt existing industries by getting to market faster and with less upfront capital investment.

The second area where we’re seeing European companies take advantage of cloud is at more established enterprises looking to enter new, international markets. As companies here seek to become more global, they’re looking toward non-European markets—whether by selling into those markets or tapping into suppliers. In these cases, cloud empowers them to enter these new markets by providing the flexibility, speed and scalability needed to be a global player.

Cloud also enables businesses to market and sell to customers in new and more efficient ways. With the proliferation of smartphones and social media, business success relies on turning this technology into new sales channels. This is often referred to as systems of engagement, and with unpredictable volumes, it’s ideally suited to cloud.

The economic climate in Europe is improving, but it remains very competitive. It is critical for businesses to optimize their supply chains and lower their sales and support costs. Applying sophisticated analytics is one effective way of doing this. In the past, this was prohibitively expensive. But cloud enables analytics-as-a-service, removing the need and cost for a large up-front investment in an IT system that may be used only a few hours per month.

Challenges in cloud adoption persist

According to a Eurostat study released this past year, only 19 percent of European businesses used cloud computing services in 2014. Compare that to a recent RightScale study that reports 82 percent of U.S. enterprises as having a hybrid cloud strategy (up from 74 percent in 2014), and it would appear that Europe is lagging. However, that’s only part of the story.

You can expect the European cloud adoption numbers to rise sharply this year and even more in years to come. But as with any emerging technology, there remain barriers to adoption.

Chief among those barriers is security.

According to a recent Cloud Industry Forum poll, 70 percent of U.K. executives cited data security among their biggest concerns in moving to cloud. That marks an 11 percent year-over-year increase.

What IT departments in Europe are seeing is quite different from what the rest of the world is experiencing, and that stems from data location and security. Many of the questions around security and data location are driven by market perceptions that aren't always accurate. Security in a cloud-based solution will often be much stronger than that of an on-premises, in-house IT solution.

To remain competitive, European businesses must work through security challenges—and I fully believe that they will. Ultimately it's not technical or legal challenges preventing cloud adoption in Europe—it's business leaders understanding the transformational benefits cloud can bring to their business and then, typically for midsize businesses, taking advantage of them through a local, trusted cloud service provider.

The good news is that IBM is continuing to open data centers in Europe. We now have centers in the U.K., the Netherlands, Germany, France and, most recently announced, Italy. But even with this span of locations, customers want to keep their data in country.

European SMBs typically lack the resources and IT skills to take advantage of this new kind of capability. They need to turn to a local service provider that can essentially be their IT department. At IBM, we're continuing to expand our partnerships with local cloud service providers, enabling local data and secure environments through IBM's Managed Service Providers.

A move to hybrid 

In the business world, we recognize clients have already made investments in core IT systems. We find that European customers want to protect and enhance them with new, innovative capabilities that enable them to make better business decisions faster with advanced analytics. Companies are also able to reach new customers and markets with multi-channel marketing and sales capabilities, both largely based on cloud-enabled digital and social technologies.

For example, a client may have an existing enterprise resource planning (ERP) system that they have invested a lot of time and money in over the years. They still need to see a return on that investment. It is impractical to completely replace it with a new solution, but perhaps enhancing it with social analytics or social engagement could help them in their customer service and marketing.

Combining mission-critical, on-premises systems with new cloud-based systems of engagement is an example of a common hybrid cloud solution. This is how many businesses in Europe protect their existing investments in IT while taking advantage of new delivery models.

An eye toward the future 

The world is only getting flatter. There are multiple new entrants in many industries, and existing businesses will have to differentiate their own offerings to remain competitive. Who would have thought the taxi industry could be disrupted in the way that Uber has done? Cloud can be the key enabler for businesses to innovate around new products and channels faster and in a lower risk manner.

…………………………………………………………………………………………………………………

IBM will be at the Cloud World Forum on Stand D150, taking place on the 24th – 25th June 2015.

Tony Morgan, Client Chief Innovation Officer GTS Europe at IBM will be speaking on Day 1 at 11:05am in Theatre C: DevOps & Containerisation on ‘Speaker out of the Shadows: Managing Innovation with Cloud.’ 

REGISTER YOUR FREE EXHIBITION PASS HERE.


Monetizing the Internet of Things: Will All These Connected Devices Pay Off? #CloudWF

Guest Blog with Avangate

Author: Michael Ni, CMO/SVP, Marketing and Products, Avangate

Sometimes it seems like just yesterday that everything was getting "cloud-ified," from photo sharing to customer relationship management, but the move to the cloud is actually a few years old now. And now that we all have our documents stored in the cloud (and our heads out of the clouds), everybody's looking for a clear path toward success in the latest trend: the Internet of Things.

Just like the cloud before it, the Internet of Things is now top of mind for software professionals. Its promise has been nascent for a long time: although Dick Tracy's 2-Way Wrist Radio first appeared in 1946, connected devices like the FitBit and Apple Watch are only just getting into the hands – or onto the wrists – of everyday folks.

With broader adoption of connected devices come both opportunities and challenges. Even the companies that are able to sell IoT hardware successfully find themselves needing to develop and monetize complementary services to help users get the most out of their devices. And software-focused companies that don't have devices need a new way to get in on the IoT and the billions it's expected to bring in. That way is through data.

While the IoT started out with connected sensors, it soon became clear that simply sensing data wouldn't be enough. Just as storing content in the cloud required building interfaces that made it easy for users to access that content, IoT sensors now need to produce data that's easy for people to find, understand and use. And because IoT data is so valuable (not to mention expensive to produce), there needs to be a way for companies to monetize it. So if wave 1 of the IoT trend involved simply creating the sensors, wave 2 involves monetizing them and the data they create.

As a result, more and more software vendors have started staking a claim in the IoT. At Avangate, we’ve been helping companies like Bitdefender monetize their IoT offerings. Bitdefender offers a “security of things” solution called BOX, a small device that scans for IoT threats on a local WiFi connection. By monitoring the way your smart devices stay connected, BOX finds and protects against possible threats to your connected information. By helping Bitdefender easily monetize its entry into the IoT, including not only the device itself but also associated data, we’re showing the importance and ease of monetizing IoT devices and the data they produce.

And that’s the key: commerce absolutely has to run in the background of every IoT play. No matter how affordable a device is up front, or if streams of data are free for now, devices and data both cost a significant amount to create, maintain, and provide in ways that really work for consumer and business customers. As a result, to truly succeed in the IoT, software companies need to be able to package and sell data derived from connected devices in ways that will benefit other entities as well.

In the end, it's clear that the desperate need for IoT data monetization is actually a massive opportunity. Companies are still scrambling to create devices and support data, and not enough of them are thinking about how to monetize it. Those who find themselves able to successfully package and sell information in the IoT era may find themselves enjoying Salesforce-style status and riding high on the wave of the future as the IoT truly takes off.

…………………………………………………………………………………………………………………

Avangate will be exhibiting at the Cloud World Forum on Stand D48, taking place on the 24th – 25th June 2015.

REGISTER YOUR FREE EXHIBITION PASS HERE.


The State of the Cloud: Already Everywhere, and Lots of Room to Grow #CloudWF

Guest Blog with Equinix

Enterprise cloud usage is nearly universal, but there’s still significant room for cloud growth.

That sums up one of the key findings of RightScale’s 2015 “State of the Cloud Report.” The survey of 930 technical professionals indicates the enterprise has moved past its initial cloud skittishness and is getting quite comfortable investigating what the cloud can really do.


The survey showed 93% of respondents have adopted cloud, roughly the same as the prior year. Hybrid cloud is also the preferred strategy of 58% of respondents, compared to 30% who are public cloud-only and 5% who are private cloud-only.

One key difference from 2014 is that 38% of cloud users are now classified by RightScale as “cloud explorers,” compared to 25% just a year ago when “cloud beginners” was the biggest category. “Cloud explorers” already have multiple projects and applications in the cloud and are looking to expand and improve their cloud use.

The survey also found plenty of room for cloud expansion, with 68% of enterprise respondents reporting that less than a fifth of their applications are currently running in the cloud. Most respondents (55%) also report that another fifth of their applications are already built on cloud-friendly architectures.

Here’s more of what we found most interesting in the State of the Cloud report:

Going public, staying private

Public cloud is being used by 88% of organizations, while 63% are using private cloud. But private cloud still carries the heavier workload: 13% of enterprises run more than 1,000 virtual machines (VMs) in the public cloud, while 22% run more than 1,000 VMs in private cloud. The survey also indicated enterprises expect to grow public cloud workloads more quickly.

Central IT gets more cloud comfortable

The survey authors note that in 2014, business units envisioned a more limited role for central IT in cloud purchasing decisions, likely because they felt central IT was generally too cautious. But central IT’s view of the cloud may be evolving. The survey indicated central IT concerns about cloud security have dropped, with 41% now reporting it as a significant challenge, compared to 47% a year ago. In addition, 28% of central IT respondents report public cloud as the top priority in 2015, compared to 18% in 2014.

More of the same

Respondents cited the same cloud benefits and challenges in 2015, but in many cases mentioned them more frequently. For instance, “greater stability,” “faster access to infrastructure,” and “high availability” were again the top three benefits, but each was cited by a greater percentage of respondents:

[Chart: top cloud benefits cited by respondents, 2015 vs. 2014]

A similar pattern was seen when respondents were asked about cloud challenges. “Security,” “lack of resources/expertise” and “compliance” again appeared as major concerns, but were referred to by a greater percentage of respondents, compared to last year:

[Chart: top cloud challenges cited by respondents, 2015 vs. 2014]

Learn more about how Equinix can help your enterprise realize cloud benefits and meet cloud challenges.

…………………………………………………………………………………………………………………

Equinix will be at the Cloud World Forum on Stand D170, taking place on the 24th – 25th June 2015. Don’t miss their session on ‘An Expedition through the Cloud’ in the Employee Experience Theatre at 10.35am on Day 2.

REGISTER YOUR FREE EXHIBITION PASS HERE.


Risks of SaaS supplier failure & how to effectively mitigate them #CloudWF

Guest Blog with Kemp Little Consulting & NCC Group

The cloud is here to stay and according to a recent survey, organisations are going to be investing more in cloud services to support their core business operations.

But have companies properly considered the risks of SaaS supplier failure if the software is supporting their core processes?

The Kemp Little Consulting (KLC) team has been working with NCC Group to identify some of the risks of SaaS supplier failure and to identify the main problems that end user organisations would need to solve to effectively mitigate these risks.

In the on-premises world, the main way of mitigating software supplier failure is Software Escrow, which was designed as a means of gaining access to an application's source code in the event of supplier failure.

If a supplier goes bust, there is no short-term problem: the application and the business processes it supports continue to work, and the corporate data remains within the end user's control.

However, the end user company does have a long-term problem: it will not be able to maintain the application. This issue is effectively solved by Software Escrow and related services such as verification.

In the cloud arena, however, the situation is different. If the supplier fails, the SaaS service may be switched off almost straightaway, because the software supplier no longer has the cash to pay for its hosting or to pay its key staff.

For the end user, this means they no longer have access to the application, the business process it supports can no longer operate, and the organisation loses access to its data.

The business impact of this loss will vary depending upon the type of application affected:

  • Business Process Critical (e.g. finance, HR, sales and supply chain)
  • Data Critical (e.g. analytics or document collaboration)
  • Utility (e.g. web filtering, MDM, presentational or derived data)

In our research, we found that neither suppliers of cloud solutions nor end user organisations had properly thought through the implications of these new risks, or the services they would require to mitigate the risk of supplier failure.

The primary concerns of end user customers were around their business-critical data. They were concerned by lack of access to data; loss of data; the risk of compliance breach by losing control of their data; and how they might rebuild their data into usable form if they could get it back. There was also concern about access to funding to keep the SaaS vendor's infrastructure running in order to buy time to make alternative arrangements.

They were much less concerned about access to the application or getting access to the source code.

This is understandable as their primary concern would be getting their data back and porting it to another solution to get the business back up and running.

In a separate part of our study, the Kemp Little commercial team looked at the state of the market in the provisions generally found in SaaS contracts to deal with supplier failure. The team found that even if appropriate clauses were negotiated into the contract at the outset, there may be real difficulties in practically enforcing those terms in an insolvency situation.

End user organisations were more concerned than SaaS suppliers about their capability to deal with all of these problems and were amenable to procuring services from third parties to help them mitigate the risks and solve the problems they could not solve purely by contractual means.

End users were also concerned that many SaaS solutions are initially procured by “Shadow-IT” departments as part of rapid business improvement projects and deployed as pilots where the business risks of failure are low.

However, these solutions can often end up being rolled out globally quite quickly and key parts of the business become dependent upon them by stealth.

It is therefore important for companies to develop a deep understanding of their SaaS estate, regularly review the risks of supplier failure and put appropriate mitigation measures in place.
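One lightweight way to build that understanding is a simple estate register that records each SaaS dependency, its criticality category from the list above, and when its supplier risk was last reviewed. Here is a minimal sketch; all names, entries and thresholds are hypothetical illustrations, not part of any vendor's service:

```python
# Sketch of a minimal SaaS estate register for supplier-failure reviews.
# All entries and thresholds are hypothetical illustrations.
from dataclasses import dataclass
from datetime import date

@dataclass
class SaasEntry:
    name: str
    category: str             # "process-critical" | "data-critical" | "utility"
    data_export_tested: bool  # can we actually get our data back?
    last_risk_review: date

estate = [
    SaasEntry("finance-suite", "process-critical", True,  date(2015, 1, 15)),
    SaasEntry("doc-collab",    "data-critical",    False, date(2014, 6, 2)),
    SaasEntry("web-filter",    "utility",          True,  date(2015, 3, 9)),
]

# Flag anything overdue for review or lacking a tested data exit route.
today = date(2015, 6, 24)
for e in estate:
    overdue = (today - e.last_risk_review).days > 180
    if overdue or not e.data_export_tested:
        print(f"review needed: {e.name} ({e.category})")
```

Even a register this simple surfaces the two questions that mattered most to the end users in our research: can we get our data back, and when did we last check?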

KLC recently worked with global information assurance specialist NCC Group to help it enhance the service model for its SaaS Assured service.

This article was originally posted on the Kemp Little Blog and can be found here.

…………………………………………………………………………………………………………………

John Parkinson, Global SaaS Business Leader at NCC Group will be speaking at the Cloud World Forum on 24th June 2015 at 12.45pm.

His talk will take place in Theatre D: Cloud, Data Governance & Cyber Security on ‘Outsourcing to Software as a Service? Don’t Overlook the Critical Commercial Security Risks.’

REGISTER YOUR FREE EXHIBITION PASS HERE.


Scaling Your Application Efficiently – Horizontal or Vertical? #CloudWF

Guest Blog with AppDynamics

Author: Eric Smith at AppDynamics

Anyone deploying an application in production probably has some experience with scaling to meet increased demand. A generation ago, virtualization made scaling your application as simple as increasing your instance count or size. However, now with the advent of cloud, you can scale to theoretical infinity. Maybe you’ve even set up some auto-scaling based on underlying system metrics such as CPU, heap size, thread count, etc. Now the question changes from “Can I scale my environment to meet demand?” (if you add enough computing resources you probably can), to “How can I efficiently scale my infrastructure to accommodate my traffic, and if I’m lucky maybe even scale down when needed?” This is a problem I run into almost every day dealing with DevOps organizations.
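To make that concrete, here is a minimal sketch of the kind of metric-driven auto-scaling rule described above, keyed to a single system metric. The function name and thresholds are hypothetical, not a real cloud provider's auto-scaling API:

```python
# A minimal sketch of threshold-based auto-scaling on one system metric (CPU).
# Names and thresholds are hypothetical placeholders.

def desired_instance_count(current_count: int,
                           avg_cpu_percent: float,
                           scale_up_at: float = 75.0,
                           scale_down_at: float = 30.0,
                           min_count: int = 2,
                           max_count: int = 20) -> int:
    """Return a new instance count given the fleet's average CPU usage."""
    if avg_cpu_percent > scale_up_at:
        return min(current_count + 1, max_count)   # add capacity
    if avg_cpu_percent < scale_down_at:
        return max(current_count - 1, min_count)   # shed capacity
    return current_count                           # hold steady

print(desired_instance_count(current_count=4, avg_cpu_percent=82.0))  # -> 5
```

A rule like this is easy to write, but as the rest of this post argues, it quietly assumes CPU is the bound that actually matters.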

If your application environment looks like this (if so, I’d love to be you):

[Diagram: a simple, few-tier application environment]

You can probably work your way through to the solution, eventually. Run a bunch of load tests, find a sweet spot of machine size based on the performance under the test parameters, and bake it into your production infrastructure. Add more instances to each tier when your CPU usage gets high. Easy. What if your application looks like this?

[Diagram: a complex, many-tiered application environment]

What about when your application code changes? What if adding more instances no longer fixes your problem? (Those do cost money, and the bill adds up quickly…)

The complexity of the problem is that CPU bounding is only one aspect — most applications encounter a variety of bounds as they scale and they vary at each tier. CPU, memory, heap size, thread count, database connection pool, queue depth, etc. come into play from an infrastructure perspective. Ultimately, the problem breaks down to response time: how do I make each transaction as performant as possible while minimizing overhead?

The holy grail here is the ability to determine dynamically how to size my app server instances (right size), how many to create at each level (right scale) and when to create them (right time). Other factors come into play as well such as supporting infrastructure, code issues, and the database — but let’s leave that for another day.

Let me offer a simple example. This came into play recently when working with a customer analyzing their production environment. Looking at the application tier under light/normal load, it was difficult to determine which factors to scale on; we ended up with this:

[Chart: response time vs. load under light/normal load]

Response time actually decreases toward the beginning of the curve (possibly a caching effect?). But if you look at the application under heavier load, things get more interesting. All of a sudden you can start to see how performance is affected as demand on the application increases:

[Chart: response time vs. load under heavier load]

Looking at a period of heavy load in this specific application, hardware resources are actually still somewhat lightly utilized, even though response time starts to spike:

[Charts: hardware resource utilization remains low during the period of heavy load]

In this application, it appears that response time is actually more closely correlated with garbage collection than any specific hardware bound.
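One way to arrive at a finding like this is to correlate response time against each candidate metric over the same sampling intervals. Here is a minimal sketch of that analysis; the sample series are made up purely to illustrate the pattern, not taken from the customer's system:

```python
# Sketch: which metric tracks response time most closely?
# The sample series below are illustrative, not customer data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

response_ms = [120, 135, 180, 420, 390, 510]   # avg response per interval
cpu_percent = [35, 44, 38, 41, 45, 39]         # fluctuating, lightly utilized
gc_pause_ms = [5, 8, 20, 160, 140, 210]        # garbage collection time

print(f"CPU vs response: {pearson(cpu_percent, response_ms):+.2f}")  # weak
print(f"GC  vs response: {pearson(gc_pause_ms, response_ms):+.2f}")  # strong
# GC tracks response time far more closely here, pointing at memory/GC
# tuning rather than more hardware as the lever on response time.
```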

While there is clearly some future effort here to look at garbage collection optimization, in this case optimizing best fit actually comes down to determining desired response time, maximum load for a given instance size maintaining that response time, and cost for that instance size. In a cloud scenario, instance cost is typically fairly easy to determine. In this case, you can normalize this by calculating volume/(instance cost) at various instance sizes to determine a better sweet spot for vertical scale.
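As a worked illustration of that volume/(instance cost) normalization – the hourly prices and measured capacities below are hypothetical, not real benchmarks – the vertical-scaling sweet spot is simply the size that sustains the most volume per dollar while holding the desired response time:

```python
# Sketch: normalize sustainable volume by cost to pick an instance size.
# Hourly prices and capacities are hypothetical placeholders.
instance_sizes = {
    # size: (hourly_cost_usd, max_req_per_sec_at_target_response_time)
    "small":  (0.10,  300),
    "medium": (0.20,  700),
    "large":  (0.40, 1200),
}

for size, (cost, rps) in instance_sizes.items():
    print(f"{size:>6}: {rps / cost:,.0f} req/sec per $/hour")

best = max(instance_sizes,
           key=lambda s: instance_sizes[s][1] / instance_sizes[s][0])
print("sweet spot:", best)   # -> medium (3,500 vs 3,000 for small and large)
```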

Horizontal scale will vary somewhat by environment, but this tends to be more linear — i.e. each additional instance adds incremental bandwidth to the application.

There's still quite a bit more room for analysis of this problem – resource cost for individual transactions, optimal response time vs. cost to achieve that response time, synchronous vs. asynchronous design trade-offs, etc. – but these will vary based on the specific environment.

Using some of these performance indicators from the application itself (garbage collection, response time, connection pools, etc.) rather than infrastructure metrics, we were able to quickly and intelligently right-size the cloud instances under the current application release, as well as identify several areas for code optimization to improve overall efficiency. While the code optimization is a forward-looking project, the scaling question had to be answered ahead of a near-term event. Approaching it this way allowed us to meet that deadline while remaining flexible enough to accommodate forthcoming optimizations or application changes.

Interested to see how you can scale your environment? Check out a FREE trial now!

…………………………………………………………………………………………………………………

John Rakowski, Chief Technology Strategist at AppDynamics will be speaking at the Cloud World Forum on 25th June 2015 at 2.25 pm.

His talk will take place in Theatre C: SDE & Hyperscale Computing on ‘Three Rules for the Digital Enterprise’. 

REGISTER YOUR FREE EXHIBITION PASS HERE.

