Posts tagged ‘cloud service’

Google signs five deals for green powering its cloud services

Cloud service giant Google has announced five new deals to buy 781MW of renewable energy from suppliers in the US, Sweden and Chile, according to a report on Bloomberg.

The deals add up to the biggest-ever purchase of renewable energy by a company that is not a utility, according to Michael Terrell, Google’s principal of energy and global infrastructure.

Google will buy 200 megawatts of power from Renewable Energy Systems Americas’ Bluestem wind project in Oklahoma. Another 200 megawatts will come from the Great Western wind project in the same state, run by Electricite de France. In addition, Google will power its cloud services with 225 megawatts of wind power from independent power producer Invenergy.

Google’s data centres and cloud services in South America could become carbon free when the 80 megawatts of solar power that it has ordered from Acciona Energia’s El Romero farm in Chile comes online.

In Scandinavia the cloud service provider has agreed to buy 76 megawatts of wind power from Eolus Vind’s Jenasen wind project to be built in Vasternorrland County, Sweden.

In July, Google committed to tripling its purchases of renewable energy by 2025. At the time, it had contracts to buy 1.1 GW of sustainably sourced power.

Google’s first green power deal came in 2010, when it agreed to buy power from a wind farm in Iowa. Last week, it announced plans to purchase 61 megawatts from a solar farm in North Carolina.


Risks of SaaS supplier failure & how to effectively mitigate them #CloudWF

Guest Blog with Kemp Little Consulting & NCC Group

The cloud is here to stay and according to a recent survey, organisations are going to be investing more in cloud services to support their core business operations.

But have companies properly considered the risks of SaaS supplier failure if the software is supporting their core processes?

The Kemp Little Consulting (KLC) team has been working with NCC Group to identify some of the risks of SaaS supplier failure and to identify the main problems that end user organisations would need to solve to effectively mitigate these risks.

In the on-premises world, the main way of mitigating software supplier failure is Software Escrow. This was designed as a means of gaining access to an application’s source code in the event of supplier failure.

If a supplier goes bust, there is no short-term problem: the application and the business processes it supports continue to work, and the corporate data remains within the control of the end user.

However, the end user company does have a problem, as it will not be able to maintain the application in the long term; this issue is effectively solved by Software Escrow and related services such as verification.

In the cloud arena, however, the situation is different. If the supplier fails, there is potentially an immediate problem: the SaaS service may be switched off almost straightaway because the software supplier no longer has the cash to pay for its hosting or to pay its key staff.

For the end user, this means that they no longer have access to the application; the business process supported by the application can no longer operate and the end user organisation loses access to their data.

The business impact of this loss will vary depending upon the type of application affected:

  • Business Process Critical (e.g. finance, HR, sales and supply chain)
  • Data Critical (e.g. analytics or document collaboration)
  • Utility (e.g. web filtering, MDM, presentational or derived data)

In our research, we found that neither suppliers of cloud solutions nor end user organisations had properly thought through the implications of these new risks, or the services they would require to mitigate the risk of supplier failure.

The primary concerns that end user customers had were around their business-critical data: lack of access to data; loss of data; the risk of a compliance breach through losing control of their data; and how they might rebuild their data into usable form if they could get it back. There was also concern about access to funding to keep the infrastructure running at the SaaS vendor in order to buy time to make alternative arrangements.

They were much less concerned about access to the application or getting access to the source code.

This is understandable as their primary concern would be getting their data back and porting it to another solution to get the business back up and running.

In a separate part of our study, the Kemp Little commercial team looked at the state of the market in terms of the provisions generally found in SaaS contracts to deal with supplier failure. The team found that even if appropriate clauses were negotiated into the contract at the outset, there may be real difficulties in practically enforcing those terms in an insolvency situation.

End user organisations were more concerned than SaaS suppliers about their capability to deal with all of these problems and were amenable to procuring services from third parties to help them mitigate the risks and solve the problems they could not solve purely by contractual means.

End users were also concerned that many SaaS solutions are initially procured as “shadow IT”, outside the formal IT function, as part of rapid business improvement projects and deployed as pilots where the business risks of failure are low.

However, these solutions can often end up being rolled out globally quite quickly and key parts of the business become dependent upon them by stealth.

It is therefore important for companies to develop a deep understanding of their SaaS estate, regularly review the risks of supplier failure and put in place appropriate risk mitigation measures.
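
One lightweight way to build and maintain that understanding is a simple register of the SaaS estate, tagged with the criticality categories listed above and the mitigation in place for each service. The sketch below (Python) is purely illustrative; the field names, example entries and review criteria are assumptions about what such a register might contain, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class SaaSService:
    name: str
    criticality: str      # "business process critical", "data critical" or "utility"
    data_exported: bool   # do we hold a recent, restorable copy of our data?
    exit_plan: str        # e.g. "escrow/continuity service", "manual re-entry", "none"
    last_reviewed: str    # date of the last supplier-failure risk review

# Illustrative entries only, not real suppliers or assessments.
estate = [
    SaaSService("Finance platform", "business process critical", True, "continuity service", "2015-05-01"),
    SaaSService("Document collaboration", "data critical", True, "weekly data export", "2015-04-12"),
    SaaSService("Web filtering", "utility", False, "none", "2015-01-30"),
]

# Flag services with no exit plan or no restorable copy of the data.
needs_attention = [s.name for s in estate if s.exit_plan == "none" or not s.data_exported]
print("Services needing attention:", needs_attention)
```

Even something this small makes the periodic review concrete: any service with no exit plan, or no restorable copy of its data, gets flagged for attention.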

KLC recently worked with global information assurance specialist NCC Group to help it enhance the service model for its SaaS Assured service.

This article was originally posted on the Kemp Little Blog and can be found here.

…………………………………………………………………………………………………………………

John Parkinson, Global SaaS Business Leader at NCC Group will be speaking at the Cloud World Forum on 24th June 2015 at 12.45pm.

His talk will take place in Theatre D: Cloud, Data Governance & Cyber Security on ‘Outsourcing to Software as a Service? Don’t Overlook the Critical Commercial Security Risks.’

REGISTER YOUR FREE EXHIBITION PASS HERE.


Scaling Your Application Efficiently – Horizontal or Vertical? #CloudWF

Guest Blog with AppDynamics

Author: Eric Smith at AppDynamics

Anyone deploying an application in production probably has some experience with scaling to meet increased demand. A generation ago, virtualization made scaling your application as simple as increasing your instance count or size. However, now with the advent of cloud, you can scale to theoretical infinity. Maybe you’ve even set up some auto-scaling based on underlying system metrics such as CPU, heap size, thread count, etc. Now the question changes from “Can I scale my environment to meet demand?” (if you add enough computing resources you probably can), to “How can I efficiently scale my infrastructure to accommodate my traffic, and if I’m lucky maybe even scale down when needed?” This is a problem I run into almost every day dealing with DevOps organizations.
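
For reference, the kind of metric-threshold auto-scaling mentioned above might look like the minimal sketch below (Python). The cpu_utilization source and the thresholds are hypothetical placeholders rather than any particular platform’s API; the rest of this post argues that rules like this, keyed purely to infrastructure metrics, are often not enough on their own.

```python
# Naive auto-scaling keyed to an infrastructure metric (average CPU across the tier).
# cpu_utilization() is a hypothetical stand-in for a real monitoring API, and the
# thresholds are illustrative only.

SCALE_OUT_CPU = 0.70   # add an instance above 70% average CPU
SCALE_IN_CPU = 0.30    # remove an instance below 30% average CPU

def cpu_based_decision(current_instances, cpu_utilization):
    cpu = cpu_utilization()  # expected to return average utilization, 0.0 to 1.0
    if cpu > SCALE_OUT_CPU:
        return current_instances + 1
    if cpu < SCALE_IN_CPU and current_instances > 1:
        return current_instances - 1
    return current_instances

# Example: with 60% average CPU and 4 instances, the count stays at 4.
print(cpu_based_decision(4, lambda: 0.60))
```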

If your application environment looks like this (if so, I’d love to be you):

[Screenshot: a simple application environment]

You can probably work your way through to the solution, eventually. Run a bunch of load tests, find a sweet spot of machine size based on the performance under the test parameters, and bake it into your production infrastructure. Add more instances to each tier when your CPU usage gets high. Easy. What if your application looks like this?

[Screenshot: a more complex application environment]

What about when your application code changes? What if adding more instances no longer fixes your problem? (Those do cost money, and the bill adds up quickly…)

The complexity of the problem is that CPU bounding is only one aspect — most applications encounter a variety of bounds as they scale and they vary at each tier. CPU, memory, heap size, thread count, database connection pool, queue depth, etc. come into play from an infrastructure perspective. Ultimately, the problem breaks down to response time: how do I make each transaction as performant as possible while minimizing overhead?

The holy grail here is the ability to determine dynamically how to size my app server instances (right size), how many to create at each level (right scale) and when to create them (right time). Other factors come into play as well such as supporting infrastructure, code issues, and the database — but let’s leave that for another day.

Let me offer a simple example. This came into play recently when working with a customer to analyze their production environment. Looking at the application tier under light/normal load, it was difficult to determine which factors to scale on; we ended up with this:

[Screenshot: load vs. response time under light/normal load]

Response time actually decreases toward the beginning of the curve (possibly a caching effect?). But if you look at the application under heavier load, things get more interesting. All of a sudden you can start to see how performance is affected as demand on the application increases:

[Screenshot: load vs. response time under heavier load]

Looking at a period of heavy load in this specific application, hardware resources are actually still somewhat lightly utilized, even though response time starts to spike:

[Screenshots: hardware resource utilization during the period of heavy load]

In this application, it appears that response time is actually more closely correlated with garbage collection than any specific hardware bound.

While there is clearly some future effort here to look at garbage collection optimization, in this case finding the best fit comes down to determining the desired response time, the maximum load a given instance size can sustain while maintaining that response time, and the cost of that instance size. In a cloud scenario, instance cost is typically fairly easy to determine, so you can normalize by calculating volume/(instance cost) at various instance sizes to determine a better sweet spot for vertical scale.
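
As a rough illustration of that normalization, the sketch below (Python) computes requests served per dollar for a few candidate instance sizes at a fixed target response time. The instance names, prices and throughput figures are made up for the example, not real benchmark data.

```python
# Hypothetical load-test results: the maximum sustained requests/sec each instance
# size handled while staying under the target response time, and its hourly cost.
candidates = {
    "small":  {"max_rps": 150, "cost_per_hour": 0.10},
    "medium": {"max_rps": 340, "cost_per_hour": 0.20},
    "large":  {"max_rps": 600, "cost_per_hour": 0.40},
}

def efficiency(stats):
    """Requests per second served per dollar per hour (higher is better)."""
    return stats["max_rps"] / stats["cost_per_hour"]

for name, stats in sorted(candidates.items(), key=lambda kv: efficiency(kv[1]), reverse=True):
    print(f"{name:>6}: {efficiency(stats):7.1f} req/s per $/hour")

best = max(candidates, key=lambda n: efficiency(candidates[n]))
print("Sweet spot under these assumptions:", best)
```

The instance size with the highest volume per unit of cost, while still meeting the response-time target, is the vertical-scaling sweet spot for that workload.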

Horizontal scale will vary somewhat by environment, but this tends to be more linear — i.e. each additional instance adds incremental bandwidth to the application.

There’s still quite a bit more room for analysis of this problem, such as the resource cost of individual transactions, optimal response time versus the cost to achieve it, and synchronous versus asynchronous design trade-offs, but these will vary based on the specific environment.

Using performance indicators from the application itself (garbage collection, response time, connection pools, etc.) rather than infrastructure metrics, we were able to quickly and intelligently right-size the cloud instances under the current application release, as well as identify several areas for code optimization to improve overall efficiency. While the code optimization is a forward-looking project, the scaling question was driven by an impending near-term event that needed to be addressed. Answering it this way allowed us to meet that near-term deadline while remaining flexible enough to accommodate forthcoming optimizations or application changes.
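
In contrast to the CPU-only rule sketched earlier, a scaling decision keyed to application-level indicators might look roughly like this (Python). The metric names, thresholds and helper functions are assumptions standing in for whatever monitoring and orchestration APIs are actually in place; this is a sketch of the idea, not the method used in the engagement described above.

```python
# Sketch of a scale-out/scale-in decision driven by application metrics
# (p95 response time and share of time spent in garbage collection) rather
# than CPU. get_metrics() is a hypothetical stand-in for a monitoring API;
# the SLO and thresholds are illustrative only.

RESPONSE_TIME_SLO_MS = 250   # target p95 response time
GC_TIME_BUDGET = 0.05        # max fraction of wall-clock time spent in GC
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def app_metric_decision(current_instances, get_metrics):
    m = get_metrics()  # e.g. {"p95_ms": 310, "gc_fraction": 0.08}
    if m["p95_ms"] > RESPONSE_TIME_SLO_MS or m["gc_fraction"] > GC_TIME_BUDGET:
        return min(current_instances + 1, MAX_INSTANCES)   # scale out
    if m["p95_ms"] < 0.5 * RESPONSE_TIME_SLO_MS and current_instances > MIN_INSTANCES:
        return current_instances - 1                        # scale in cautiously
    return current_instances                                # hold steady

# Example: slow responses and heavy GC push the count from 4 to 5.
print(app_metric_decision(4, lambda: {"p95_ms": 310, "gc_fraction": 0.08}))
```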

Interested to see how you can scale your environment? Check out a FREE trial now!

…………………………………………………………………………………………………………………

John Rakowski, Chief Technology Strategist at AppDynamics will be speaking at the Cloud World Forum on 25th June 2015 at 2.25 pm.

His talk will take place in Theatre C: SDE & Hyperscale Computing on ‘Three Rules for the Digital Enterprise’. 

REGISTER YOUR FREE EXHIBITION PASS HERE.


Telstra to offer SoftLayer cloud access to Australian customers #telcocloud

Source: Business Cloud News


Telstra and IBM have announced a partnership that will see the Australian telco offer access to SoftLayer cloud infrastructure to customers in Australia.

Telstra said that with the recent opening of IBM cloud datacentres in Melbourne and Sydney, the company will be able to expand its presence in the local cloud market by offering Australian businesses more choice in locally available cloud infrastructure services.

As part of the deal, the telco’s customers will have access to the full range of SoftLayer infrastructure services, including bare metal servers, virtual servers, storage, security services and networking.

Erez Yarkoni, who serves as both chief information officer and executive director of cloud at Telstra, said: “Telstra customers will be able to access IBM’s hourly and monthly compute services on the SoftLayer platform, a network of virtual data centres and global points-of-presence (PoPs), all of which are increasingly important as enterprises look to run their applications on the cloud.”

“Telstra customers can connect to IBM’s services via the internet or with a simple extension of their private network. By adding the Telstra Cloud Direct Connect offering, they can also access IP VPN connectivity, giving them a smooth experience between our Next IP network and their choice of global cloud platforms,” Yarkoni said.

Mark Brewer, general manager, IBM Global Technology Services Australia and New Zealand, said: “Australian businesses have quickly realised the benefits of moving to a flexible cloud model to accommodate the rapidly changing needs of business today. IBM Cloud provides Telstra customers with unmatched choice and freedom of where to run their workloads, with proven levels of security and high performance.”

Telstra already partners with Cisco on cloud infrastructure and is a flagship member of the networking giant’s Intercloud programme, but the company hailed its partnership with IBM as a key milestone in its cloud strategy, one that may help bolster its appeal to business customers in the region.

_______________________________________________________________________

Hear more about Telco Cloud at the Telco Cloud Forum, taking place on 27th – 29th April at Radisson Blu Portman in London!

REGISTER YOUR FREE PASS HERE.


Exclusive Interview with Liam Quinn, IT Director of Richmond Events

Session: Cloud as a Utility: Working Seamlessly Across Public & Private Clouds

When: 24th June 2015, 12:05 – 12:25

Where: Employee Experience Theatre

Liam Quinn is IT Director of Richmond Events, pioneers of the one-to-one, pre-scheduled strategic business forums, aiming to match buyers with sellers.

We took a few minutes with him to talk about the challenges and status of Cloud specifically in the events sector, and the importance of SaaS versus IaaS and PaaS.

The interview…

So just to kick off, what do you feel are the unique challenges you face in the events sector?

Our challenges really are two-fold. The first one we have is the fact that there’s an explosion of technology at the moment within the industry. The challenge lies in trying to work out what is good, helpful technology that’s going to enhance the experience of our attendees, and filter out the stuff which is really a lot of hype, or good today but maybe not very useful in the future.

In terms of our part in the industry, being a multinational organization, we’re operating events in four different countries to a consistent and very similar model. So trying to make sure that we have the right technology in place that can support all four different business models is a challenge.

In terms of cloud technology specifically, how do you see the status of it in your sector in 2015? Do you feel it differs from other sectors?

It’s hard to believe we’re very different to anyone else, but that may be a naïve way of looking at it.  I think the cloud is impacting the sector in two ways.  First of all, there are many software solutions that are being developed at the moment and being pushed within the marketplace, which are very cloud-based. So the economies of scale are there, and the price per event or price per attendee is very low. These systems are utilizing the cloud model in order for these software solutions to be implemented across every event organizer who wishes to use it.

The second place, which is where we come in, along with a lot of our foreign competitors, is where people are trying to consolidate their internal IT systems in order to provide a much more cost-effective base for providing IT support to the business itself.

Leading on from that, would you therefore say SaaS is more imperative than IaaS or PaaS specifically for the events sector?

I think from a third-party solution perspective, most of the solutions being used are SaaS.  I don’t think event organizers want large IT teams, or want to be developing their own software.  So there’s a lot of software out there.  What they want is to consume it in any way they desire, in any location and that’s why they’re looking for software solutions available that they can just tap in, log in to, and work for their event.  We differ from that slightly in that all our systems are actually bespoke written for ourselves.

Download the full interview here!

Join Liam at the Cloud World Forum at London’s Olympia on the 24th of June for his session: Cloud as a Utility: Working Seamlessly Across Public & Private Clouds.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only Cloud & DevOps exhibition.

Register for your FREE exhibition pass here!


 

Exclusive Q&A with Fin Goulding, CIO of Paddy Power

Fin Goulding is Chief Information Officer at Irish bookmaker Paddy Power, and is speaking at Cloud World Forum at London’s Olympia on 24-25 June, about the Cloud and DevOps in his business. We took some time with Fin to talk about the challenges and status of Cloud in his sector, followed by an in-depth discussion on what DevOps means to him personally.

The interview…

We start off by talking about some of the challenges of being a CIO in the (largely online) gaming sector, one of which is that there are major (often sporting) events that happen at certain points in the year, and they have to be ready for those spikes in capacity demand.

Another major challenge in the sector is security, about which Fin asserts “we’re hyper-concerned about security in our world because we’re even more highly regulated than banking”. This is largely due to concerns about data loss, particularly in relation to the Cloud. When talking about this, he makes an excellent analogy: “if I put my bike in your house and it’s stolen, who’s responsible for that loss? It’s usually me”. This is a primary concern, and one about which Fin and his team have to be super diligent.

Sticking with Cloud technology, and the status of it within his sector, Fin feels that they are on a similar journey to many companies and industries, and that journey entails moving from “credit card Cloud” to “back office Cloud”: moving from niche Cloud use cases to IT teams working in a digital world, where back office systems (e.g. HR, finance, ticketing) are becoming “cloudified”, freeing the team up to spend more time on frontend work.

“But for us, like a number of companies, the next level will be enterprise level cloud, which is really a hybrid. It’s a capacity-on-demand model – recovery-as-a-service – or as Joe Baguley of VMware would call it, data center N+1, so that you’ve actually got this reliability in your production system.”

Download the full interview here!


Fin will be presenting in the Keynote Theatre at the Cloud World Forum, at Olympia Grand in London on the 24th – 25th June 2015, on ‘A Transformational Journey: Implementing DevOps & Agile at Scale’.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

Register for your FREE exhibition pass here!


 

 

Exclusive Q&A with Ian Helliwell, Senior Architect at AstraZeneca

Ian Helliwell is Senior Enterprise Architect at AstraZeneca.

He will be presenting in the Comms and Collaboration Theatre at Cloud World Forum at the Olympia in London (24-25 June) on Collaboration in the Cloud: Lessons Learnt from a Multinational, covering:

  • Understanding the experience of moving to a collaborative cloud in a large enterprise
  • Overcoming obstacles to opening up systems and openly sharing
  • Identifying and breaking down behavioural challenges
  • Leveraging the advantages at the end of the implementation journey

We took a few minutes with Ian to talk about his career and thoughts on the Cloud market, covering the specific challenges of the pharmaceutical industry, some of the innovative projects being undertaken at AstraZeneca, and the status of Cloud in his market, compared to other industry verticals.

Firstly, we discuss some of the challenges of the pharmaceutical sector.

What AstraZeneca aims for, compared to generic manufacturers who take established off-patent drugs and market them as cheaply as possible, is the innovative specialist drug market. The challenge there is the length of time it takes to get such a drug to market – “you get a 20 year patent on a new drug and yet it probably takes you at least 10 years to get it to market, to go through all the clinical trials, effective medicine, the regulatory hurdles to get it to a point they can launch it to the market”. So these innovative drugs can be seen as very expensive, and AstraZeneca are therefore challenged on cutting costs as much as possible, as well as on price, by the NHS or big healthcare providers in the US market, for example.

“So we have a big opportunity or a big challenge, whichever way you look at it, to try and make the best uses of the data and our processes to try and both cut down cost and also reduce those lead times.”

Another evolution that AstraZeneca and their pharmaceutical competitors are seeing is the degree of partnering in the industry. There are certain processes and products that other companies are better placed to develop, rather than them being developed in house. “There are small startup pharmaceutical research companies, BioPharms, the University spinoffs, the ones who have the creative ideas to develop a new drug but just don’t then have the size and weight of the company to take that drug through to market”. So they are looking at what is core to their business, and offloading anything that isn’t core to other organisations. From an IT perspective, a challenge of such partnerships is sharing data easily across company boundaries.

Speaking of data, another challenge is how to get the best out of their data. This is no longer just about information that they have within the company, but information that is available through other research organisations, academic institutions, and other published data.

“So it’s given a huge opportunity to use technologies and techniques for big data mining and to gather the intelligence from the massive amounts of data that this company has generated over many, many years.”

The amount of data generated for the pharmaceutical industry is only going to increase. There is therefore an opportunity to gather far more and far better patient-related data, with smart devices that patients can use to self-monitor, to help them administer their dosages and to record data. Previously the information that came back from the marketplace about the effectiveness of drugs was greatly averaged, but that data is becoming increasingly specific to the individual user.

Leading on from this, we talk about some of the innovative, exciting projects currently being undertaken at AstraZeneca. Many people see the pharmaceutical industry as being particularly advanced in terms of IT and analytics, and these analytical models are helping scientists improve the personalisation of medicine, looking at how a drug can be tailored to suit an individual rather than the attitude being “here’s a generic drug which everybody with that disease takes and hopefully it will have an overall good effect”.

Read the full interview here


Ian will be presenting in the Comms and Collaboration theatre at the Cloud World Forum, at Olympia Grand in London on the 24th – 25th June 2015, on ‘Collaboration in the Cloud: Lessons Learnt from a Multinational.’

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

Register for your FREE exhibition pass here!

 
