Posts tagged ‘cloud infrastructure’

Tackling the resource gap in the transition to hybrid IT

Is hybrid IT inevitable? That’s a question we ask customers a lot. From our discussions with CIOs and CEOs there is one overriding response, and that is the need for change. It is very clear that across all sectors, CEOs are challenging their IT departments to innovate – to come up with something different.

Established companies are seeing new threats coming into the market. These new players are lean, hungry and driving innovation through their use of IT solutions. Our view is that more than 70 percent of all CEOs are putting a much bigger ask on their IT departments than they did a few years ago.

There has never been so much focus on the CIO or IT departmental manager from a strategic standpoint. IT directors need to demonstrate how they can drive more uptime, improve the customer experience or enhance the e-commerce proposition, for instance, in a bid to win new business. For them, it is time to step up to the plate. But in reality there’s little or no increase in budget to accommodate these new demands.

We call the difference between what the IT department is being asked to do, and what it is able to do, the resource gap. Seemingly, with the rate of change in the IT landscape increasing, the demands on CIOs from the business increasing and with little or no increase in IT budgets from one year to the next, that gap is only going to get wider.

But by changing their way of working, companies can free up additional resources to go and find their innovative zeal and get closer to meeting their business’ demands. Embracing Hybrid IT as their infrastructure strategy can extend the range of resources available to companies and their ability to meet business demands almost overnight.

Innovate your way to growth

A Hybrid IT environment combines a company’s existing on-premises resources with public and private cloud offerings from a third-party hosting company. Hybrid IT has the ability to provide the best of both worlds – sensitive data can still be retained in-house by the user company, whilst the cloud, either private or public, provides the resources and computing power that is needed to scale up (or down) when necessary.

Traditionally, 80 percent of an IT department’s budget is spent just ‘keeping the lights on’. That means keeping servers working, powering desktop PCs, backing up work and handling general maintenance.

But with the CEO now raising the bar, more innovation in the cloud is required. Companies need to keep their operation running but reapportion the budget so they can become agile, adaptable and versatile enough to keep up with modern business needs.

This is where Hybrid IT comes in. Companies can mix and match their needs to any type of solution. That can be their existing in-house capability, or they can share the resources and expertise of a managed services provider. The cloud can be private – servers that are the exclusive preserve of one company – or public, sharing utilities with a number of other companies.

Costs are kept to a minimum because the company only pays for what they use. They can own the computing power, but not the hardware. Crucially, it can be switched on or off according to needs. So, if there is a peak in demand, a busy time of year, a last minute rush, they can turn on this resource to match the demand. And off again.

This is the journey to the Hybrid cloud and the birth of the agile, innovative market-focused company.

Meeting the market needs

Moving to hybrid IT is a journey.  Choosing the right partner to make that journey with is crucial to the success of the business. In the past, businesses could get away with a rigid customer / supplier relationship with their service provider. Now, there needs to be a much greater emphasis on creating a partnership so that the managed services provider can really get to understand the business. Only by truly getting under the skin of a business can the layers be peeled back to reveal a solution to the underlying problem.

The relationship between customer and managed service provider is now also much more strategic and contextual. The end users are looking for outcomes, not just equipment to plug a gap.

As an example, take an airline company operating in a highly competitive environment. They view themselves not as a people transportation business, but as a retailer providing a full shopping service (with a trip across the Atlantic thrown in). They want to use cloud services to take their customers on a digital experience, so the minute a customer buys a ticket is when the journey starts.

When the passenger arrives at the airport, they need to check in, choose the seats they want, drop their bags and clear security, all using online systems. Once in the lounge, they’ll access the Wi-Fi, check their Hotmail, browse Facebook, start sharing pictures and so on. They may also make last-minute adjustments to their journey, like changing their booking or choosing to sit in a different part of the aircraft.

Merely saying “we’re going to do this using the cloud” is likely to lead to the project misfiring. As a good partner, the service provider should have experience of building and running both traditional infrastructure environments and new ones based on innovative cloud solutions, so that they can bring ‘real world’ transformation experience to the partnership. Importantly, they must also have the confidence to demonstrate digital leadership, and an understanding of the business and its strategy, to add real value to that customer as it undertakes the journey of digital transformation.

Costs can certainly be rationalised along the way. Ultimately, with a hybrid system you only pay for what you use: peak periods will cost the same as, or less than, the off-peak operating expenses. So, with added security, compute power, speed, cost efficiencies and ‘value-added’ services, hybrid IT can provide the agility businesses need.

With these solutions, companies have no need to ‘mind the gap’ between the resources they need and the budget they have. Hybrid IT has the ability to bridge that gap and ensure businesses operate with the agility and speed they need to meet the needs of the competitive modern world.

Written by Jonathan Barrett, Vice President of Sales, CenturyLink, EMEA


Google and Why the New Standard for Modern Applications is a Non-Relational Database Deployed in the Cloud #CloudWF

Guest Blog with MongoDB


Author: Kelly Stirman, VP of strategy at MongoDB

It’s positively raining cloud stories. Sorry. Cloud puns are so over…cast. Regardless, recent months have seen some interesting developments in the high stakes game for control of the foundational layer of your application stack i.e. what database you use and where it’s deployed. In early May Google released Cloud BigTable as a managed NoSQL database. Two weeks later Gartner released its  Magic Quadrant for Cloud Infrastructure as a Service (Cloud IaaS) report.

While unrelated, the two announcements both shine some light on our path to a new, cloud-rich future. The aspiring cloud giant Google gave further validation, if it were needed, that NoSQL databases deployed in the cloud are the new standard for modern applications.

You see, the workload from modern applications is quite different from what it’s been in the past. Building your own data centre and installing a relational database was fine when you could predict the size, speed and type of data. Applications in 2015 are a different breed. The growth of social, mobile and sensor data has dramatically altered the way we approach development. Developers can’t tell in advance what any of this will look like in the final production version of their application, let alone future iterations.

Many organisations are already overcoming this by deploying non-relational databases on commodity hardware in the cloud. This approach lets companies gear up for massive scale and gives them enough flexibility to incorporate new data types that will support business processes and provide operational insight.
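As a rough, illustrative sketch of that flexibility – the connection string, collection and field names below are made up purely for the example – a document database such as MongoDB lets an application start recording entirely new kinds of data alongside existing records, with no schema migration:

```python
from datetime import datetime, timezone

from pymongo import MongoClient

# Hypothetical connection and collection names, for illustration only.
events = MongoClient("mongodb://localhost:27017")["analytics"]["events"]

# The application originally only recorded web click events...
events.insert_one({
    "type": "click",
    "user_id": 42,
    "page": "/checkout",
    "ts": datetime.now(timezone.utc),
})

# ...and can later start storing sensor readings with entirely different
# fields, in the same collection, without altering any schema up front.
events.insert_one({
    "type": "sensor_reading",
    "device_id": "thermostat-17",
    "temperature_c": 21.4,
    "battery_pct": 87,
    "ts": datetime.now(timezone.utc),
})
```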

Google’s BigTable and Gartner’s Magic Quadrant

Google’s announcement highlighted two things. One: the big infrastructure players are looking to diversify and find new ways to wring revenue from the big data stack. Two: BigTable’s release illustrated that all major data innovation is happening away from relational data models. Relational databases aren’t going anywhere fast, but they are challenged by the requirements of modern applications – in particular by the trickiest of the three Vs of big data, variety of data types. BigTable is yet another database-as-a-service offering designed to be deployed on the vendor’s own cloud infrastructure; see also Amazon and Microsoft.

From one cloud provider’s announcement, we now look at a broader view of the industry from Gartner. This is from the introduction to the Magic Quadrant for Cloud Infrastructure as a Service, Worldwide report[1]:

The market for cloud IaaS is in a state of upheaval, as many service providers are shifting their strategies after failing to gain enough market traction. Customers must exercise caution when choosing providers.

The report went on to explain that ‘all the providers evaluated are believed to be financially stable, with business plans that are adequately funded. However, many of the providers are undergoing significant re-evaluation of their cloud IaaS businesses’. In other words, some vendors may not be in it for the long haul.

What it means for you

As well as diversification into database services, the cloud competition is also sparking a healthy price war. Just a few days after the Magic Quadrant was released Google announced it was slashing prices by as much as 30%. Microsoft and Amazon are also fond of aggressive pricing as they try to eat as much market share as possible.

Which brings us back to Google’s launch of NoSQL database-as-a-service BigTable. The release came on the back of Microsoft’s recent Azure DocumentDB announcement and, of course, Amazon’s own DynamoDB offering. As the competition for cloud infrastructure drives margins down, the big players are looking up the stack to drive revenue, and it’s clear NoSQL technology is one of the most attractive areas.

Though it’s worth pointing out that these as-a-service database offerings generally come with a very narrow set of features. For example Cloud BigTable is a wide column store with a simple key-value query model. Like some other NoSQL databases, it is limited by:

  • A complex data model which presents a steep learning curve to developers, slowing the rate of new application development
  • Lack of features such as an expressive query language (key-value only), integrated text search, native secondary indexes, aggregations and more. Collectively, these enable organisations to build more functional applications faster

Ultimately, the cloud providers can relieve users of some of the overhead of running a database, but users will still have to deal with the complexity of mastering data models and working around key-value query limitations.
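To make that trade-off concrete, here is a minimal sketch using MongoDB’s Python driver, pymongo (the collection and field names are hypothetical): a secondary index, a multi-field query and a server-side aggregation, all of which would otherwise have to be re-implemented in application code against a key-value-only store.

```python
from pymongo import DESCENDING, MongoClient

# Hypothetical connection, collection and field names, for illustration only.
orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

# Secondary index on fields other than the primary key -- something a pure
# key-value store cannot use to serve lookups.
orders.create_index([("customer_id", 1), ("created_at", DESCENDING)])

# Expressive query: filter on several fields, project and sort server-side.
recent = orders.find(
    {"customer_id": 42, "status": "shipped"},
    {"_id": 0, "items": 1, "total": 1},
).sort("created_at", DESCENDING).limit(10)

# Aggregation: revenue per product computed inside the database, rather than
# fetching every document by key and summing in application code.
revenue_by_product = orders.aggregate([
    {"$unwind": "$items"},
    {"$group": {"_id": "$items.sku", "revenue": {"$sum": "$items.price"}}},
    {"$sort": {"revenue": -1}},
])

for row in revenue_by_product:
    print(row["_id"], row["revenue"])
```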

Out of the chaos it’s becoming clear that a non-relational database hosted in the cloud is going to be the predominant way modern companies deploy applications. Each customer will have varying demands for control. Some will want everything ‘as-a-service’; others will want full control over how and where their database runs and over security on each layer of the stack. In the modern world of cloud-ready, non-relational databases, you have more choice than ever. That choice can also bring a risk of vendor lock-in if you select an offering that is tied to one specific platform, no matter how ‘web-scale’ that platform claims to be.

—–

[1] Gartner, Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Lydia Leong et al, May 18, 2015

…………………………………………………………………………………………………………………

MongoDB will be exhibiting at the Cloud World Forum taking place on the 24th & 25th June 2015.

Kelly Stirman is Vice President of Strategy at MongoDB, speaking at the Cloud World Forum on the 25th June at 10.35 in Theatre A: Keynote – Building Business in the Cloud on ‘Escaping Cloud Cuckoo Land: 5 Tips for Making Success a Reality in the Cloud.’

REGISTER YOUR FREE EXHIBITION PASS HERE.


Risks of SaaS supplier failure & how to effectively mitigate them #CloudWF

Guest Blog with Kemp Little Consulting & NCC Group

The cloud is here to stay and according to a recent survey, organisations are going to be investing more in cloud services to support their core business operations.

But have companies properly considered the risks of SaaS supplier failure if the software is supporting their core processes?

The Kemp Little Consulting (KLC) team has been working with NCC Group to identify some of the risks of SaaS supplier failure and to identify the main problems that end user organisations would need to solve to effectively mitigate these risks.

In the on-premise world, the main way of mitigating against software supplier failure is Software Escrow. This was designed as a means of gaining access to source code for an application in the event of supplier failure.

If a supplier goes bust, there is no short term problem as the application and the business processes supported by the application continue to work and the corporate data remains within the control of the end user.

However, the end user company does have a long-term problem, as they will not be able to maintain the application themselves – an issue effectively solved by Software Escrow and related services such as verification.

In the cloud arena, however, the situation is different. If the supplier fails there is potentially an immediate problem of the SaaS service being switched off almost straightaway because the software supplier no longer has the cash to continue to pay for its hosting service or to pay its key staff.

For the end user, this means that they no longer have access to the application; the business process supported by the application can no longer operate and the end user organisation loses access to their data.

The business impact of this loss will vary depending upon the type of application affected:

  • Business Process Critical (e.g. finance, HR, sales and supply chain)
  • Data Critical (e.g. analytics or document collaboration)
  • Utility (e.g. web filtering, MDM, presentational or derived data)

In our research, we found that both suppliers of cloud solutions and end user organisations had not properly thought through the implications of these new risks, nor the services they would require to mitigate against the risk of supplier failure.

The primary concerns that end user customers had were around their business-critical data. They were concerned by lack of access to data; loss of data; the risk of a compliance breach through losing control of their data; and how they might rebuild their data into usable form if they could get it back. There was also concern about access to funding to keep the SaaS vendor’s infrastructure running in order to buy time to make alternative arrangements.

They were much less concerned about access to the application or getting access to the source code.

This is understandable as their primary concern would be getting their data back and porting it to another solution to get the business back up and running.

In a separate part of our study, the Kemp Little commercial team looked at the state of the market in terms of the provisions generally found in SaaS contracts to deal with supplier failure. The team found that even if appropriate clauses are negotiated into the contract at the outset, there may be real difficulties in practically enforcing those terms in an insolvency situation.

End user organisations were more concerned than SaaS suppliers about their capability to deal with all of these problems and were amenable to procuring services from third parties to help them mitigate the risks and solve the problems they could not solve purely by contractual means.

End users were also concerned that many SaaS solutions are initially procured by “Shadow-IT” departments as part of rapid business improvement projects and deployed as pilots where the business risks of failure are low.

However, these solutions can often end up being rolled out globally quite quickly and key parts of the business become dependent upon them by stealth.

It is therefore important for companies to develop a deep understanding of their SaaS estate, regularly review the risks of supplier failure and put in place appropriate risk mitigation measures.

KLC recently worked with global information assurance specialist NCC Group to help it enhance the service model for its SaaS Assured service.

This article was originally posted on the Kemp Little Blog and can be found here.

…………………………………………………………………………………………………………………

John Parkinson, Global SaaS Business Leader at NCC Group will be speaking at the Cloud World Forum on 24th June 2015 at 12.45pm.

His talk will take place in Theatre D: Cloud, Data Governance & Cyber Security on ‘Outsourcing to Software as a Service? Don’t Overlook the Critical Commercial Security Risks.’

REGISTER YOUR FREE EXHIBITION PASS HERE.


Scaling Your Application Efficiently – Horizontal or Vertical? #CloudWF

Guest Blog with AppDynamics

Author: Eric Smith at AppDynamics

Anyone deploying an application in production probably has some experience with scaling to meet increased demand. A generation ago, virtualization made scaling your application as simple as increasing your instance count or size. However, now with the advent of cloud, you can scale to theoretical infinity. Maybe you’ve even set up some auto-scaling based on underlying system metrics such as CPU, heap size, thread count, etc. Now the question changes from “Can I scale my environment to meet demand?” (if you add enough computing resources you probably can), to “How can I efficiently scale my infrastructure to accommodate my traffic, and if I’m lucky maybe even scale down when needed?” This is a problem I run into almost every day dealing with DevOps organizations.

If your application environment looks like this (if so, I’d love to be you):

[Screenshot: a simple application environment]

You can probably work your way through to the solution, eventually. Run a bunch of load tests, find a sweet spot of machine size based on the performance under the test parameters, and bake it into your production infrastructure. Add more instances to each tier when your CPU usage gets high. Easy. What if your application looks like this?

[Screenshot: a more complex application environment]

What about when your application code changes? What if adding more instances no longer fixes your problem? (Those do cost money, and the bill adds up quickly…)

The complexity of the problem is that CPU bounding is only one aspect — most applications encounter a variety of bounds as they scale and they vary at each tier. CPU, memory, heap size, thread count, database connection pool, queue depth, etc. come into play from an infrastructure perspective. Ultimately, the problem breaks down to response time: how do I make each transaction as performant as possible while minimizing overhead?

The holy grail here is the ability to determine dynamically how to size my app server instances (right size), how many to create at each level (right scale) and when to create them (right time). Other factors come into play as well such as supporting infrastructure, code issues, and the database — but let’s leave that for another day.

Let me offer a simple example. This came into play recently when working with a customer analyzing their production environment. Looking at the application tier under light/normal load, it was difficult to determine what factors to scale; we ended up with this:

[Chart: response time against load, under light/normal load]

Response time actually decreases toward the beginning of the curve (possibly a caching effect?). But if you look at the application under heavier load, things get more interesting. All of a sudden you can start to see how performance is affected as demand on the application increases:

[Chart: response time against load, under heavier load]

Looking at a period of heavy load in this specific application, hardware resources are actually still somewhat lightly utilized, even though response time starts to spike:

[Charts: hardware resource utilisation during the period of heavy load]

In this application, it appears that response time is actually more closely correlated with garbage collection than any specific hardware bound.

While there is clearly some future effort here to look at garbage collection optimization, in this case optimizing best fit actually comes down to determining desired response time, maximum load for a given instance size maintaining that response time, and cost for that instance size. In a cloud scenario, instance cost is typically fairly easy to determine. In this case, you can normalize this by calculating volume/(instance cost) at various instance sizes to determine a better sweet spot for vertical scale.
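As a back-of-the-envelope illustration of that normalisation – the instance sizes, prices and measured capacities below are entirely hypothetical – you can rank instance sizes by how much compliant throughput each unit of spend buys:

```python
# Hypothetical load-test results: the maximum requests per second each
# instance size can sustain while staying inside the target response time,
# and what that instance costs per hour.
instance_profiles = {
    "small":  {"max_rps": 120, "cost_per_hour": 0.10},
    "medium": {"max_rps": 260, "cost_per_hour": 0.20},
    "large":  {"max_rps": 430, "cost_per_hour": 0.40},
    "xlarge": {"max_rps": 600, "cost_per_hour": 0.80},
}

def throughput_per_dollar(profile: dict) -> float:
    # Compliant requests per second delivered per dollar per hour; higher is better.
    return profile["max_rps"] / profile["cost_per_hour"]

# Rank instance sizes by cost efficiency to find the vertical-scale sweet spot.
for name, profile in sorted(instance_profiles.items(),
                            key=lambda kv: throughput_per_dollar(kv[1]),
                            reverse=True):
    print(f"{name}: {throughput_per_dollar(profile):.0f} rps per $/hour")
```

The size with the best ratio that still leaves headroom for the expected peak becomes the candidate for vertical scale.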

Horizontal scale will vary somewhat by environment, but this tends to be more linear — i.e. each additional instance adds incremental bandwidth to the application.

There’s still quite a bit more room for analysis of this problem, like resource cost for individual transactions, optimal response time vs. cost to achieve that response time, synchronous vs. asynchronous design trade-offs, etc. but these will vary based on the specific environment.

Using some of these performance indicators from the application itself (garbage collection, response time, connection pools, etc.) rather than infrastructure metrics, we were able to quickly and intelligently right-size the cloud instances under the current application release, as well as determine several areas for code optimization to help improve overall efficiency. While the code optimization is a forward-looking project, the scaling question was in response to an impending near-term event that needed to be addressed. Answering the question in this way allowed us to meet that near-term deadline while remaining flexible enough to accommodate any forthcoming optimizations or application changes.
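To sketch what scaling on application-level indicators rather than raw infrastructure metrics can look like – the metric names and thresholds below are illustrative only, not a description of any particular product – a simple decision rule might combine response time, garbage collection overhead and connection pool saturation:

```python
from dataclasses import dataclass

@dataclass
class AppMetrics:
    p95_response_ms: float      # 95th-percentile transaction response time
    gc_time_pct: float          # share of wall-clock time spent in garbage collection
    db_pool_utilisation: float  # fraction of the DB connection pool in use

def scale_decision(m: AppMetrics) -> str:
    """Choose a scaling action from application-level signals, not CPU usage.

    Thresholds are hypothetical; in practice they would come from load tests
    against the desired response-time target.
    """
    if m.p95_response_ms > 800 and m.gc_time_pct > 10:
        return "scale up: larger heap / bigger instance (vertical)"
    if m.p95_response_ms > 800 or m.db_pool_utilisation > 0.9:
        return "scale out: add an instance to this tier (horizontal)"
    if m.p95_response_ms < 200 and m.db_pool_utilisation < 0.3:
        return "scale in: remove an instance"
    return "hold"

print(scale_decision(AppMetrics(p95_response_ms=950, gc_time_pct=14,
                                db_pool_utilisation=0.55)))
```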

Interested to see how you can scale your environment? Check out a FREE trial now!

…………………………………………………………………………………………………………………

John Rakowski, Chief Technology Strategist at AppDynamics will be speaking at the Cloud World Forum on 25th June 2015 at 2.25 pm.

His talk will take place in Theatre C: SDE & Hyperscale Computing on ‘Three Rules for the Digital Enterprise’. 

REGISTER YOUR FREE EXHIBITION PASS HERE.


Telstra wraps up Pacnet acquisition


The Telstra/Pacnet acquisition story, which broke towards the end of last year, has now come to fruition, with the Australian telco today announcing the completed acquisition of the cloud, managed services and data centre provider. As reported by Telecoms.com in December, the valuation of the deal came in at $697 million.

When initially announced, the deal came with the stipulation of agreement from regulatory bodies, as well as Pacnet financier approval. According to Telstra, all necessary approvals and agreements have now been confirmed, and the firm can now begin the full acquisition of Pacnet.

All that remains, it claims, is full regulatory approval in the United States, which it expects in due course and which will not impact operations or the agreed purchase price.

Speaking of the acquisition, Telstra’s Global Enterprise and Services CEO, Brendon Riley, said the integration of Pacnet will see its brand gradually retired, but that the Chinese market remains a big focus for the joint-venture.

“The addition of Pacnet’s staff, infrastructure, technology and expertise will position Telstra as a leading provider of services to multinational and large companies in Asia,” he said. “The completed acquisition will double Telstra’s customers in Asia, and greatly increase our network reach and data centre capabilities across the region. This includes the addition of the largest privately owned intra-Asia cable network, 29 data centres and the ability to further grow our China operations through existing joint venture.”

Riley concluded with a nod towards the Pacnet Enabled Network (PEN), an elastic and on-demand network based on SDN architecture, pioneered by Pacnet. PEN was one of the first live SDN-based networks launched globally.

“The acquisition provides us greater specialisation and scale, including the delivery of enhanced services, such as software-defined networking and opens up significant incremental opportunities for our business,” he said.

……………………………………………………………………………………………………………………………………………

Visit the Cloud World Forum taking place on the 24th – 25th June 2015 at Olympia Grand in London.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

Register for your FREE exhibition pass here!


The future call centre: 10 predictions for the next 10 years

Guest Blog with NewVoiceMedia

What will the call centre of 2025 look like?

Well, to start with, it’s unlikely to be a physical ‘centre’ anymore. The rise of cloud technology is predicted to lead to an increase in remote working. But this move outside the office walls is far from businesses shunning the contact centre.

The omnipresent eye of social media has put companies in the limelight – for good and for bad – pushing customer service right to the top of the priority list. As a result, customer service looks set to become a key differentiator from now on, and the call centre will be at the forefront of this strategy.

Here we explore the trends that look set to transform the call centre in ten years’ time.

1. The call centre will become a ‘relationship hub’

For years, many have considered the call centre as a way of dealing with immediate problems. This led to a short-term strategy of dealing with one customer emergency after another – reacting instead of adapting to the needs of the customer. Instead of picking up the pieces when things go wrong, we predict that the contact centre will become an integral part of business strategy, acting as a ‘relationship hub’.

Contact centre agents are the first to know if something isn’t working and are therefore perfectly poised to advise the business. It’s the people on the other end of the phone that know what the customers really think. Customer service can be seen as an afterthought – what happens after the marketing department has reeled them in, but really, it should be part of every stage of business development, supplying sales and marketing with repeat purchasers and advocates, as well as an essential data point for product management and development.

2. Customer service agents will become ‘super agents’

As the call centre becomes an increasingly important part of the business, so do the people that work there. They will need to adapt their skillset to meet the demands of the future customer and the expectations directors place on the contact centre. Plus, with the rise of ‘self-help’ and user communities, only the most complex problems will end up in a call centre. Agents will need to be ready to tackle challenging issues and be able to unpick the situation to pinpoint what exactly went wrong.

It’s therefore not surprising that in the next ten years, the average customer service agent will need to have a much wider range of skills. Aside from excellent communication skills, they’ll need analytical problem-solving skills, project management – and in some cases, technical training, in order to understand the finer details of the product or service. Alongside all of this, customer service agents will need to be able to adapt to changes in technology – from becoming an expert in every new app and social network, to utilising the increasing range of data on their CRM.

3. Call routing systems will find the ‘perfect match’ 

Intelligent call-routing is already available now, but it’s predicted to grow in the next ten years – matching the customer with the right expert almost instantly. As CRM and workflow management systems develop, a complex ‘match-making’ process will occur every time a customer calls, to ensure the right expert is on hand to solve every problem. Many also believe that organisations will begin to publish their agents’ availability online, so that customers can pick the agent that best suits their needs and call them directly.

4. Web chat will become an increasingly popular customer service channel

It can be frustrating to be on the other end of a phone – whether you’re an agent or a customer, the channel has its limits. The success of Amazon Mayday has made video-based live chat a real possibility. The channel has huge potential, because it allows agents to develop a more personal connection with customers through face-to-face chat. Plus, have you ever wanted to show a customer how something works? With video chat, this becomes a possibility. It also eliminates the idea of being put on hold – even if the agent isn’t speaking, the customer is connected via the visual feed. Video web chat also allows contact centres to anticipate problems as customers navigate their website and ensure the right agent pops up at the right time.

5. Customer service will become the key differentiator

With the rise of intangible products, which only exist via your mobile or laptop, customer experience is becoming more important as a differentiator. Consumers don’t just want great customer service, they demand it. In the UK, half of consumers said they would buy from a competitor as the result of poor customer experience. This is similar in the US, with 44% of consumers taking their business elsewhere as a result of inadequate service.

Plus, with the death of sustainable competitive advantage, companies can no longer rely on their well-defined niche to keep them ahead. The elusive ‘experience’ becomes more important and customer service moves straight to the top of the agenda. Add to this the growth of social media and customer service has transformed from a one-to-one interaction to a public conversation. With customer service becoming this transparent, companies have realised they need to up their game. You can no longer hide bad customer service behind closed doors; every business has an online footprint of their successes and failures for all to see. As a result, companies will start to compete to offer the best customer service – with social media recommendations being the ultimate prize.

6. Mobile is the future – for customer service agents and customers

According to the Economist, mobile apps are predicted to become the second most important channel for engaging with brands – just behind social media. And it’s not just about apps, as the mobile phone becomes an increasingly important part of everyday life. It’s how your customers are most likely to get in contact with you – via email, live chat, social media or in a voice call. Companies need to optimise their mobile functionality for this – particularly by allowing customers to multi-task on their mobile. For instance, being able to read the FAQs page while on the phone to the customer service agent. Your customer service agents will make the same demands for mobile. Being able to access a mobile CRM is a key ingredient for flexible working.

7. Expect channel preferences to change (and change again)

As consumers demand a personalised approach to just about everything, they expect to be able to mix and match customer service channels to create a tailor-made service. However, it’s becoming increasingly hard to predict and plan for this channel-hopping. That’s why we predict that whatever the preference is at the moment, it will change in the next ten years – probably several times. How well contact centres are able to adapt to customers switching between channels will determine their success.

This is particularly true if businesses want to appeal to the millennial generation, who are notorious for channel-switching, as they move from mobile to tablet to laptop, all in a matter of hours. Being able to follow those channel hops while maintaining the context of the interaction is key to customer service success. And it’s not just about keeping up with the change in device or channel, businesses need to keep up with the technology itself. New apps and social networks are launched all the time – WhatsApp is a great example of a channel that’s taken off rapidly and is becoming a popular choice for customer service.

8. Voice biometrics will replace security questions

“What’s your mother’s maiden name?” is one of many common security questions, but in the next ten years it will be how the customer answers a question, rather than the answer itself, that confirms their identity. Gathering the unique ‘voiceprints’ of your customers could be the answer to security problems as voice biometrics technology develops. It’s much harder to replicate the human voice than it is to steal facts about a customer. Voice biometrics record the intricacies of the human voice – from picking up on the size and shape of the mouth to the tension of the vocal cords.

9. Remote working and location-based services will increase

With the rise of cloud-based SaaS, having all your agents in one place is no longer necessary. It’s actually much more than unnecessary – switching to remote working agents has lots of benefits. This approach can reduce the costs associated with running a call centre and give employees greater flexibility. It is predicted that the growing number of virtual call centres could lead to more location-based services. For instance, a customer calling a company could be automatically connected to an agent working remotely a few miles from their location. The agent could even arrange to meet the customer if necessary, which could be very useful for certain sectors.

10. The “internet of things”

Described by many as the third great wave of computing – the “internet of things” or the “internet of everything” could change the way the world works. With more and more devices being able to connect to other devices or people independently, it gives rise to a world where almost everything is connected. This could have huge implications for the contact centre, enabling businesses to deliver pre-emptive service. For instance, if a patient’s heart monitor is over-heating, the device could send an automated service request to the right team. On a more domestic level, washing machines may be able to self-diagnose problems and notify the manufacturer when the part needs replacing – taking the customer out of the equation altogether.

The implication is that attitudes will shift – instead of buying a product, consumers will be buying a product with built-in customer service, raising the stakes for getting service right.

……………………………………………………………………………………………………………………………………………

NewVoiceMedia are a Salesforce Pavilion Partner and exhibitor at Cloud World Forum, taking place on the 24th – 25th June 2015 at Olympia Grand in London.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

Register for your FREE exhibition pass here!


 

US Army deploys hybrid cloud for logistics data analysis

The US Army is partnering with IBM to deploy a hybrid cloud platform to support data warehousing and data analysis for its Logistics Support Activity (LOGSA) platform, the Army’s logistics support service.

LOGSA provides logistics information capabilities through analytics tools and BI solutions to acquire, manage, equip and sustain the materiel needs of the organisation, and is also the home of the Logistics Information Warehouse (LIW), the Army’s official data system for collecting, storing, organizing and delivering logistics data.

The Army said it is working with IBM to deploy LOGSA, which IBM says is the US federal government’s largest logistics system, on an internal hybrid cloud platform in a bid to improve its ability to connect to other IT systems, broaden the organisation’s analytics capabilities, and save money (the Army reckons up to 50 per cent).

Anne Altman, General Manager for U.S. Federal at IBM, said:

“The Army not only recognized a trend in IT that could transform how they deliver services to their logistics personnel around the world, they also implemented a cloud environment quickly and are already experiencing significant benefits. They’re taking advantage of the inherent benefits of hybrid cloud: security and the ability to connect it with an existing IT system. It also gives the Army the flexibility to incorporate new analytics services and mobile capabilities.”

The Cloud World Forum will be taking place on 24th – 25th June at Olympia Grand in London.

