Guest Blog with Avangate

Author: Michael Ni, CMO/SVP, Marketing and Products, Avangate

Sometimes it seems like just yesterday that everything was getting “cloud-ified,” from photo sharing to customer relationship management, but the move to the cloud is actually years old by now. Now that we all have our documents stored in the cloud (and our heads out of the clouds), everybody’s looking for a clear path toward success in the latest trend: the Internet of Things.

Just like the cloud before it, the Internet of Things is now top of mind for software professionals. Its promise has been a long time coming: although Dick Tracy’s 2-Way Wrist Radio first appeared in 1946, connected devices like the Fitbit and Apple Watch are only just getting into the hands – or onto the wrists – of everyday folks.

With broader adoption of connected devices come both opportunities and challenges. Even the companies that are able to sell IoT hardware successfully find themselves needing to develop and monetize complementary services to help users get the most out of their devices. And software-focused companies that don’t have devices need a new way to get in on the IoT and the billions it’s expected to bring in. That way is through data.

While the IoT started out with connected sensors, it soon became clear that simply sensing data wouldn’t be enough. Just as storing content in the cloud required building interfaces that made it easy for users to access that content, IoT sensors now need to produce data that’s easy for people to find, understand, and use. And because IoT data is so valuable (not to mention expensive to produce), companies need a way to monetize it. So if wave 1 of the IoT trend involved simply creating the sensors, wave 2 involves monetizing them and the data they create.

As a result, more and more software vendors have started staking a claim in the IoT. At Avangate, we’ve been helping companies like Bitdefender monetize their IoT offerings. Bitdefender offers a “security of things” solution called BOX, a small device that scans for IoT threats on a local WiFi connection. By monitoring the way your smart devices stay connected, BOX finds and protects against possible threats to your connected information. By helping Bitdefender easily monetize its entry into the IoT, including not only the device itself but also associated data, we’re showing the importance and ease of monetizing IoT devices and the data they produce.

And that’s the key: commerce absolutely has to run in the background of every IoT play. No matter how affordable a device is up front, or how freely its data streams flow for now, devices and data both cost a significant amount to create, maintain, and provide in ways that really work for consumer and business customers. As a result, to truly succeed in the IoT, software companies need to be able to package and sell data derived from connected devices in ways that benefit other entities as well.

In the end, it’s clear that the desperate need for IoT data monetization is actually a massive opportunity. Companies are still scrambling to create devices and support data, and not enough of them are thinking about how to monetize it. Those who can successfully package and sell information in the IoT era may find themselves enjoying Salesforce-style status and riding high on the wave of the future as the IoT truly takes off.

…………………………………………………………………………………………………………………

Avangate will be exhibiting at the Cloud World Forum on Stand D48, taking place on the 24th – 25th June 2015.

REGISTER YOUR FREE EXHIBITION PASS HERE.



Guest Blog with Equinix

Enterprise cloud usage is nearly universal, but there’s still significant room for cloud growth.

That sums up one of the key findings of RightScale’s 2015 “State of the Cloud Report.” The survey of 930 technical professionals indicates the enterprise has moved past its initial cloud skittishness and is getting quite comfortable investigating what the cloud can really do.


The survey showed 93% of respondents have adopted cloud, roughly the same as the prior year. Hybrid cloud is also the preferred strategy of 58% of respondents, compared to 30% who are public cloud-only and 5% who are private cloud-only.

One key difference from 2014 is that 38% of cloud users are now classified by RightScale as “cloud explorers,” compared to 25% just a year ago when “cloud beginners” was the biggest category. “Cloud explorers” already have multiple projects and applications in the cloud and are looking to expand and improve their cloud use.

The survey also found plenty of room for cloud expansion, with 68% of enterprise respondents reporting that less than a fifth of their applications are currently running in the cloud. Most respondents (55%) also report that another fifth of their applications are already built on cloud-friendly architectures.

Here’s more of what we found most interesting in the State of the Cloud report:

Going public, staying private

Public cloud is being used by 88% of organizations, while 63% are using private cloud. But private cloud is still carrying a heavier workload, with 13% of enterprises running more than 1,000 virtual machines (VMs) in the public cloud and 22% running more than 1,000 virtual machines in private cloud. The survey also indicated enterprises are expecting to grow public cloud workloads more quickly.

Central IT gets more cloud comfortable

The survey authors note that in 2014, business units envisioned a more limited role for central IT in cloud purchasing decisions, likely because they felt central IT was generally too cautious. But central IT’s view of the cloud may be evolving. The survey indicated central IT concerns about cloud security have dropped, with 41% now reporting it as a significant challenge, compared to 47% a year ago. In addition, 28% of central IT respondents report public cloud as the top priority in 2015, compared to 18% in 2014.

More of the same

Respondents cited the same cloud benefits and challenges in 2015, but in many cases mentioned them more frequently. For instance, “greater stability,” “faster access to infrastructure,” and “high availability” were again the top three benefits, but each was cited by a greater percentage of respondents:

[Chart: top cloud benefits cited, 2015 vs. 2014]

A similar pattern was seen when respondents were asked about cloud challenges. “Security,” “lack of resources/expertise” and “compliance” again appeared as major concerns, but were referred to by a greater percentage of respondents, compared to last year:

[Chart: top cloud challenges cited, 2015 vs. 2014]

Learn more about how Equinix can help your enterprise realize cloud benefits and meet cloud challenges.

…………………………………………………………………………………………………………………

Equinix will be at the Cloud World Forum on Stand D170, taking place on the 24th – 25th June 2015. Don’t miss their session on ‘An Expedition through the Cloud’ in the Employee Experience Theatre at 10.35am on Day 2.

REGISTER YOUR FREE EXHIBITION PASS HERE.


Guest Blog with Intermedia

The 6 hidden costs of cloud IT services

So you’re considering moving email, file management, or archiving to the cloud. You even have quotes from a few providers you’re checking out. Great! It’s a step in the right direction to support your company’s growth. But be careful: what you end up paying might not always match your quote. There are costs beyond the monthly service fee.

The good news is that those hidden costs are avoidable. To help you with your due diligence, we compiled a list of the costs you may encounter.

  1. The cost of migrating data to the new service

Let’s say you’re switching email providers. You might think data migration is free. And it might even be—in the sense that it’s not a line item in the invoice. But if you have to do it yourself, it will cost the valuable time of your IT staff. And what if you need assistance? Some providers may only help for a fee, and others will refer you to a third-party consultant. So make sure you ask about data migration, and make sure your provider includes white-glove migration service for free.

  2. The cost of downtime imposed by low reliability

When an essential IT service is unavailable, your business incurs extremely high costs: your employees can’t do their jobs, your customers get angry, you lose sales, and IT resources are diverted to cope with the crisis. Many providers promise 99.9% uptime. And while this may sound good, it actually adds up to more than 525 minutes of unplanned downtime per year. Consider this and settle for no less than a 99.999% uptime guarantee, which works out to less than 30 seconds of downtime a month (the quick calculation below shows how these numbers fall out).
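
To make the arithmetic concrete, here is a minimal Python sketch (our illustration, not any provider’s tooling) that converts an uptime guarantee into the downtime it actually permits:

```python
# Convert an uptime SLA percentage into the downtime it actually allows.
# Illustrative sketch only; the figures match the "nines" discussed above.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(uptime_percent: float) -> float:
    """Unplanned downtime (minutes/year) permitted by an uptime SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.9, 99.99, 99.999):
    per_year = downtime_minutes_per_year(sla)
    per_month = per_year / 12
    print(f"{sla}% uptime -> {per_year:7.1f} min/year ({per_month * 60:6.1f} sec/month)")

# 99.9%   -> 525.6 min/year (almost nine hours)
# 99.999% -> about 5.3 min/year, i.e. roughly 26 seconds per month
```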

  3. The cost of not getting enough support

When you’re experiencing a problem, regardless of its severity, you need quick answers or your productivity suffers. You can’t be productive if you’re on hold—or if you’re pushed to self-help support portals. However, many providers only offer phone support for critical or tier 2 issues. Even then, support centers are often outsourced or staffed with non-certified personnel. These factors add up to costly unproductive time. A good support plan will include 24/7 live support, short hold times, and skilled, certified staff.

  4. The cost of sub-par security and protection

Security breaches are not just a costly drain on time; they create risk that could hurt your business. So where security is concerned, you must be confident that your business cloud provider has you covered. However, many providers use lesser-known security tools, and fewer still help respond to eDiscovery requests. Make sure you get the nitty-gritty details on security procedures from your provider, and don’t settle for less than the gold standard and the best-known names.

  5. The cost of management inefficiency

Your cloud management console should be powerful enough to support your IT needs, but simple enough that you can use it easily, and even delegate certain tasks to non-technical staff. Otherwise your IT team wastes precious time on tasks that should be trivial. Many management consoles are quite complicated, and most provide no way to manage additional third-party services. Make sure you get a solution that balances ease of use with granular control to avoid imposing undue labour costs on your IT team.

  6. The cost of services that lack integration

Your business is probably adding more and more cloud services. But as you add more services, you introduce more support, billing and management complexities. And so you end up in a tangle of services that you have to untie. Compare this to top providers with integrations that let you share user and device settings across services. Without this, the cost of managing your IT can skyrocket.

Choose your cloud provider carefully.

As the customer, you have a choice. Choose a cloud-based IT services provider that offers you transparent and worry-free service. Insist on getting the full range of services with no hidden costs, including migration, security, and management. Make it easy for your IT staff to get the support they need: look for 24/7 phone and chat support for admins, handled by certified staff. And don’t settle when it comes to the service level agreement: make sure you get “five nines” uptime. That way, you can focus on growing your business.

www.intermedia.co.uk
+44(0)203 384 2158

…………………………………………………………………………………………………………………

Intermedia will be exhibiting at the Cloud World Forum taking place on the 24th & 25th June 2015, on Stand D160.

REGISTER YOUR FREE EXHIBITION PASS HERE.


Guest Blog with Kemp Little Consulting & NCC Group

The cloud is here to stay and according to a recent survey, organisations are going to be investing more in cloud services to support their core business operations.

But have companies properly considered the risks of SaaS supplier failure if the software is supporting their core processes?

The Kemp Little Consulting (KLC) team has been working with NCC Group to identify the risks of SaaS supplier failure and the main problems that end user organisations would need to solve to mitigate those risks effectively.

In the on-premise world, the main way of mitigating the risk of software supplier failure is Software Escrow, which was designed as a means of gaining access to an application’s source code in the event of supplier failure.

If a supplier goes bust, there is no short-term problem: the application and the business processes it supports continue to work, and the corporate data remains within the control of the end user.

However, the end user company does have a long-term problem: it will not be able to maintain the application indefinitely. This is the issue that Software Escrow and related services, such as verification, effectively solve.

In the cloud arena, however, the situation is different. If the supplier fails, the SaaS service may be switched off almost straightaway, because the software supplier no longer has the cash to pay for its hosting service or to pay its key staff.

For the end user, this means that they no longer have access to the application; the business process supported by the application can no longer operate and the end user organisation loses access to their data.

The business impact of this loss will vary depending upon the type of application affected:

  • Business Process Critical (e.g. finance, HR, sales and supply chain)
  • Data Critical (e.g. analytics or document collaboration)
  • Utility (e.g. web filtering, MDM, presentational or derived data)

In our research, we found that neither suppliers of cloud solutions nor end user organisations had properly thought through the implications of these new risks, or the services they would require to mitigate the risk of supplier failure.

The primary concerns that end user customers had were around their business-critical data. They were concerned by lack of access to data; loss of data; the risk of a compliance breach from losing control of their data; and how they might rebuild their data into usable form if they could get it back. There was also concern about access to funding to keep the SaaS vendor’s infrastructure running in order to buy time to make alternative arrangements.

They were much less concerned about access to the application or getting access to the source code.

This is understandable as their primary concern would be getting their data back and porting it to another solution to get the business back up and running.

In a separate part of our study, the Kemp Little commercial team reviewed the provisions generally found in SaaS contracts for dealing with supplier failure. The team found that even if appropriate clauses were negotiated into the contract at the outset, there may be real difficulties in practically enforcing those terms in an insolvency situation.

End user organisations were more concerned than SaaS suppliers about their capability to deal with all of these problems and were amenable to procuring services from third parties to help them mitigate the risks and solve the problems they could not solve purely by contractual means.

End users were also concerned that many SaaS solutions are initially procured by “Shadow-IT” departments as part of rapid business improvement projects and deployed as pilots where the business risks of failure are low.

However, these solutions can often end up being rolled out globally quite quickly and key parts of the business become dependent upon them by stealth.

It is therefore important for companies to develop a deep understanding of their SaaS estate, regularly review the risks of supplier failure, and put appropriate risk mitigation measures in place.

KLC recently worked with global information assurance specialist NCC Group to help it enhance the service model for its SaaS Assured service.

This article was originally posted on the Kemp Little Blog and can be found here.

…………………………………………………………………………………………………………………

John Parkinson, Global SaaS Business Leader at NCC Group will be speaking at the Cloud World Forum on 24th June 2015 at 12.45pm.

His talk will take place in Theatre D: Cloud, Data Governance & Cyber Security on ‘Outsourcing to Software as a Service? Don’t Overlook the Critical Commercial Security Risks.’

REGISTER YOUR FREE EXHIBITION PASS HERE.


Guest Blog with AppDynamics

Author: Eric Smith at AppDynamics

Anyone deploying an application in production probably has some experience with scaling to meet increased demand. A generation ago, virtualization made scaling your application as simple as increasing your instance count or size. However, now with the advent of cloud, you can scale to theoretical infinity. Maybe you’ve even set up some auto-scaling based on underlying system metrics such as CPU, heap size, thread count, etc. Now the question changes from “Can I scale my environment to meet demand?” (if you add enough computing resources you probably can), to “How can I efficiently scale my infrastructure to accommodate my traffic, and if I’m lucky maybe even scale down when needed?” This is a problem I run into almost every day dealing with DevOps organizations.
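
For reference, the metric-threshold style of auto-scaling mentioned above, reduced to its essence, looks something like the sketch below. This is our deliberately simplified illustration, not any particular product’s API; the metric names and thresholds are hypothetical.

```python
# A deliberately simplified sketch of auto-scaling on infrastructure metrics
# (CPU, heap, thread count). All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class TierMetrics:
    cpu_percent: float    # average CPU utilization across the tier
    heap_percent: float   # average heap usage per instance
    thread_count: int     # average active threads per instance

def desired_instance_delta(m: TierMetrics) -> int:
    """Return +1 to scale out, -1 to scale in, 0 to hold steady."""
    if m.cpu_percent > 80 or m.heap_percent > 85 or m.thread_count > 400:
        return 1    # any single bound being hit forces a scale-out
    if m.cpu_percent < 20 and m.heap_percent < 40 and m.thread_count < 100:
        return -1   # scale in only when every metric shows headroom
    return 0

print(desired_instance_delta(TierMetrics(92.0, 60.0, 250)))  # 1: CPU bound hit
```

Rules like this capture the infrastructure view, but they say nothing about how the application itself is behaving, which is exactly where things get interesting.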

If your application environment looks like this (if so, I’d love to be you):

[Screenshot: a simple application environment with only a handful of tiers]

You can probably work your way through to the solution, eventually. Run a bunch of load tests, find a sweet spot of machine size based on the performance under the test parameters, and bake it into your production infrastructure. Add more instances to each tier when your CPU usage gets high. Easy. What if your application looks like this?

[Screenshot: a far more complex, many-tiered application environment]

What about when your application code changes? What if adding more instances no longer fixes your problem? (Those do cost money, and the bill adds up quickly…)

The complexity of the problem is that CPU bounding is only one aspect — most applications encounter a variety of bounds as they scale and they vary at each tier. CPU, memory, heap size, thread count, database connection pool, queue depth, etc. come into play from an infrastructure perspective. Ultimately, the problem breaks down to response time: how do I make each transaction as performant as possible while minimizing overhead?

The holy grail here is the ability to determine dynamically how to size my app server instances (right size), how many to create at each level (right scale) and when to create them (right time). Other factors come into play as well such as supporting infrastructure, code issues, and the database — but let’s leave that for another day.

Let me offer a simple example. This came into play recently when working with a customer analyzing their production environment. Looking at the application tier under light/normal load, it was difficult to determine which factors to scale on; we ended up with this:

[Chart: load vs. response time under light/normal load]

Response time actually decreases toward the beginning of the curve (possibly a caching effect?). But if you look at the application under heavier load, things get more interesting. All of a sudden you can start to see how performance is affected as demand on the application increases:

[Chart: load vs. response time under heavier load]

Looking at a period of heavy load in this specific application, hardware resources are actually still somewhat lightly utilized, even though response time starts to spike:

[Charts: hardware resource utilization during the period of heavy load]

In this application, it appears that response time is actually more closely correlated with garbage collection than any specific hardware bound.
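
One way to sanity-check that observation is to correlate each candidate signal against response time over the same intervals. The sketch below is ours, with invented sample numbers; in practice the series would come from your monitoring tool.

```python
# Which signal tracks response time most closely? Sample numbers are invented.
from statistics import correlation  # Pearson's r; Python 3.10+

response_ms = [110, 120, 180, 260, 400, 620]  # avg response time per interval
cpu_pct     = [45, 38, 47, 41, 44, 40]        # CPU stays lightly utilized
gc_ms       = [5, 12, 60, 140, 280, 450]      # GC pause time per interval

print(f"response vs CPU: {correlation(response_ms, cpu_pct):+.2f}")
print(f"response vs GC:  {correlation(response_ms, gc_ms):+.2f}")
# A correlation near +1 for GC and near 0 for CPU would support the
# conclusion that garbage collection, not hardware, is the real bound.
```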

While there is clearly some future effort here to look at garbage collection optimization, in this case optimizing best fit actually comes down to determining the desired response time, the maximum load a given instance size can sustain while maintaining that response time, and the cost of that instance size. In a cloud scenario, instance cost is typically fairly easy to determine. In this case, you can normalize by calculating volume/(instance cost) at various instance sizes to determine a better sweet spot for vertical scale.
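
As a worked example of that normalization, here’s a short sketch. The instance names, hourly prices, and sustainable volumes are all invented; plug in your own measurements.

```python
# Normalize sustainable volume by instance cost to find the vertical-scale
# sweet spot. All prices and volumes below are hypothetical.
instances = {
    # name: (hourly_cost_usd, max_requests_per_hour_at_target_response_time)
    "small":  (0.10,  40_000),
    "medium": (0.20,  95_000),
    "large":  (0.40, 150_000),
}

for name, (cost, volume) in instances.items():
    print(f"{name:>6}: {volume / cost:>9,.0f} requests per dollar")

# small: 400,000; medium: 475,000; large: 375,000 -- under these numbers,
# "medium" is the sweet spot: the step up from small more than doubles
# sustainable volume, but the step up to large does not.
```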

Horizontal scale will vary somewhat by environment, but this tends to be more linear — i.e. each additional instance adds incremental bandwidth to the application.

There’s still quite a bit more room for analysis of this problem, like resource cost for individual transactions, optimal response time vs. cost to achieve that response time, synchronous vs. asynchronous design trade-offs, etc., but these will vary based on the specific environment.

Using some of these performance indicators from the application itself (garbage collection, response time, connection pools, etc.) rather than infrastructure metrics, we were able to quickly and intelligently right-size the cloud instances under the current application release, as well as identify several areas for code optimization to improve overall efficiency. While the code optimization is a forward-looking project, the scaling question was driven by an impending near-term event that had to be addressed. Answering the question in this way allowed us not only to meet the near-term deadline but also to remain flexible enough to accommodate any forthcoming optimizations or application changes.

Interested to see how you can scale your environment? Check out a FREE trial now!

…………………………………………………………………………………………………………………

John Rakowski, Chief Technology Strategist at AppDynamics will be speaking at the Cloud World Forum on 25th June 2015 at 2.25 pm.

His talk will take place in Theatre C: SDE & Hyperscale Computing on ‘Three Rules for the Digital Enterprise’. 

REGISTER YOUR FREE EXHIBITION PASS HERE.


Kent County Council Cuts Time for New Job Postings by 75% and Eases Recruitment for 1,500 Hiring Managers

Kent County Council provides a broad range of services to 1.4 million people living in Kent, England. These services include social care and health, local transportation and infrastructure, schools and adult education, leisure services, and libraries.

Challenges

  • Streamline recruitment processes throughout the council to improve efficiency, reduce costs, and provide better service to hiring managers and candidates
  • Empower managers to take control of recruitment with a user-friendly system that provides more visibility and control over the recruitment process
  • Improve the candidate experience and speed up onboarding of new employees across all council services, from social care workers through to park rangers, librarians, and civil engineers
  • Implement a flexible system that will enable continuous improvement and offer further development opportunities in line with evolving business requirements

Solutions

  • Replaced the existing applicant-tracking system with Oracle Talent Acquisition Cloud to drive more efficiency and flexibility through the council’s recruitment process for health and social care, education, leisure services, and local infrastructure employees
  • Cut average time required to create a new job posting by 75%, from four days to just one day
  • Provided flexibility to meet Kent County Council’s diverse range of resourcing activities, from high volume recruitment to specialized campaigns across a vast range of occupations, from social workers through to country park wardens
  • Enabled candidates to track applications on the portal, eliminating the need to use telephone or e-mail communications to check progress, improving the candidate experience, and freeing up council recruitment resources
  • Improved data quality with built-in data validation and check points, saving considerable time and improving accuracy
  • Reduced the number of data fields by one-third and captured information once via online forms, improving data security and reducing manual workloads and associated costs
  • Enabled manager self-service, speeding the overall recruitment process by enabling more than 1,500 hiring managers to move candidates through stages of recruitment, removing any potential bottlenecks, such as requiring 10 days to arrange an interview
  • Speeded responses to candidates, either by sending out job offers to successful candidates within 24 hours or generating regret letters to unsuccessful candidates immediately, removing previous delays and bottlenecks
  • Speeded onboarding process with automatic requests for an IT account, and by automatically sending an eligibility to work checklist to managers or candidates where applicable, to ensure the organization completes all tasks before the new council employee’s start-date
  • Achieved a streamlined process for onboarding employees, with further efficiencies expected in the future with automatic transfer of employee details from Oracle Talent Acquisition Cloud directly into human resources and payroll systems within Oracle E-Business Suite, eliminating manual processes, speeding time to transfer data, and improving data accuracy
  • Worked with Oracle Platinum Partner Evosys to deliver a smooth implementation and knowledge transfer to enable the council to configure the system in-house as business requirements evolve

Why Oracle

Kent County Council undertook a thorough competitive tender, as required by European law. The council weighed each option according to cost and functionality and narrowed its choices down to three. A point crucial to the selection was that Kent County Council wanted to be able to make changes to the system without bringing in external consultants. Oracle Talent Acquisition Cloud offered this flexibility.

“Oracle Talent Acquisition Cloud outshone all the others in terms of functionality and potential for development,” said Susan Goymer, recruitment manager, Kent County Council.

“Oracle offers incredible support through its product interest groups and the Oracle Knowledge Exchange, which enables us to share experience with other Oracle customers. We also have fantastic support from the Oracle team, which took the time to understand our diverse recruitment requirements,” Susan Goymer said.

Partner

A dedicated project team from Kent County Council worked with Oracle and Oracle Platinum Partner Evosys to implement Oracle Human Capital Management Cloud. Evosys provided onsite consultancy together with offshore support, resulting in almost 24-hour coverage.

The team pushed Oracle Talent Acquisition Cloud live very smoothly and achieved a clear cut-off date to migrate from the old system onto Oracle.

Evosys worked closely with Kent County Council throughout the project, sharing extensive knowledge with the team, providing training, and equipping the council with the resources and expertise to configure the system itself after implementation.

“Evosys spent considerable time with our recruitment team, teaching them how to make changes to Oracle Talent Acquisition Cloud as required. As a result, we now have the expertise and knowledge in-house to configure the system in line with changing business requirements,” Susan Goymer said.

REGISTER YOUR FREE EXHIBITION PASS HERE.


Guest Blog with eFolder

“But it’s in the cloud, isn’t it backed up already?”

Author: Trace Ronning, Content Marketing Manager, eFolder

In 2015, businesses have continued their rapid adoption of cloud/SaaS applications, with no signs of slowing down. A study by the Aberdeen Group concluded that 80% of businesses use at least one cloud application. Usage has deepened, too: in 2014, 51% of IT workloads took place in the cloud, marking the first year the cloud handled a majority of IT workloads, according to Silicon Angle.

The advantages of the cloud are clear, with most companies experiencing greater employee productivity, mobility, and improved collaboration as a result of adopting cloud applications.

There is, however, one major issue that the cloud has not eliminated for organizations: data loss. While the inherent security of SaaS services such as Office 365, Google Apps, Salesforce, and Box is minimizing outages and random data loss, human error is still the primary source of lost data. In 2013, 32% of companies using cloud services reported losing cloud data, an overwhelming majority of which came as a direct result of human intervention.

How exactly are businesses losing this cloud data, and how can they prevent it from happening again? Let’s take a dive into the top five sources of cloud data loss and find out.

1. User Error

We know that humans are not perfect. Checking in as the top reason is user error, which accounts for 64% of all cloud data loss. The two primary examples are accidentally deleting a file and accidentally overwriting one. We all make mistakes now and again, so it is ill-advised to operate under the assumption that by adopting cloud applications, people will become immune to the human condition and never lose a file again.

2. Hackers

Hackers, defined as outsiders who get into the system with nefarious intent, are responsible for 13% of all cloud data loss. As cloud adoption and usage has grown, so has a hacker’s willingness to attack companies of all sizes, not just giant enterprise businesses, such as Sony or Home Depot. As of now, 50% of data breaches occur at companies with fewer than 1,000 employees, with the most common type of attacks consisting of a hacker breaking into an organization’s instance or acquiring administrator credentials. Malicious activity such as this often results in sensitive data being compromised, jeopardizing the customers of the company, as well as its ability to keep its doors open and continue doing business.

3. Closing an account

At 10%, the third most common kind of cloud data loss occurs when a business closes an account, that is, when an administrator de-provisions a user within a cloud application or discontinues the service altogether. Without a backup service to save former users’ data, or a solution that helps migrate data from one application to another, organizations run the risk of losing data in these transition phases.

4. Malicious Delete

Think your business is immune to frustrated employees going rogue? Think again. 7% of all cloud data loss occurs when an employee intentionally deletes files or folders. This type of deletion is often initiated by an unhappy employee or a recently terminated employee who has retained access to organizational cloud applications and data. At all levels of a business there are employees who don’t value company data as much as IT managers or executives do, especially in high-turnover roles.

5. Third-Party Software

The fifth most common reason for cloud data loss is the unexpected result of running third-party software on one of your SaaS applications. Occasionally, a data overwrite or deletion will occur when running third-party software. A classic example is a Salesforce administrator running Demand Tools, inaccurately identifying a prospect as a duplicate account, and permanently deleting that prospect’s record. Third-party software is generally used to make daily use of the most common business applications easier, but sometimes the side effects include the loss of important data.

How to protect your cloud data

You may be reading this blog post and thinking, “But if my data is in the cloud, can’t I just easily recover it if a file is deleted or overwritten? Why should I be concerned with cloud data backup?”

There is a common misconception that data is retained in the cloud forever, but that is simply not the case. Most cloud applications do keep some type of “recycling bin,” but this bin often has a storage limit, automatic purge function, or can be manually cleared.

Automated, off-site backup to a second cloud location is the most reliable way to ensure that the sensitive data you store in the cloud is recoverable, regardless of which cloud data disaster hits your organization. By employing a solution that allows for full-text search across multiple cloud applications, direct, point-in-time data restores into the cloud application of choice, and a military-grade off-site backup location, your organization can both protect data and empower IT admins to better use that data on a daily basis.

Don’t let cloud data loss become the problem you didn’t know you had. Make it the problem you know you’ll never have with cloud-to-cloud backup.


Bryan Forrester, Senior VP of Sales at eFolder, will be speaking on the 25th June at 12.35pm in Theatre D at the Cloud World Forum about the Top 5 Sources of Cloud Data Loss & How to Protect Your Organisation.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

 

REGISTER FOR YOUR FREE EXHIBITION PASS HERE!

