Posts tagged ‘guest blog’

Denial ain’t just a river in Egypt…#CloudWF

Guest Blog with NCC Group

Author: John Parkinson, NCC Group

During the Cloud World Forum event in London on 24 June, we discussed the opportunities for Software as a Service businesses to become more successful. Focussing on the neglected issue of commercial security, we asked how the SaaS market can provide answers to potential supplier failure. We argued that, by anticipating, understanding and addressing the risks for customers who rely on outsourced application services, providers can do more to enhance trust and confidence in the Software as a Service market.

How are SaaS businesses reacting to the issue?  In our experience, there are three broadly different attitudes:

  1. It was Mark Twain who reputedly observed that ‘Denial ain’t just a river in Egypt’. The Risk Deniers perform according to type in asserting that it just won’t happen: ‘I haven’t failed yet and have no plans to do so’. Said with enough conviction, it is likely that they have convinced themselves. As Isaac Asimov once wrote, they cling to the view that the easiest way to solve a problem is to deny it exists.
  2. The largest group, the Agnostics, take a more considered view. They concede the possibility and see the wisdom of having a plan, but only if someone raises the question.  Whether hoping against hope, firmly in the wait and see camp or just too busy with other stuff, they generally accord with the opinion elucidated by TS Eliot that humankind cannot bear too much reality.
  3. Last but by no means least are the Innovators. They align instinctively with Peter Drucker’s view that innovation is the specific instrument of entrepreneurship. Salmon Software is one good example of a business that recognises this. John Byrne, the Salmon MD, says ‘we understand the needs of our customers and the potential impacts of them not having access to the application’. Similarly, Wazuko MD Simon Hill asserts that the objective is ‘to show our existing customers and prospects that stepping into the cloud with Wazuko is simple and secure.’ Operating in the highly regulated finance sector is banking system provider Mambu, whose MD Eugene Danilkis commented in a blog article: ‘Regulators have rightly recognised the critical role that technology providers play to support key business processes. In turn, technology providers need to ensure consistent and reliable delivery of these services that financial institutions depend on to reinforce trust and extend the potential for future innovation and growth.’

As a SaaS provider, which category do you fall into – a Denier, an Agnostic or an Innovator? And which type of business would you trust when outsourcing your software services?

Original NCC Group blog here

———————————————————————————————————-

NCC Group were a Visionary Sponsor at the Cloud World Forum 2015, which took place on the 24th – 25th June.

The Cloud & DevOps World Forum brings speed and continuous delivery to Europe’s digital enterprises, and will take place on the 21st – 22nd June 2016 at Olympia in London.

Register your interest for 2016 here


Google and Why the New Standard for Modern Applications is a Non-Relational Database Deployed in the Cloud #CloudWF

Guest Blog with MongoDB

Google and Why the New Standard for Modern Applications is a Non-Relational Database Deployed in the Cloud

Author: Kelly Stirman, VP of strategy at MongoDB

It’s positively raining cloud stories. Sorry. Cloud puns are so over…cast. Regardless, recent months have seen some interesting developments in the high-stakes game for control of the foundational layer of your application stack, i.e. which database you use and where it’s deployed. In early May Google released Cloud BigTable as a managed NoSQL database. Two weeks later, Gartner released its Magic Quadrant for Cloud Infrastructure as a Service (Cloud IaaS) report.

While unrelated, the two announcements both shine some light on our path to a new, cloud-rich future, with the aspiring cloud giant Google giving further validation, if it were needed, that NoSQL databases deployed in the cloud are the new standard for modern applications.

You see, the workload from modern applications is quite different from what it’s been in the past. Building your own data centre and installing a relational database was fine when you could predict the size, speed and type of data. Applications in 2015 are a different breed. The growth of social, mobile and sensor data has dramatically altered the way we approach development. Developers can’t tell in advance what any of this will look like in the final production version of their application, let alone future iterations.

Many organisations are already overcoming this by deploying non-relational databases on commodity hardware in the cloud. This approach lets companies gear up for massive scale and gives them enough flexibility to incorporate new data types that will support business processes and provide operational insight.

Google’s BigTable and Gartner’s Magic Quadrant

Google’s announcement highlighted two things. One: the big infrastructure players are looking to diversify and find new ways to wring revenue from the big data stack. Two: BigTable’s release illustrated that all major data innovation is happening away from relational data models. Relational databases aren’t going anywhere fast, but they are challenged by the requirements of modern applications, in particular the trickiest of the three Vs of big data: variety of data types. BigTable is yet another database-as-a-service offering designed to be deployed on the vendor’s own cloud infrastructure, much like equivalent offerings from Amazon and Microsoft.

From one cloud provider’s announcement, we now look at a broader view of the industry from Gartner. This is from the introduction to the Magic Quadrant for Cloud Infrastructure as a Service, Worldwide report[1]:

The market for cloud IaaS is in a state of upheaval, as many service providers are shifting their strategies after failing to gain enough market traction. Customers must exercise caution when choosing providers.

The report went on to explain that ‘all the providers evaluated are believed to be financially stable, with business plans that are adequately funded. However, many of the providers are undergoing significant re-evaluation of their cloud IaaS businesses’. In other words, some vendors may not be in it for the long haul.

What it means for you

As well as diversification into database services, the cloud competition is also sparking a healthy price war. Just a few days after the Magic Quadrant was released Google announced it was slashing prices by as much as 30%. Microsoft and Amazon are also fond of aggressive pricing as they try to eat as much market share as possible.

Which brings us back to Google’s launch of NoSQL database-as-a-service BigTable. The release came on the back of Microsoft’s recent Azure DocumentDB announcement and, of course Amazon’s own DynamoDB offering. As the competition for cloud infrastructure drives margins down, the big players are looking up the stack to drive revenue and it’s clear NoSQL technology is one of the most attractive areas.

It’s worth pointing out, though, that these as-a-service database offerings generally come with a very narrow set of features. For example, Cloud BigTable is a wide column store with a simple key-value query model. Like some other NoSQL databases, it is limited by:

  • A complex data model which presents a steep learning curve to developers, slowing the rate of new application development
  • Lack of features such as an expressive query language (key-value only), integrated text search, native secondary indexes, aggregations and more. Collectively, these enable organisations to build more functional applications faster

Ultimately the cloud providers can relieve users of some of the overhead of running a database but they still will have to deal with the complexity of mastering data models and working around key-value query limitations.
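
To make the contrast concrete, here is a minimal sketch using PyMongo against a hypothetical “readings” collection on a local MongoDB instance. It illustrates the feature gap described above (a key-value lookup versus secondary indexes, richer queries and server-side aggregation); the collection and field names are assumptions for illustration, not code from this post.

```python
# A minimal sketch (PyMongo, local MongoDB, hypothetical "readings" collection)
# contrasting key-value style access with richer query features.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
readings = client.demo.readings

# Key-value style access: fetch a single document by its primary key.
doc = readings.find_one({"_id": "sensor-42:2015-06-24T10:00"})

# A secondary index on non-key fields, so queries need not go through _id.
readings.create_index([("sensor_id", ASCENDING), ("ts", ASCENDING)])

# Expressive query: a range filter on the indexed fields.
recent = readings.find({"sensor_id": "sensor-42",
                        "ts": {"$gte": "2015-06-24T00:00"}})

# Server-side aggregation: average value per sensor.
pipeline = [
    {"$match": {"ts": {"$gte": "2015-06-24T00:00"}}},
    {"$group": {"_id": "$sensor_id", "avg_value": {"$avg": "$value"}}},
]
for row in readings.aggregate(pipeline):
    print(row["_id"], row["avg_value"])
```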

Out of the chaos it’s becoming clear that a non-relational database hosted in the cloud is going to be the predominant way modern companies deploy applications. Each customer will have different demands for control: some will want everything ‘as-a-service’, others will want full control over how and where their database runs and over security on each layer of the stack. In the modern world of cloud-ready, non-relational databases, you have more choice than ever. That choice can also bring a risk of vendor lock-in if you select an offering that is tied to one specific platform, no matter how ‘web-scale’ that platform claims to be.

—–

[1] Gartner, Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Lydia Leong et al, May 18, 2015

…………………………………………………………………………………………………………………

MongoDB will be exhibiting at the Cloud World Forum taking place on the 24th & 25th June 2015.

Kelly Stirman is Vice President of Strategy at MongoDB, speaking at the Cloud World Forum on the 25th June at 10.35 in Theatre A: Keynote – Building Business in the Cloud on ‘Escaping Cloud Cuckoo Land: 5 Tips for Making Success a Reality in the Cloud.’

REGISTER YOUR FREE EXHIBITION PASS HERE.


The state of cloud computing in Europe #CloudWF

Guest Blog with IBM

Author: Simon Porter

Cloud computing is the most touted technology in the global business landscape today. Europe is no exception.

There are two main ways we’re seeing businesses take advantage of the cloud in Europe. First, there are the smaller, innovative, and born-on-the-cloud startup companies that use it to help them disrupt existing industries by getting to market faster and with less upfront capital investment.

The second area where we’re seeing European companies take advantage of cloud is at more established enterprises looking to enter new, international markets. As companies here seek to become more global, they’re looking toward non-European markets—whether by selling into those markets or tapping into suppliers. In these cases, cloud empowers them to enter these new markets by providing the flexibility, speed and scalability needed to be a global player.

Cloud also enables businesses to market and sell to customers in new and more efficient ways. With the proliferation of smartphones and social media, business success relies on turning this technology into new sales channels. This is often referred to as systems of engagement, and with unpredictable volumes, it’s ideally suited to cloud.

The economic climate in Europe is improving, but it remains very competitive. It is critical for businesses to optimize their supply chains and lower their sales and support costs. Applying sophisticated analytics is one effective way of doing this. In the past, this was prohibitively expensive. But cloud enables analytics-as-a-service, removing the need and cost for a large up-front investment in an IT system that may be used only a few hours per month.

Challenges in cloud adoption persist

According to a Eurostat study released this past year, only 19 percent of European businesses used cloud computing services in 2014. Compare that to a recent RightScale study that reports 82 percent of U.S. enterprises as having a hybrid cloud strategy (up from 74 percent in 2014), and it would appear that Europe is lagging. However, that’s only part of the story.

You can expect the European cloud adoption numbers to rise sharply this year and even more in years to come. But as with any emerging technology, there remain barriers to adoption.

Chief among those barriers is security.

According to a recent Cloud Industry Forum poll, 70 percent of U.K. executives cited data security among their biggest concerns in moving to cloud. That marks an 11 percent year-over-year increase.

What IT departments in Europe are seeing is something quite different from what the rest of the world is experiencing, and that stems from concerns about data location and security. A lot of the questions around security and data location are driven by market perceptions that aren’t always accurate. Security in a cloud-based solution will often be much stronger than that of an on-premises, in-house IT solution.

To remain competitive, European businesses must work through security challenges—and I fully believe that they will. Ultimately it is not technical or legal challenges preventing cloud adoption in Europe—it’s about business leaders understanding the transformational benefits cloud can bring to their business and then, typically for midsize businesses, taking advantage of them through a local, trusted cloud service provider.

The good news is that IBM is continuing to open data centers in Europe. We now have centers in the U.K., the Netherlands, Germany, France and, most recently announced, Italy. But even with this span of locations, customers want to keep their data in-country.

European SMBs typically lack resources and the IT skills to take advantage of this new kind of capability. They need to turn to a local service provider that can essentially be their IT department. At IBM, we’re continuing to expand our partnerships with local cloud service providers as a means of enabling local data and secure environments with IBM’s Managed Service Providers.

A move to hybrid 

In the business world, we recognize that clients have already made investments in core IT systems. We find that European customers want to protect those investments and enhance them with new, innovative capabilities that enable better, faster business decisions through advanced analytics. Companies are also able to reach new customers and markets with multi-channel marketing and sales capabilities, both largely based on cloud-enabled digital and social technologies.

For example, a client may have an existing enterprise resource planning (ERP) system that they have invested a lot of time and money in over the years. They still need to see a return on that investment. It is impractical to completely replace it with a new solution, but perhaps enhancing it with social analytics or social engagement could help them in their customer service and marketing.

Combining mission-critical, on-premises systems with new cloud-based systems of engagement is an example of a common hybrid cloud solution. This is how many businesses in Europe protect their existing investments in IT while taking advantage of new delivery models.

An eye toward the future 

The world is only getting flatter. There are multiple new entrants in many industries, and existing businesses will have to differentiate their own offerings to remain competitive. Who would have thought the taxi industry could be disrupted in the way that Uber has done? Cloud can be the key enabler for businesses to innovate around new products and channels faster and in a lower risk manner.

…………………………………………………………………………………………………………………

IBM will be at the Cloud World Forum on Stand D150, taking place on the 24th – 25th June 2015.

Tony Morgan, Client Chief Innovation Officer GTS Europe at IBM will be speaking on Day 1 at 11:05am in Theatre C: DevOps & Containerisation on ‘Speaker out of the Shadows: Managing Innovation with Cloud.’ 

REGISTER YOUR FREE EXHIBITION PASS HERE.


Monetizing the Internet of Things: Will All These Connected Devices Pay Off? #CloudWF

Guest Blog with Avangate

Author: Michael Ni, CMO/SVP, Marketing and Products, Avangate

Sometimes it seems like just yesterday that everything was getting “cloud-ified,” from photo sharing to customer relationship management, but the move to the cloud is actually a few years old now. And now that we all have our documents stored in the cloud (and our heads out of the clouds), everybody’s looking for a clear path toward success in the latest trend: the Internet of Things.

Just like the cloud before it, the Internet of Things is now top of mind for software professionals. Its promise has been nascent for a long time: although Dick Tracy’s 2-Way Wrist Radio first appeared in 1946, connected devices like the FitBit and Apple Watch are just starting to get in the hands – or on the wrists – of everyday folks.

With broader adoption of connected devices come both opportunities and challenges. Even the companies that are able to sell IoT hardware successfully find themselves needing to develop and monetize complementary services to help users get the most out of their devices. And software-focused companies that don’t have devices need a new way to get in on the IoT and the billions it’s expected to bring in. That way is through data.

While the IoT started out with connected sensors, it soon became clear that simply sensing data wouldn’t be enough. Just like storing content in the cloud also required building interfaces that made it easy for users to access cloud content, IoT sensors now need to produce data that’s easy for people to find, understand and use. And because IoT data is so valuable (not to mention expensive), there needs to be a way for companies to monetize it. So if wave 1 of the IoT trend involved simply creating the sensors, wave 2 involves monetizing them and the data they create.

As a result, more and more software vendors have started staking a claim in the IoT. At Avangate, we’ve been helping companies like Bitdefender monetize their IoT offerings. Bitdefender offers a “security of things” solution called BOX, a small device that scans for IoT threats on a local WiFi connection. By monitoring the way your smart devices stay connected, BOX finds and protects against possible threats to your connected information. By helping Bitdefender easily monetize its entry into the IoT, including not only the device itself but also associated data, we’re showing the importance and ease of monetizing IoT devices and the data they produce.

And that’s the key: commerce absolutely has to run in the background of every IoT play. No matter how affordable a device is up front, or if streams of data are free for now, devices and data both cost a significant amount to create, maintain, and provide in ways that really work for consumer and business customers. As a result, to truly succeed in the IoT, software companies need to be able to package and sell data derived from connected devices in ways that will benefit other entities as well.

In the end, it’s clear that the desperate need for IoT data monetization is actually a massive opportunity. Companies are still scrambling to create devices and support data, and not enough entities are thinking about how to monetize it. Those who find themselves able to successfully package and sell information in the IoT era may enjoy Salesforce-style status, riding high on the wave of the future as the IoT truly takes off.

…………………………………………………………………………………………………………………

Avangate will be exhibiting at the Cloud World Forum on Stand D48, taking place on the 24th – 25th June 2015.

REGISTER YOUR FREE EXHIBITION PASS HERE.


Scaling Your Application Efficiently – Horizontal or Vertical? #CloudWF

Guest Blog with AppDynamics

Author: Eric Smith at AppDynamics

Anyone deploying an application in production probably has some experience with scaling to meet increased demand. A generation ago, virtualization made scaling your application as simple as increasing your instance count or size. However, now with the advent of cloud, you can scale to theoretical infinity. Maybe you’ve even set up some auto-scaling based on underlying system metrics such as CPU, heap size, thread count, etc. Now the question changes from “Can I scale my environment to meet demand?” (if you add enough computing resources you probably can), to “How can I efficiently scale my infrastructure to accommodate my traffic, and if I’m lucky maybe even scale down when needed?” This is a problem I run into almost every day dealing with DevOps organizations.

If your application environment looks like this (if so, I’d love to be you):

[Screenshot: a simple application environment]

You can probably work your way through to the solution, eventually. Run a bunch of load tests, find a sweet spot of machine size based on the performance under the test parameters, and bake it into your production infrastructure. Add more instances to each tier when your CPU usage gets high. Easy. What if your application looks like this?

[Screenshot: a complex application environment]

What about when your application code changes? What if adding more instances no longer fixes your problem? (Those do cost money, and the bill adds up quickly…)

The complexity of the problem is that CPU bounding is only one aspect — most applications encounter a variety of bounds as they scale and they vary at each tier. CPU, memory, heap size, thread count, database connection pool, queue depth, etc. come into play from an infrastructure perspective. Ultimately, the problem breaks down to response time: how do I make each transaction as performant as possible while minimizing overhead?

The holy grail here is the ability to determine dynamically how to size my app server instances (right size), how many to create at each level (right scale) and when to create them (right time). Other factors come into play as well such as supporting infrastructure, code issues, and the database — but let’s leave that for another day.

Let me offer a simple example. This came into play recently when working with a customer analyzing their production environment. Looking at the application tier under light/normal load, it was difficult to determine which factors to scale; we ended up with this:

[Chart: application tier under light/normal load]

Response time actually decreases toward the beginning of the curve (possibly a caching effect?). But if you look at the application under heavier load, things get more interesting. All of a sudden you can start to see how performance is affected as demand on the application increases:

[Chart: application performance under heavier load]

Looking at a period of heavy load in this specific application, hardware resources are actually still somewhat lightly utilized, even though response time starts to spike:

[Charts: hardware utilization and response time during a period of heavy load]

In this application, it appears that response time is actually more closely correlated with garbage collection than any specific hardware bound.

While there is clearly some future effort here to look at garbage collection optimization, in this case optimizing best fit actually comes down to determining desired response time, maximum load for a given instance size maintaining that response time, and cost for that instance size. In a cloud scenario, instance cost is typically fairly easy to determine. In this case, you can normalize this by calculating volume/(instance cost) at various instance sizes to determine a better sweet spot for vertical scale.
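
As a rough illustration of that normalisation, here is a sketch with made-up numbers: for each candidate instance size, take the maximum sustained volume that still meets the target response time and divide by the hourly price. The instance sizes, throughput figures and prices below are hypothetical, purely to show the arithmetic.

```python
# A minimal sketch of the volume / (instance cost) normalisation described
# above. All numbers are hypothetical load-test results.

# size -> (max requests/min at the target response time, cost per hour in $)
candidates = {
    "small":  (1200, 0.10),
    "medium": (2600, 0.20),
    "large":  (4000, 0.40),
}

def efficiency(max_volume, cost_per_hour):
    """Requests per minute delivered per dollar per hour."""
    return max_volume / cost_per_hour

best = max(candidates, key=lambda size: efficiency(*candidates[size]))
for size, (volume, cost) in candidates.items():
    print(f"{size:>6}: {efficiency(volume, cost):,.0f} req/min per $/hr")
print("vertical-scale sweet spot:", best)   # here, "medium" wins
```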

Horizontal scale will vary somewhat by environment, but this tends to be more linear — i.e. each additional instance adds incremental bandwidth to the application.

There’s still quite a bit more room for analysis of this problem, like resource cost for individual transactions, optimal response time vs. cost to achieve that response time, synchronous vs. asynchronous design trade-offs, etc. but these will vary based on the specific environment.

Using some of these performance indicators from the application itself (garbage collection, response time, connection pools, etc.) rather than infrastructure metrics, we were able to quickly and intelligently right-size the cloud instances under the current application release, and to identify several areas for code optimization that will help improve overall efficiency. While the code optimization is a forward-looking project, the scaling question had to be answered for a near-term event. Answering it this way allowed us both to meet that impending deadline and to remain flexible enough to accommodate any forthcoming optimizations or application changes.
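
As a sketch of what scaling on application-level indicators can look like in practice, the snippet below encodes a simple scale-out/scale-in rule driven by response time, garbage-collection time and connection-pool usage rather than CPU. The metric names and thresholds are hypothetical assumptions, not an AppDynamics API; in a real setup they would come from your monitoring tool.

```python
# A minimal sketch of a scaling rule driven by application indicators
# rather than CPU. Metric names and thresholds are hypothetical.

def scaling_decision(metrics, target_response_ms=250):
    """Return +1 (add an instance), -1 (remove one) or 0 (hold)."""
    response_ms   = metrics["avg_response_ms"]
    gc_pct        = metrics["gc_time_pct"]        # % of wall time spent in GC
    pool_used_pct = metrics["db_pool_used_pct"]   # connection-pool utilisation

    # Scale out when latency or a soft resource is the bound,
    # even if CPU still looks lightly utilised.
    if response_ms > target_response_ms or gc_pct > 20 or pool_used_pct > 85:
        return +1
    # Scale in only when everything is comfortably below its threshold.
    if response_ms < 0.5 * target_response_ms and gc_pct < 5 and pool_used_pct < 40:
        return -1
    return 0

# A reading resembling the case above: hardware is fine, GC is not.
print(scaling_decision({"avg_response_ms": 480,
                        "gc_time_pct": 35,
                        "db_pool_used_pct": 60}))   # -> +1, add an instance
```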

Interested to see how you can scale your environment? Check out a FREE trial now!

…………………………………………………………………………………………………………………

John Rakowski, Chief Technology Strategist at AppDynamics will be speaking at the Cloud World Forum on 25th June 2015 at 2.25 pm.

His talk will take place in Theatre C: SDE & Hyperscale Computing on ‘Three Rules for the Digital Enterprise’. 

REGISTER YOUR FREE EXHIBITION PASS HERE.


Top 5 Sources of Cloud Data Loss #CloudWF

Guest Blog with eFolder

“But it’s in the cloud, isn’t it backed up already?”

Author: Trace Ronning, Content Marketing Manager, eFolder

In 2015, businesses have continued their rapid adoption of cloud/SaaS applications with no signs of slowing down. A study completed by the Aberdeen Group concluded that 80% of businesses use at least one cloud application. Usage has also increased: in 2014, 51% of IT workloads took place in the cloud, marking the first year that the cloud handled a majority of IT workloads, according to Silicon Angle.

The advantages of the cloud are clear, with most companies experiencing greater employee productivity, mobility, and improved collaboration as a result of adopting cloud applications.

There is, however, one major issue that the cloud has not eliminated for organizations: data loss. While the inherent security of SaaS services such as Office 365, Google Apps, Salesforce, and Box minimizes outages and random data loss, human error is still the primary source of lost data. In 2013, 32% of companies using cloud services reported losing cloud data, the overwhelming majority of which came as a direct result of human intervention.

How exactly are businesses losing this cloud data, and how can they prevent it from happening again? Let’s take a dive into the top five sources of cloud data loss and find out.

1. User Error

We know that humans are not perfect. Checking in as the top reason for cloud data loss is user error, which accounts for 64% of the total. The two primary examples are accidentally deleting a file and accidentally overwriting one. We all make mistakes now and again, so it is ill-advised to operate under the assumption that by adopting cloud applications, people will become immune to the human condition and never lose a file again.

2. Hackers

Hackers, defined as outsiders who get into the system with nefarious intent, are responsible for 13% of all cloud data loss. As cloud adoption and usage has grown, so has a hacker’s willingness to attack companies of all sizes, not just giant enterprise businesses, such as Sony or Home Depot. As of now, 50% of data breaches occur at companies with fewer than 1,000 employees, with the most common type of attacks consisting of a hacker breaking into an organization’s instance or acquiring administrator credentials. Malicious activity such as this often results in sensitive data being compromised, jeopardizing the customers of the company, as well as its ability to keep its doors open and continue doing business.

3. Closing an account

At 10%, the third most common kind of cloud data loss occurs when a business closes an account. We define this as de-provisioning a user within a cloud application or discontinuing the service altogether. Without a backup service to save former users’ data, or a solution that helps migrate data from one application to another, organizations run the risk of losing data in these transition phases.

4. Malicious Delete

Think your business is immune to frustrated employees going rogue? Think again. 7% of all cloud data loss occurs when an employee intentionally deletes files or folders. This type of deletion is often initiated by an unhappy employee or a recently terminated employee who has retained access to organizational cloud applications and data. At all levels of a business there are employees who don’t value company data as much as IT managers or executives do, especially in roles with high turnover.

5. Third-Party Software

The fifth most common reason for cloud data loss is the unexpected result of using third-party software on one of your SaaS applications. Occasionally, a data overwrite or deletion will occur when running third-party software. A classic example is a Salesforce administrator running Demand Tools, inaccurately identifying a prospect as a duplicate account and permanently deleting that prospect’s record. Third-party software is generally used to make daily use of the most common business applications easier, but sometimes the side effects include the loss of important data.

How to protect your cloud data

You may be reading this blog post and thinking, “But if my data is in the cloud, can’t I just easily recover it if a file is deleted or overwritten? Why should I be concerned with cloud data backup?”

There is a common misconception that data is retained in the cloud forever, but that is simply not the case. Most cloud applications do keep some type of “recycling bin,” but this bin often has a storage limit, automatic purge function, or can be manually cleared.

Automated, off-site backup to a second cloud location is the most reliable way to ensure that the sensitive data you store in the cloud is recovered, regardless of which cloud data disaster hits your organization. By employing a solution that allows for full-text search across multiple cloud applications, direct, point-in-time data restores into the cloud application of choice, and a military-grade off-site backup location, your organization can both protect data, and empower IT admins to better use that data on a daily basis.
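
The core idea can be sketched generically. The snippet below is a minimal illustration of automated copying from a primary cloud location to an independent backup location, using S3-compatible object storage via boto3 as a stand-in; the bucket names are hypothetical, and a real cloud-to-cloud backup service like the one described would pull data through each SaaS application’s own export APIs and layer search and point-in-time restores on top.

```python
# A minimal sketch of the "second cloud location" idea: copy every object
# from a primary bucket to an independent backup bucket on a schedule.
# Bucket names are hypothetical; this is an illustration of the concept,
# not any vendor's product.
import boto3

SOURCE_BUCKET = "primary-app-exports"
BACKUP_BUCKET = "offsite-backup"

s3 = boto3.client("s3")

def backup_once():
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SOURCE_BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            s3.copy_object(
                CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
                Bucket=BACKUP_BUCKET,
                Key=key,
            )

if __name__ == "__main__":
    backup_once()   # typically run from a scheduler, e.g. nightly
```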

Don’t let cloud data loss become the problem you didn’t know you had. Make it the problem you know you’ll never have with cloud-to-cloud backup.

eFolder

Bryan Forrester, Senior VP of Sales at eFolder, will be speaking on the 25th June at 12.35pm in Theatre D at the Cloud World Forum about the Top 5 Sources of Cloud Data Loss & How to Protect Your Organisation.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

 

REGISTER FOR YOUR FREE EXHIBITION PASS HERE!


Connecting to the Future of the Internet of Things with Cassandra #CloudWF

Guest blog with DataStax

Connecting to the Future of the Internet of Things with Cassandra

Author: Seema Haji

Millions of people, objects and ‘things’ connecting with each other are changing the way organizations and consumers interact with each other and with the environment around them. Data comes from different geographical locations and across multiple channels.

Sensors on vehicles collect information on mileage, pressure, temperature and even driving patterns, and communicate it back to improve transportation efficiency and safety. Retailers are leveraging illumination, temperature and humidity sensors to gather data and make real-time adjustments to energy consumption, not only lowering operational costs but also making our planet a better place. Healthcare solutions use these sensors to monitor and analyze patient and diagnostic data, saving lives with real-time transactional analytics. A high velocity of continuous data streaming in from wearable and connected sensors pours into the database system, exposing every disconnect between legacy technology and what a modern Internet of Things application requires of its database.

According to a survey by EMA (Enterprise Management Associates) Research of 259 business executives, analysts and IT managers, the needs of the business are aligned with IT drivers but very disconnected from legacy infrastructures. Line-of-business managers want faster query response times, competitive advantage via flexible solutions and operational efficiencies, whereas legacy platforms struggle to scale to meet these challenges. Particularly for an IoT infrastructure, choosing the right data model is the key to success.

According to the survey, the data model must accommodate high-velocity sensor data, among other considerations. Think of it this way: hundreds of sensors and actuators generate massive volumes of immutable time-series data, written once and never updated, and the volume is vast: think petabytes of information. To assimilate and analyze this information, database read/write performance is critical, particularly with high-velocity sensor data. Your database must support high-speed reads and writes and be continuously available (100% of the time) to gather this data at uniform intervals. In addition, you must plan for data scalability to maintain a cost-effective horizontal data store over time.
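
To make that concrete, here is a minimal sketch, using the DataStax Python driver, of a common Cassandra time-series data model for sensor readings: partitioned per sensor per day so partitions stay bounded, and clustered by timestamp so recent readings come back first. The keyspace, table, contact point and replication settings are illustrative assumptions, not taken from the post.

```python
# A minimal sketch of a common Cassandra time-series model for sensor data.
# Names and settings are assumptions for illustration only.
from datetime import datetime, timezone
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS iot
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Partition by (sensor_id, day) so each partition stays bounded, and
# cluster by timestamp descending so the latest readings are read first.
session.execute("""
    CREATE TABLE IF NOT EXISTS iot.readings (
        sensor_id text,
        day       date,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((sensor_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

now = datetime.now(timezone.utc)
insert = session.prepare(
    "INSERT INTO iot.readings (sensor_id, day, ts, value) VALUES (?, ?, ?, ?)")
session.execute(insert, ("pump-17", now.date(), now, 4.2))

# Write-once data is simply appended; reads pull the latest points per sensor.
rows = session.execute(
    "SELECT ts, value FROM iot.readings WHERE sensor_id = %s AND day = %s LIMIT 10",
    ("pump-17", now.date()))
for row in rows:
    print(row.ts, row.value)
```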

Over time, we’ve seen plenty of IoT providers succeeding in related industries with Apache Cassandra and DataStax Enterprise, the most scalable distributed database technology, providing 24×7 uptime and blazing read/write performance for IoT solutions.

Pressure-management solution provider i2O Water is able to save millions of liters of water every single day; Riptide IO helps retailers save millions of dollars on energy consumption with its smart building and equipment asset management technology; and Amara Health provides real-time predictive analytics to support clinicians in the early detection of critical disease states.

Check out our upcoming webinar on the 20th May to discover how i2O addresses the water crisis with an Internet of Things solution built on Apache Cassandra™, and learn what your IoT solution can achieve with Apache Cassandra™.

DataStax

DataStax is our IoT Big Data & Analytics Theatre Sponsor at Cloud World Forum, taking place on the 24th – 25th June 2015 at Olympia Grand in London.

Johnny Miller, Solutions Architect at DataStax EMEA will be speaking on the 24th June at 12.10pm at the Cloud World Forum about Scaleable, Available and Secure data for the Internet of Things with DataStax Enterprise.

Don’t miss the chance to take advantage of all the knowledge and networking opportunities presented by EMEA’s only content-led Cloud exhibition.

Register for your FREE exhibition pass here!

