Accelerate Your Way to a Modern Cloud Analytics Platform

I often see clients with no current cloud footprint use the power of analytics to pull the rest of the organization toward the cloud. These new adopters leverage many powerful tools without making a massive, upfront investment, which can seriously accelerate both their analytics capabilities and their cloud maturity.

Your position on the maturity curve determines the richness of your cloud analytics capabilities.

[Figure: the cloud and analytics maturity curve]

The cloud and analytics maturity curve explained

The graph shown above illustrates a typical analytics maturity curve. As you can see, an analytics department that’s on the lower end of the maturity curve is typically relying upon “descriptive” analytics that can provide the business with information about the past. An analytics department that’s in the mid-range of the maturity curve can provide a more “predictive” (insights-driven) view of the enterprise that offers a glimpse into what the future may look like for the business. An analytics department that’s on the mature end of the curve, however, is positioned to provide “prescriptive” analytics that offer deeper, actionable insights that enable a business to operate with greater agility and plan for the future with a higher degree of accuracy.

You can take the same concept of the analytics maturity curve and apply it directly to the cloud. For example, an enterprise with an immature cloud analytics platform can only source and provide descriptive analytics. An enterprise working with a cloud analytics platform in the mid-range of maturity, on the other hand, has typically migrated a critical mass of data but is only just starting to dive into all of the analytical capabilities the cloud can provide. The enterprise with a high degree of maturity on a cloud platform, however, is typically operating with proven storage and processing capabilities and is now starting to look towards cloud-native tools (e.g. Amazon SageMaker, Azure Machine Learning, or Google Cloud AI) that can significantly accelerate an analytics platform. Only when a platform reaches this level of maturity can an organization begin to fully leverage the groundbreaking analytics capabilities, such as machine learning (ML) and artificial intelligence (AI), to which every enterprise aspires.

Accelerate your journey along the maturity curve

For many organizations, the journey along both the analytics and cloud maturity curves can seem incredibly daunting, expensive and unfeasible. When a business finally reaches the point where it must start the journey (or risk falling behind the competition), it generally has trouble getting past the initial hurdle: resistance to change. Or the business doesn’t want to provide any resources to help launch the journey. It then becomes purely an IT initiative, which often lacks the support needed to move at the required velocity.

So, how can an organization accelerate its progress along the cloud maturity curve and get to advanced analytics faster? By leveraging strong, cloud-native tools like Snowflake. What is essential, however, is a plan.

Too many times, a cloud analytics platform fails because there is no concrete plan to constantly innovate. I’ve seen enterprises invest in a single solution, such as an on-premises data warehouse that’s been shifted to the cloud, because they think it will solve all their problems. They soon realize, however, that it’s not that simple. The problem with a “cloudified” version of a data warehouse solution is that it flies directly in the face of what a modern data architecture is. One of the key properties of a modern data architecture is that it’s decoupled: you should be able to replace any one piece of your architecture, such as your ETL solution, with the newest, best-of-breed tool, and do so with minimal impact on the rest of your architecture.
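
To make that decoupling concrete, here’s a minimal Python sketch (all names are illustrative, not from any particular product): the pipeline depends only on narrow interfaces, so any one piece can be swapped out without touching the rest.

    from typing import Iterable, Protocol


    class Extractor(Protocol):
        """Any component that produces raw records, regardless of vendor."""
        def extract(self) -> Iterable[dict]: ...


    class Loader(Protocol):
        """Any component that persists records to the warehouse."""
        def load(self, records: Iterable[dict]) -> None: ...


    def run_pipeline(extractor: Extractor, loader: Loader) -> None:
        # The pipeline knows only the interfaces. Swapping your ETL tool
        # means supplying a new Extractor or Loader; this code never changes.
        loader.load(extractor.extract())

Replace the extractor with tomorrow’s best-of-breed tool and run_pipeline never changes; that’s exactly the property a monolithic, “cloudified” warehouse gives up.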

This is where a data warehouse built for the cloud, like Snowflake, becomes such a great accelerator. Snowflake’s multi-cluster, shared data architecture separates storage and compute, making it possible to scale up and down on the fly, without downtime or disruption. Only with a central storage repository that’s separate from your compute resources can you move quickly to the higher levels of the maturity curve. As you move toward the predictive and prescriptive phases, Snowflake provides a foundation that allows you to build out a truly modern data architecture, leveraging all of the benefits of the cloud with tremendous benefits to the enterprise.
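
As a rough sketch of what that elasticity looks like in practice, here’s an example using the snowflake-connector-python package; the credentials and the analytics_wh warehouse name are placeholders, not a reference implementation.

    import snowflake.connector  # pip install snowflake-connector-python

    # Placeholder credentials; substitute your own account details.
    conn = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT"
    )
    cur = conn.cursor()

    # Scale compute up for a heavy workload; storage is completely untouched.
    cur.execute("ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'XLARGE'")

    # ... run the demanding queries here ...

    # Scale back down, or suspend entirely, the moment the burst is over.
    cur.execute("ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'SMALL'")
    cur.execute("ALTER WAREHOUSE analytics_wh SUSPEND")

    cur.close()
    conn.close()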

No matter where you are on the cloud maturity curve, data storage and warehousing will always be a central component of your architecture. By leveraging cloud-based solutions like Snowflake, you remove much of the developmental overhead of a solution originally built to be deployed on-premises. And by working with implementation partners like Slalom, you can get the most out of your cloud data warehouse, accelerate your journey to the top of the cloud maturity curve, and begin using the truly revolutionary analytics, such as ML and AI, to which every enterprise aspires.

Author Bio: James Anderson is a Solution Architect at Slalom, a Snowflake implementation partner headquartered in Seattle. James specializes in analytical data platforms built in the cloud, helping enterprises get the most out of their data and unlocking new and exciting technologies for them. James is based out of Slalom Boston and is a frequent contributor to the Slalom technology blog.

Let History Halve Your Next Data Analytics Purchase

If you’re lucky, you’ll spend just six to 12 months considering and buying your next enterprise software solution. If it’s a seven-figure purchase, plan for an additional six to 12 months to confirm your organization has made the best investment possible.

During that process, dozens of your IT and business leaders will engage your shortlist of on-premises and software-as-a-service (SaaS) vendors to compare these competing technologies based on architecture, features, performance, business benefits and cost of ownership. Buying a data warehouse is no different.

But what if you could shorten that process? What if you had fact-based information arranged in a non-traditional but highly effective method to help you confidently narrow your search and quicken your time to market (TTM) with your chosen solution? What would that be worth to your organization and your peace of mind?

All enterprise software decisions include alternatives that span the decades, meaning your organization is likely to consider both upgrading an existing technology you already own and adopting solutions that represent today’s latest SaaS offerings.

How so? One of your alternatives might be to upgrade the existing, on-premises data warehouse your organization purchased 10 to 15 years ago, a solution that wasn’t much different from what first emerged in the 1990s and that hasn’t advanced much since it was first deployed in your data center.

You may also own or are considering an on-premises NoSQL solution such as Hadoop. This technology emerged just over a decade ago, challenging the very existence of the legacy data warehouse in order to accommodate the exponential increase in the volume, variety and velocity of existing and new data types.

Since the advent of Hadoop, many traditional data warehouse vendors now offer a cloud version of their on-premises solution. With all this said, there are now many more solution categories, and many new vendors, that you must consider for your next data warehouse purchase.

Herein lies the rub. Nearly all data warehouse purchase decisions, and all enterprise software decisions for that matter, take the form of a side-by-side-by-side, laundry-list comparison. That’s a significant amount of ground to cover, especially for data warehousing, which is four decades old. Your review of competing alternatives becomes even more protracted when the architectures and features of traditional and more modern products don’t align, which they never do.

Instead, think linear. Think like a historian. Many of the solutions you’ll consider are a response to the drawbacks, and benefits, of preceding, competing technologies. For example, at Snowflake, many of our customers had previously used a legacy data warehouse for years and then added a Hadoop solution to expand their data analytics platform. Unfortunately, that combination of technologies did not meet their ever-increasing requirements.

So, before you kick off a necessary, side-by-side comparison, consider your initial group of alternative technologies as building blocks, from bottom to top, from the oldest to the most recent. Then eliminate from contention those that do not add anything new or innovative. This approach enables you to focus on more recent technologies that truly deliver better value and that will enable you to continue to innovate well into the future. In the end, you’ll get to a shorter shortlist, and quickly, thus speeding your decision-making process and TTM by up to 50 percent.

At Snowflake, we’ve done the hard work for you. We invite you to read this short but revealing ebook that details the benefits and drawbacks of each succeeding data analytics technology – from the birth of the legacy data warehouse, all the way to today’s modern, built-for-the-cloud data warehouse. We’re confident it will provide the insight you need to quicken your next data warehouse purchase by rapidly reviewing every technology that got us to where we are today.

Snowflake Vision Emerges as Industry Benchmark

Technology research and analysis firm Gigaom has ranked Snowflake as the #1 cloud data warehouse in a recent study. We surpassed enterprise data warehouse products including Google BigQuery, Teradata, IBM dashDB, HPE Vertica, Microsoft Azure SQL, SAP HANA and Oracle Exadata. Snowflake emerged with a top score of 4.85 out of a possible 5.0; the competition averaged a score of 3.5. The six “disruption vectors” Gigaom used as its key scoring criteria are congruent with what we set out to achieve back in the summer of 2012, when we started Snowflake.

But long before we wrote the first line of Snowflake code, we asked one another: “What should a data warehouse deliver that no other product has before? How can we enable organizations to make the best, data-driven decisions? And how will the world’s most powerful data warehouse help organizations achieve their existing goals and help reveal their future goals?” We then set out to answer those questions.

We wanted to enable organizations to easily and affordably store all of their data in one location, and make that data accessible to all concurrent users without degrading performance. We also wanted Snowflake to scale infinitely, easily and cost-effectively, so organizations would pay for only the compute and storage they used. And the product had to work with the tools that users already knew and loved. Finally, we wanted a data warehouse that required zero management by our customers – nothing to tweak, no tuning required. These defining qualities aligned with the new world of cloud services, and they formed the foundation of Snowflake.

What’s happened since the early days of Snowflake? We got to work, and we stuck to hiring the best engineers the world has to offer. We built Snowflake from the ground up, for the cloud, and incorporated all of these elements as the core of the product. In early 2015, we offered the first commercial version of Snowflake – the one and only data warehouse built for the cloud. Since then, our engineering team has added more and more industry-leading capabilities to Snowflake, leapfrogging the traditional data warehouse vendors.

Along the way, we’ve hired high-caliber teams to execute the sales, marketing and finance functions of the company so our customers and partners get the highest value from working with Snowflake. We also built a great customer support organization, providing the level of service our users love. In more recent times, we’ve expanded operations outside of North America to Europe, with Asia-Pacific and other regions coming online soon. We’ve also added Snowflake On Demand™ – the easiest way to get started with Snowflake by simply signing up on our website with just a credit card. All of these efforts over the past four years have led to Snowflake’s most recent inflection point – being chosen as the number one cloud data warehouse.

What does all this mean? Snowflake’s current and future customers have every opportunity to explore all of their data in ways they never thought possible. They can gain the insight, solve the problems and create the opportunities they simply couldn’t with their previous data platforms. We committed to building the world’s best data warehouse – the only data warehouse built for the cloud. Our customers, our partners and now the industry have indicated we’ve likely achieved what we set out to do back in the summer of 2012. Going forward, we’ll continue to serve our customers and partners with the best technology, the best solutions and the best services available.

Read the full report >

Migrating to the Cloud? Why you should start with your EDW

Many organizations we engage with are seriously considering transforming their business and moving some (or all) of their IT operations into the cloud. A lot of executives I have encountered are struggling with the same question: “How do I get started?” There is a strong case to be made that starting with your Enterprise Data Warehouse (EDW), or at least a data mart, is the fastest and least risky path, with added upside potential to increase revenue and set you up for future growth. As operational data volumes continue to grow at exponential rates, it’s not a matter of if you go to the cloud to manage your enterprise data, but when.

Before going too far on your cloud journey, I would recommend an exercise in segmenting your business from an IT perspective in a very simple way. To get you started, let me suggest five possible categories, along with some risks to consider for each:

  • Customer-facing Applications – This is the heart and soul of your business. If something goes wrong, you lose business and revenue, and people potentially get fired. Risk: HIGH
  • Internal Applications – Mail, Payroll, General Ledger, AP, AR, things like that. Every person inside the organization relies on at least one of these services, and a lot of analysis needs to take place to figure out all the integration points to ensure nothing gets missed during a migration to the cloud. Risk: HIGH
  • Desktop/Laptop OS and Applications – There are whole books and schools of thought about how to migrate these, which means it’s a big decision and a big deal. Impacting everyone in the company on your first cloud initiative? Risk: HIGH
  • Operations Monitoring and Alerting – Got a Network Operation Center (NOC)? These guys are integrated with every system that is important, so moving them to the cloud could be a large undertaking. Risk: HIGH
  • Reporting and Analytics – Hmmm….if my constituents don’t get their weekly or monthly reports on time, is that a disaster? Can they get by with a small outage during the migration? Risk: LOW

Starting with the Data

Let’s take a closer look at why starting your cloud journey with your EDW could be a viable option, and even have some benefits that could help sell the idea (of the cloud) internally. In no particular order, I would highlight these points:

  • Doesn’t disrupt the business – Many EDW implementations are not mission critical today (as compared to enterprise applications). As more data becomes available through social media or Internet of Things (IoT) applications, businesses need access to much larger volumes of data, and they will want access to it earlier in the data pipeline. Traditional DWs contain aggregations and are used for trend analysis, analyzing data over a period of time to make strategic, rather than tactical, decisions. They are not architected to handle this new influx of raw data in a cost-effective manner. By starting your cloud journey with the EDW, you reduce risk (by moving to a more flexible architecture) while giving your team early exposure to working with cloud services.
  • Doesn’t disrupt internal users – When moving to the cloud, you want to show incremental success and don’t want to add a lot of unnecessary risk. It’s simple to keep running your existing EDW in parallel with your new cloud DW, giving you a built-in fall-back plan for the early stages. Or you may decide to start with a small data mart as a pilot project.
  • Start-up costs are a fraction of on-premises, appliance solutions – Some of our customers invested $10 million or more, years ago, in a data warehouse appliance that is now technologically outdated. And the renewal costs to keep that tech going are coming due. If they re-invest another huge sum of money, it will delay their getting to the cloud by another 4-5 years, putting them behind their competition. Rather than outlaying a large capital expenditure to extend the life of the older technology, it may make better sense to move to the cloud. The cloud offers a utility-based model, allowing you to pay for what you use when you use it, as opposed to what you think you are going to need 2-3 years in the future. As a result, not only is the cost of entry lower, but you are not risking a huge sum of money to make the move.
  • Data is growing at an exponential rate – Will you ever have less data to worry about in your business? If you plan on being successful, I don’t think so. Many organizations are looking at new and different ways to manage and analyze ever-increasing volumes of data coming in various formats from multiple sources (such as semi-structured web logs). Your current on-premises EDW was not designed for this kind of workload or data. If you are considering changing infrastructure platforms to accommodate it, why not select tools that were built for today’s modern data challenges instead of legacy-based architectures? Moving to the cloud also gives you the opportunity to consolidate operations and streamline business processes.
  • Enable new capability – There are some new analytic paradigms happening in the cloud (such as machine learning). Cloud-based platforms allow you to work with both detailed and aggregated data at scales never imagined (see the case study about DoubleDown as an example). Need to run a complex analytic job on a 256-node Massively Parallel Processing (MPP) cluster for an hour, and then shut it down? No problem (see the sketch after this list). Can your platform support a thousand users without concurrency issues? How would that change your business if it could dynamically adjust to handle those new demands?
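
Here’s the burst-compute pattern from the list above, sketched with the snowflake-connector-python package; the warehouse size, burst_wh and the sales_history table are hypothetical, so treat this as an illustration of the idea rather than a sizing recommendation.

    import snowflake.connector  # pip install snowflake-connector-python

    # Placeholder connection details; substitute your own.
    conn = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT"
    )
    cur = conn.cursor()

    # Stand up a large cluster just for this job; it bills only while it runs.
    cur.execute(
        "CREATE WAREHOUSE IF NOT EXISTS burst_wh "
        "WITH WAREHOUSE_SIZE = 'XXLARGE' AUTO_SUSPEND = 300"
    )
    cur.execute("USE WAREHOUSE burst_wh")

    # Run the heavy analytic job (hypothetical, fully qualified table name).
    cur.execute(
        "SELECT region, SUM(amount) "
        "FROM analytics_db.public.sales_history GROUP BY region"
    )
    print(cur.fetchall())

    # Tear the cluster down as soon as the job is done.
    cur.execute("DROP WAREHOUSE burst_wh")
    cur.close()
    conn.close()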

As with any infrastructure move, the benefits have to be clear enough that the status quo mentality can be overcome and analysis paralysis doesn’t push out your journey to the cloud for months or even years. The beauty of the cloud model is that it is easy to start small and scale without risking a huge investment up front. Every business needs some proof before committing time and resources to move anything to the cloud and your EDW is a perfect candidate. Snowflake is the first and only EDW built for the cloud to be truly elastic for all of your analytic and big data needs.

Please feel free to reach out to us; we would love to help you on your journey to the cloud. And keep an eye on this blog or follow us on Twitter (@snowflakedb) to keep up with all the news and happenings here at Snowflake Computing.

Looking Back at 2016 Predictions

Last December, I made some predictions for 2016. As we approach the end of the year, I thought it only fair to look back and compare what I predicted to what has happened.

Do or die for big old tech

This was an easy one to get right. Big old enterprise tech companies are hunkering down and watching the world pass them by. HP and Dell are vying to be the king of legacy. There is money in this but who really wants to wear that crown?

IBM is trying to move on with Watson but can Ginni Rometty really pivot that aircraft carrier? And can Watson provide Jeopardy-winning answers for a variety of industries without an army of IBM consultants to spoon feed it? Only time will tell but there is reason to be skeptical.

At Oracle, Larry seems to have discovered the cloud (and will probably soon claim that he invented it). But he remains confused about what a cloud really is. When Oracle talks about Exadata Cloud Service, legacy hardware in a managed services datacenter, they demonstrate they’re still lost in the fog.

Overall, 2016 was not a good year for big old enterprise tech.

Public cloud wins, but who loses?

My prediction on the progress of private clouds was almost an understatement. This year, the move towards private clouds has been slower than molasses on a cold winter day. VMware continues to miss the mark, failing to deliver a cost-effective private cloud solution. And OpenStack is a confusing grab bag that requires a huge SI investment, which is beyond the reach of almost all customers.

Meanwhile, almost every company, including most financial services firms, is now committed to adopting the public cloud. Amazon of course is the big winner, but Microsoft has shown once again they will persevere and succeed. Last year, I picked Google as the wildcard. Diane Greene appears to have brought focus to Google, and they clearly gained ground in 2016. Google possesses the technical capability, but they still need to get a lot more serious on the sales side, as they have no enterprise experience. A recent query on LinkedIn shows 465 sales openings for Microsoft, 604 sales positions for Amazon, and only 85 open sales roles for Google cloud. Google can’t compete against Amazon and Microsoft with just 85 more salespeople.

The other major public cloud player that emerged strong in 2016 is Alibaba. The China cloud market is set to explode in 2017. While it will be tough for Alibaba to gain traction in the US, in China it will almost certainly be the winning player.

All of the other public cloud wannabes are in a world of hurt. It looks like we’ll have four public clouds – Amazon, Microsoft, Google and Alibaba.

Spark divorces Hadoop

As I predicted last year, 2016 was not a good year for Hadoop and specifically for Hadoop distribution vendors. Hortonworks is trading at one-third its IPO price and the open source projects are wandering off. IaaS cloud vendors are offering their own implementations of the open source compute engines – Hive, Presto, Impala and Spark. HDFS is legacy in the cloud and is rapidly being replaced by blob storage such as S3. Hadoop demonstrates the perils of being an open source vendor in a cloud-centric world. IaaS vendors incorporate the open source technology and leave the open source service vendor high and dry.

Open source data analysis remains a complicated and confusing world. Wouldn’t it be nice if there were one database that could do it all? Wait, there is one, it’s called Snowflake.

What do Donald Trump and EU bureaucrats have in common?

Looking back at 2016, I guess not much. 2016 is a year that EU bureaucrats would rather forget and The Donald will remember forever.

On the privacy side, we saw some encouraging news with the creation of Privacy Shield. That said, Privacy Shield is already being challenged and this space remains uncertain. On a purely positive note, Microsoft won the case in Ireland that prevents the US government from grabbing data stored in other countries. The ruling was critical for any U.S. cloud company that has a global footprint.

Perhaps the most encouraging thing from 2016 is that Europe has a full plate given the challenges of Brexit, a Donald Trump-led America, ongoing immigration issues and upcoming elections with strong populist candidates. Given these problems, concerns about privacy are likely to take a back seat so the bureaucrats may be content to stand behind Privacy Shield.

About that wall, Donald hasn’t said too much lately but I think we will see something go up on the border. He loves construction.

The True Value of Cloud Data Storage Continues to Emerge

We’re in interesting times. Like most significant trends, the data-driven economy revealed a powerful approach that was unique but always in plain sight. We listened and watched closely as experts across industries and different roles promulgated the benefits of capturing, storing and using data from every corner of cyberspace. And not far behind came a related and more interesting topic of connecting the offline world to capture previously unimagined amounts of data, ranging from kitchen appliances to jet engines. This we now know to be the Internet of Things (IoT).

We all acknowledged this data shift would change how companies do business and how we live our lives. As with all significant themes, additional thought follows on the ‘how’. Once we capture all of this data, how will we manage it? How will we effectively store and access petabytes of data, and more, so we can put that data to work?

These aren’t questions just for the governments of the largest countries or for global enterprises. All organizations, from garage start-ups to mid-size companies, are keen to harness the insight derived from more and more data. As wonderful as this seems, it all comes down to technology and cost: the cost of storing that data, and the technology to easily derive insight from it. But how does an organization accomplish this within its financial limits?

Our founders placed this challenge at the heart of Snowflake. Before they typed the first line of code that ultimately brought the Snowflake cloud data warehouse to life, they wanted to enable data without limits. Snowflake’s built-for-the-cloud architecture truly separates compute from storage, allowing customers to easily scale either resource up or down. It also means Snowflake customers can focus their efforts on the highest-value part of data warehousing: compute. This is just one of many strategic advances, along with our unmatched technology, that make Snowflake the most powerful and affordable data warehouse for all of an organization’s data warehousing and analytics needs.

With that said, Snowflake lowered its storage pricing in October to match Amazon’s S3 storage price. Today, Snowflake again lowered its price to match Amazon’s latest S3 price reduction. This strategy is a crucial component to truly realizing a data-driven world for all – data without limits. The amount of data the world creates continues to increase at an exponential rate. And to harness the insight from that data, organizations need the best technology at the best price. Snowflake has always been there and always will be.

To read more about our latest pricing announcement, click here.

Challenges and New Opportunities in Data Analytics

Fall is conference season in the industry, and this fall there has been no shortage of discussions and insights about data analytics at events both big and small. The Cloud Analytics City Tour has been a highlight here at Snowflake, but we’ve also seen the analytics conversation front and center at big conferences like Dreamforce.

The Challenges of Data Analytics

Our Cloud Analytics City Tour, now entering its home stretch, has brought together a diverse set of attendees, with small entrepreneurs sharing the room with people from some of the most established companies around. That diverse audience and the thought leaders who participated as speakers have provided some great discussion and insights.

For one, it’s clear that data analytics in the cloud has quickly become a topic of mainstream interest to organizations of all stripes and sizes. In fact, the conversation has moved on from “should we consider data analytics in the cloud at all?” to “what do we do in the cloud, and how?”

That shift was reflected in some of the key themes and insights we’ve been hearing on the City Tour. Among those themes and insights:

  • The challenges are more than just technology. We heard repeatedly that one of the biggest challenges in cloud analytics is getting organizational buy-in. Even though acceptance of cloud has grown, getting people to do things differently still takes a lot of work.
  • Data integration and analytics now need to be a continuous process. The batch, scheduled approach to making updated data and analytics available no longer meets the needs people have today. Continuous data integration is becoming vital as organizations look to drive agile, data-driven decision-making throughout their organizations.
  • Finding great analytics people remains hard. The “people issue” – finding the right talent to analyze data – is now even more urgent, and it’s still hard to solve even as a greater number of people become data savvy.
  • Data quality still matters. While the technology to manage large and disparate sets of data is far more accessible in part because of the cloud, the quality of the data is still a challenge – how do you verify and normalize the data as quickly as your system can deliver and parse it?

Bringing Data Analytics to All

The importance of data analytics was also front and center at other conferences. At Dreamforce, the former Salesforce CRM conference that has now evolved into a much broader event encompassing wide-ranging business and technical topics, data-driven decision making for competitive advantage was a key theme. However, the conversation at Dreamforce has evolved from last year’s spotlight on the importance of using “big data” to a focus this year on how the nature of this data is changing, and on how to practically use more of the new types of data in everyday decision-making without being overwhelmed by its complexity.

What was most interesting about this discussion was that there were clearly two camps: increasingly sophisticated organizations with access to the skills and resources to apply the latest data analytics approaches, and organizations that have neither in place nor within reach the skills and resources needed to enable data-driven decision-making for greater insight.

The result is that well-funded start-ups that can attract highly skilled people (and can start from scratch), along with deep-pocketed enterprises that are rebuilding their entire infrastructures with the help of consultants like Accenture, threaten to leapfrog into new productive use cases and revolutionary advances in deep learning, past the millions of organizations stuck in the middle who may know what they want to do with data and analytics but don’t know how to get there. To add to the complexity, it’s not only the technical infrastructure that needs to change, but also the mindset within the organization and across departments.

For organizations across that spectrum, new solutions have emerged. Salesforce’s announcement of Einstein, a data analysis solution for data in Salesforce systems, is one example. But even more importantly, cloud analytics and systems designed to support it are making analytics accessible to more than just the well-resourced 1% of organizations.

As we have learned from the nimble companies that have gone from startup to billion-dollar unicorn in the last five years, thinking and operating in the cloud is the ultimate enabler. For more established companies hindered by legacy systems, changing the technology is now the easy part, with solutions such as Snowflake available. And the rewards of overcoming the remaining cultural and process barriers are invaluable to any organization that doesn’t want to be left behind in this next wave of the data revolution.

To connect with like-minded revolutionaries and learn more about how to move your organization’s data sophistication to the next level, join us at one of our next Data Analytics forums, including this week’s event in San Francisco as well as upcoming events in Chicago and Los Angeles. The best learning happens in person, and we hope you have taken, or will take, advantage of our Cloud Analytics City Tour as a great forum for intelligent discussions and meaningful insight.

The most powerful and easy-to-use data warehouse is now the most affordable

When I joined Snowflake nearly three years ago, I knew the seasoned engineers who envisioned and developed the product were on to something big. I’ve spent nearly my entire career in the database business but hadn’t seen anything comparable to Snowflake – a built-for-the-cloud data warehouse that’s powerful and easy to use for analyzing all of your data.

Snowflake has now taken another leap forward for our customers. We’ve launched three key initiatives to enable organizations to easily and affordably store and analyze all of their data in one location: Snowflake. No more data silos, data lakes or duplicate systems to manage the volume, variety, velocity and cost of today’s data.

Our first initiative upends the traditional cost structure that has made the data warehouse a precious, limited resource in the past. We’re making Snowflake the most affordable data warehouse available by removing the cost of data storage as a barrier to bringing together all data in one place. Taking advantage of our unique architecture, we’ve lowered our storage price to as low as $30/TB/month, which represents a 75 percent storage cost savings to our customers. The most powerful and easy-to-use data warehouse is now the most affordable. Thanks to our built-for-the-cloud architecture, specifically our separation of compute and storage, we’re able to price storage at the same price as Amazon S3 – Snowflake’s cloud storage provider.

Why have we done this? Our customers continue to tell us how much Snowflake has changed the way they use data. But they’re understandably concerned about the cost to store much larger volumes of data in Snowflake. Therefore, we’ve changed our storage pricing to help customers focus on the much more important process: analyzing data for insights.

Secondly, we’ve added a quick and simple way to get up and running with Snowflake. Snowflake On Demand allows customers to start using Snowflake with just a credit card and a simple sign-up process via our website. Data users of all types can now experience the many benefits of Snowflake without friction or delay.

And thirdly, Snowflake has launched its newest deployment, in the Frankfurt (EU) region. Organizations with EU headquarters, and multinational organizations with EU operations, can now keep their data in the EU. This is huge for any organization keen to keep its EU data close to home, and for any organization that wants to advance its global data initiatives from region to region.

All of these offerings will be available in November.

At Snowflake, we love to hear what our customers have to say. Their input, and our hunger to deliver and evolve the best data warehouse available, drive everyone at Snowflake to serve our customers the best way possible. Snowflake’s technology, solutions and customer-centric strategy will never cease to evolve. We look forward to continuing our mission of helping organizations advance their operations, serve their customers and lead their industries with the insight derived from data without limits.

For more details on all of Snowflake’s newest initiatives, view Snowflake’s main announcement and our individual announcements on our price reduction, Snowflake On Demand and Snowflake’s deployment in Frankfurt.

As always, keep an eye on this blog site and our Snowflake-related Twitter feeds (@SnowflakeDB) for more interesting things about Snowflake, and for updates on all the latest action and activities here at Snowflake Computing.

Cloud Analytics: Sharing Information, Insights and Innovations

While it’s been a busy summer here at Snowflake, we are now picking up even more momentum as we head into the fall. In addition to showcasing our cloud data warehouse at industry conferences such as the upcoming Strata + Hadoop World conference in New York, our team will be hitting the road for our fall Cloud Analytics City Tour, kicking off next week in New York and Boston. (You can see the full fall schedule and register for a city near you here.)

To provide some background, this past spring Snowflake invited data professionals to participate in cloud analytic symposiums held in Chicago and Los Angeles. Attendees included a broad array of people interested in innovation in analytics. Our goal was to create a forum where a diverse set of cloud and data analytics professionals could discuss how and why the cloud is playing an increasingly prominent role in analytics and share experiences and recommendations for managing and using data in the cloud.

The value-added knowledge exchanged at those events, and the feedback we received from attendees, demonstrated the worth of a forum where data professionals can learn and share information and ideas. Speakers shared insights about how the cloud is not simply about efficiencies that save time and budget (one of the first things people think of when it comes to cloud solutions in general), but about how it is causing a sea change in the status quo, creating new opportunities for a broader array of data users to experiment, innovate and make exponential progress in putting data to work.

The Creative Destruction Cycle in Analytics

Dean Abbott, co-founder and Chief Data Scientist of SmarterHQ and a speaker at our symposium in Los Angeles, described this change as making possible a rapid and continuous cycle of creation, destruction and re-creation that enables him and his team of data scientists to test and iterate on the fly. In an on-premises data analytics environment, that team would have had to plan carefully well in advance, because of the upfront investment required to make sure the appropriate resources were purchased and deployed to support projects. In the cloud, environments can be created, populated, used and destroyed on the fly, making it easy to experiment and iterate rapidly. This way of working makes the team’s results more relevant, because they can test more hypotheses within the same budget and time frame. Moreover, because of the simpler, more agile environment the cloud enables, the team has access to fresher data: new data from the field and other sources can be quickly normalized, integrated and examined against historical data in a way that was not possible before the use of cloud services for data handling and report generation.
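
As a minimal sketch of that create, use, destroy cycle, here’s an example using the snowflake-connector-python package; every object name below is hypothetical.

    import snowflake.connector  # pip install snowflake-connector-python

    # Placeholder credentials; substitute your own account details.
    conn = snowflake.connector.connect(
        user="YOUR_USER", password="YOUR_PASSWORD", account="YOUR_ACCOUNT"
    )
    cur = conn.cursor()

    # Create: stand up an isolated sandbox in seconds, no hardware to order.
    cur.execute("CREATE DATABASE experiment_db")
    cur.execute("CREATE WAREHOUSE experiment_wh WITH WAREHOUSE_SIZE = 'MEDIUM'")
    cur.execute("USE DATABASE experiment_db")
    cur.execute("USE WAREHOUSE experiment_wh")

    # Use: pull a slice of fresh data and test one hypothesis.
    cur.execute(
        "CREATE TABLE trial AS "
        "SELECT * FROM prod_db.public.events LIMIT 100000"
    )
    cur.execute("SELECT COUNT(*) FROM trial")
    print(cur.fetchone())

    # Destroy: throw the whole environment away and stop paying for it.
    cur.execute("DROP DATABASE experiment_db")
    cur.execute("DROP WAREHOUSE experiment_wh")
    cur.close()
    conn.close()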

This continuous, iterative cycle of data experimentation is vastly different from the traditional lock-step, labor-intensive framework in which data users labored and debated incessantly over the hypotheses they would test before undertaking the report generation phase. That report generation phase alone could take days, even weeks, to complete. And even before that, IT had to choose the technology and systems to run the reports before the data had even been examined. Before the advent of the cloud data warehouse and cloud analytics tools, ambiguities, inconsistencies and incongruities were common and difficult to test. The data analysts’ workflow contained a substantial amount of guesswork, with gaps and delays developing between what the scientists already knew, what data still needed to be tested, and any new data coming in.

You Have All That Data, Now What Do You Do With It?

Tamara Dull, Director of Emerging Technologies at SAS and #13 on the Big Data 2015: Top 100 Influencers in Big Data list, pointed out that cloud has made utilizing the benefits of a data warehouse more accessible to a wider diversity of types and sizes of organizations than in the past. This new accessibility is not only improving data management and enhancing security but also, similar to the experience shared by Dean Abbott, creating new opportunities in data discovery and providing a platform for advanced analytics.

In the past, when data was more homogenous and there were fewer data sources, new and old data could be integrated via complex data integration pipelines, carefully planned data warehouses and sometimes some very large Excel spreadsheets. But with the advent of new data sources and formats such as web application data, mobile user data, and now IoT data streams, traditional systems can no longer keep up. The result is that these gaps have been getting bigger, potentially at great cost to accuracy and effectiveness for those still using these old systems.

It was clear from the presentations and discussions that a variety of organizations, from revolutionary start-ups to reinvented Fortune 500s, are building and rebuilding their data-driven operations in the cloud to ensure that their data management infrastructures are as flexible as the incoming data. The outcome of this new paradigm is that while approaches and methodologies for using data can differ vastly between organizations, storing and using data in the cloud opens up exciting new possibilities for data analytics. These changes aren’t just better for business; they have become a requirement for thriving in an increasingly data-driven world.

Coming Up Next

We’re looking forward to more discussions and insights in this fall’s City Tour. We hope you’ll join us to share in the discussion and add your own insights.