The Data Sharehouse brings forth a new market

When I mention data sharing to customers, they often say, “Really?” From that moment forward, the discussion is no longer about replacing their data analytics platform. It’s about growth. Growth of their business, growth of their ecosystem, and growth from the limitless possibilities of sharing live data in a matter of minutes.

Today, we announced the most significant breakthrough of Snowflake’s data warehouse built for the cloud – Snowflake Data Sharing. It extends our data warehouse to what we call the data sharehouse.

The revolutionary architecture of Snowflake paves the way for the data sharehouse. All of the unique benefits Snowflake provides inside the enterprise now extend the data warehouse beyond the enterprise. No other technology offers such a quick, powerful and inexpensive way to share live data between organizations. It’s a data sharing model that provides read-only access to an enterprise’s entire data warehouse, or just a secure slice of its data. No copying and no moving of data required.
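Under the hood, a share is defined with a handful of SQL statements. As a rough sketch (the database, table and account names here are hypothetical, not from the announcement), a provider might expose a secure slice of data like this:

```sql
-- Provider side: create a share and grant access to one table
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.regional_sales TO SHARE sales_share;
ALTER SHARE sales_share ADD ACCOUNTS = partner_account;

-- Consumer side: mount the share as a read-only database
CREATE DATABASE partner_sales FROM SHARE provider_account.sales_share;
```

Because the consumer’s database is a live, read-only view into the provider’s storage, queries always see current data and nothing is copied or moved.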

The commercial use of sharing data emerged from Nielsen Corporation in 1923. Over the next century, enterprises adopted different data sharing models but with minimal success. Even the digital data sharing methods they’re forced to use today haven’t changed much. But the data sharehouse enables one-to-one, one-to-many and many-to-many data sharing models. And since Snowflake was built for the cloud, the opportunities to share data are endless.

A new market is born only when a truly unique product or service emerges, one that is innovative and appeals to nearly every enterprise. And with the data sharehouse, a true market for data sharing has begun. Unlike other markets, Snowflake has focused on building the infrastructure, the platform, for enterprises to do business with each other.

And unlike data consortiums, which define the interactions between companies and impose fees on transactions, Snowflake data sharing is open to all organizations, removing another barrier that inhibits enterprises from accessing limitless data. The business opportunity that data sharing enables is fully owned by the data providers and consumers. Snowflake is focused on providing the data sharehouse that enables data sharing, without any entanglement in the business of the data providers and consumers. At its core, Snowflake is a database, not a marketplace.

With Snowflake Data Sharing, organizations will share more data with the partners in their ecosystem to improve business efficiencies. They’ll determine that some of their data is just as valuable to other, non-competing companies. And they’ll ask other organizations about data they don’t have, and negotiate access to that data. Live data sharing also means that enterprises with vast landscapes populated with dozens or even hundreds of disparate data silos, accumulated through years of growth and acquisition, will be able to unite nearly all of their data.

As exciting as this is, what will truly astonish is what’s possible with the data sharehouse that we have yet to imagine. Modern data sharing will enable organizations across industry and across the globe to imagine new ways of doing business, new ways to solve longstanding problems, and provide new insights into manufacturing, healthcare, science and humanitarian issues, to name a few. Until now, there was no easy way to connect enterprises with one another through data. Well, those days are over. The data sharehouse has arrived.

Growing Snowflake to help you end your struggle for data

In business, your impact directly correlates with the problems you strive to solve. Snowflake’s founders gathered in 2012 to solve a prevalent and persistent problem that has hindered organizations ever since OLTP systems emerged five decades ago. Data professionals continue to ask, “How can we derive timely business insight from all the data available to our enterprise?”

Their struggle is bigger today than it has ever been. The exponential increase in available data in the past 10 years has outpaced the improvements in legacy data warehouse solutions for on-premises and cloud environments. It’s also the reason why Snowflake emerged, and why our customers, technology partners and funding partners continue to invest in us.

With that said, I’m proud to announce Snowflake’s latest round of $100 million of growth funding led by ICONIQ Capital and accompanied by Madrona Venture Group. Snowflake’s Series D round also includes all of Snowflake’s existing funding partners: Altimeter Capital, Redpoint Ventures, Sutter Hill Ventures and Wing Ventures. Since its founding in 2012, Snowflake has raised a total of $205 million in funding.

The cloud has enabled enterprises of all types and sizes to access unlimited data. But therein also lies the struggle: Nearly every enterprise has many on-premises and cloud-based data silos they struggle to bring together. How do you integrate all of those varying forms of data and make all of it available for deep analysis by all your users without degrading performance? For many customers, it’s difficult to even find the relevant data.

Snowflake is designed to make the struggle for data disappear. Our revolutionary, built-for-the-cloud architecture and proven technology have delivered up to 200 times faster performance for our growing customer base, and at one-tenth the cost of their previous data warehouse solutions. We’ve also enabled customers to launch Snowflake within just a few months, and in many instances, just a few weeks. But all of that, and more, is just the beginning.

With this latest funding round, Snowflake continues our pursuit to help transform every organization into a data-driven enterprise. We are outpacing all of our competitors by enabling customers to do things with data they had never envisioned. Snowflake delivers what Oracle, Teradata, Hadoop and other cloud data warehouses can’t even begin to: the performance, concurrency and simplicity needed to analyze all your data, from one location, by all your users.

Our new funding will also enable a host of initiatives to expand Snowflake’s operations to serve our growing customer base in the US and around the globe. Our UK office is expanding rapidly, with plans to establish offices across Europe and in Asia Pacific. We’ve also expanded our engineering operations with an office in Bellevue, Washington, allowing Snowflake to tap into the best and brightest talent of the Seattle area.

Snowflake is helping enterprises to end their struggle for data. Our rapidly growing base of customers and partners confirms this. And our latest round of growth funding further validates, and enables, Snowflake’s pursuit of the data-driven enterprise. So, be on the lookout, Snowflake is coming your way. And we want to help you end your struggle for data.

Looking Back at 2016 Predictions

Last December, I made some predictions for 2016. As we approach the end of the year, I thought it only fair to look back and compare what I predicted to what has happened.

Do or die for big old tech

This was an easy one to get right. Big old enterprise tech companies are hunkering down and watching the world pass them by. HP and Dell are vying to be the king of legacy. There is money in this but who really wants to wear that crown?

IBM is trying to move on with Watson but can Ginni Rometty really pivot that aircraft carrier? And can Watson provide Jeopardy-winning answers for a variety of industries without an army of IBM consultants to spoon feed it? Only time will tell but there is reason to be skeptical.

At Oracle, Larry seems to have discovered the cloud (and will probably soon claim that he invented it). But he remains confused about what a cloud really is. When Oracle talks about Exadata Cloud Service, legacy hardware in a managed services datacenter, they demonstrate they’re still lost in the fog.

Overall, 2016 was not a good year for big old enterprise tech.

Public cloud wins, but who loses?

My prediction on the progress of private clouds was almost an understatement. This year, the move towards private clouds has been slower than molasses on a cold winter day. VMware continues to miss the mark, failing to deliver a cost-effective private cloud solution. And OpenStack is a confusing grab bag that requires a huge systems integrator investment, which is beyond the reach of almost all customers.

Meanwhile, almost every company, including most financial services firms, is now committed to adopting the public cloud. Amazon of course is the big winner, but Microsoft has shown once again they will persevere and succeed. Last year, I picked Google as the wildcard. Diane Greene appears to have brought focus to Google, and they clearly gained ground in 2016. Google possesses the technical capability, but they still need to get a lot more serious on the sales side, as they have no enterprise experience. A recent query on LinkedIn shows 465 sales openings at Microsoft, 604 at Amazon, and only 85 open sales roles for Google Cloud. Google can’t compete against Amazon and Microsoft with just 85 more salespeople.

The other major public cloud player that emerged strong in 2016 is Alibaba. The cloud market in China is set to explode in 2017. While it will be tough for Alibaba to gain traction in the US, in China it will almost certainly be the winning player.

All of the other public cloud wannabes are in a world of hurt. It looks like we’ll have four public clouds – Amazon, Microsoft, Google and Alibaba.

Spark divorces Hadoop

As I predicted last year, 2016 was not a good year for Hadoop and specifically for Hadoop distribution vendors. Hortonworks is trading at one-third its IPO price and the open source projects are wandering off. IaaS cloud vendors are offering their own implementations of the open source compute engines – Hive, Presto, Impala and Spark. HDFS is legacy in the cloud and is rapidly being replaced by blob storage such as S3. Hadoop demonstrates the perils of being an open source vendor in a cloud-centric world. IaaS vendors incorporate the open source technology and leave the open source service vendor high and dry.

Open source data analysis remains a complicated and confusing world. Wouldn’t it be nice if there were one database that could do it all? Wait, there is one, it’s called Snowflake.

What do Donald Trump and EU bureaucrats have in common?

Looking back at 2016, I guess not much. 2016 is a year that EU bureaucrats would rather forget and The Donald will remember forever.

On the privacy side, we saw some encouraging news with the creation of Privacy Shield. That said, Privacy Shield is already being challenged and this space remains uncertain. On a purely positive note, Microsoft won the case in Ireland that prevents the US government from grabbing data stored in other countries. The ruling was critical for any U.S. cloud company that has a global footprint.

Perhaps the most encouraging thing from 2016 is that Europe has a full plate given the challenges of Brexit, a Donald Trump-led America, ongoing immigration issues and upcoming elections with strong populist candidates. Given these problems, concerns about privacy are likely to take a back seat so the bureaucrats may be content to stand behind Privacy Shield.

About that wall, Donald hasn’t said too much lately but I think we will see something go up on the border. He loves construction.

The True Value of Cloud Data Storage Continues to Emerge

We’re in interesting times. Like most significant trends, the data-driven economy revealed a powerful approach that was unique but always in plain sight. We listened and watched closely as experts across industries and different roles promulgated the benefits of capturing, storing and using data from every corner of cyberspace. And not far behind came a related and more interesting topic of connecting the offline world to capture previously unimagined amounts of data, ranging from kitchen appliances to jet engines. This we now know to be the Internet of Things (IoT).

We all acknowledged this data shift would change how companies do business and how we live our lives. As with all significant themes, additional thought soon follows on the ‘how’: Once we capture all of this data, how will we manage it? How will we effectively store and access petabytes of data, and more, so we can put that data to work?

These aren’t questions just for governments of the largest countries or for global enterprises. All organizations, from the garage start-up to the mid-size company, are keen to harness the insight derived from more and more data. As wonderful as this seems, it all comes down to technology and cost: the cost of storing that data, and the technology to easily derive insight from it. But how does an organization accomplish this within its financial limits?

Our founders placed this at the heart of Snowflake. Before they typed the first line of code that ultimately brought the Snowflake cloud data warehouse to life, they wanted to enable data without limits. Snowflake’s built-for-the-cloud architecture truly separates compute from storage, allowing customers to easily scale either resource up and down. This also means Snowflake customers can focus their efforts on the highest value of data warehousing – compute. This is just one of many strategic advances, along with our unmatched technology, that make Snowflake the most powerful and affordable data warehouse for all of an organization’s data warehousing and analytics.

With that said, Snowflake lowered its storage pricing in October to match Amazon’s S3 storage price. Today, Snowflake again lowered its price to match Amazon’s latest S3 price reduction. This strategy is a crucial component to truly realizing a data-driven world for all – data without limits. The amount of data the world creates continues to increase at an exponential rate. And to harness the insight from that data, organizations need the best technology at the best price. Snowflake has always been there and always will be.

To read more about our latest pricing announcement, click here.

Challenges and New Opportunities in Data Analytics

Fall is conference season in the industry, and this fall there has been no shortage of discussions and insights about data analytics at events both big and small. The Cloud Analytics City Tour has been a highlight here at Snowflake, but we’ve also seen the analytics conversation front and center at big conferences like Dreamforce.

The Challenges of Data Analytics

Our Cloud Analytics City Tour, now entering its home stretch, has brought together a diverse set of attendees, with small entrepreneurs sharing the room with people from some of the most established companies around. That diverse audience and the thought leaders who participated as speakers have provided some great discussion and insights.

For one, it’s clear that data analytics in the cloud has quickly become a topic of mainstream interest to organizations of all stripes and sizes. In fact, the conversation has moved on from “Should we consider data analytics in the cloud at all?” to “What should we do in the cloud, and how?”

That shift was reflected in some of the key themes and insights we’ve been hearing on the City Tour. Among those themes and insights:

  • The challenges are more than just technology. We heard repeatedly that one of the biggest challenges in cloud analytics is getting organizational buy-in. Even though acceptance of cloud has grown, getting people to do things differently still takes a lot of work.
  • Data integration and analytics now need to be a continuous process. The batch, scheduled approach to making updated data and analytics available no longer meets the needs people have today. Continuous data integration is becoming vital as organizations look to drive agile, data-driven decision-making throughout their organizations.
  • Finding great analytics people remains hard. The “people issue” – finding the right talent to analyze data – is now even more urgent. However, it’s still hard to solve even as a greater number of people become data savvy.
  • Data quality still matters. While the technology to manage large and disparate sets of data is far more accessible in part because of the cloud, the quality of the data is still a challenge – how do you verify and normalize the data as quickly as your system can deliver and parse it?

Bringing Data Analytics to All

The importance of data analytics was also front and center at other conferences. At Dreamforce, the former Salesforce CRM conference that has now evolved into a much broader event encompassing wide-ranging business and technical topics, data-driven decision making for competitive advantage was a key theme. However, the conversation at Dreamforce has evolved from last year’s spotlight on the importance of using “big data” to a focus this year on how the nature of this data is changing, and on how to practically use more of the new types of data in everyday decision-making without being overwhelmed by its complexity.

What was most interesting about this discussion was that there were clearly two camps: increasingly sophisticated organizations with the skills and resources to apply the latest data analytics approaches, and organizations that do not have those skills and resources in place, or within reach, to enable data-driven decision-making for greater insight. The former are leap-frogging into productive new use cases and revolutionary advances in deep learning.

The result is that well-funded start-ups that can attract highly skilled talent (and can start from scratch), along with deep-pocketed enterprises rebuilding their entire infrastructures with the help of consultants like Accenture, threaten to leapfrog the millions of organizations stuck in the middle who may know what they want to do with data and analytics, but don’t know how to get there. To add to the complexity, it’s not only the technical infrastructure that must change, but also the mindset within the organization and across departments.

For organizations across that spectrum, new solutions have emerged. Salesforce’s announcement of Einstein, a data analysis solution for data in Salesforce systems, is one example. But even more importantly, cloud analytics and systems designed to support it are making analytics accessible to more than just the well-resourced 1% of organizations.

As we have learned from the nimble companies that have gone from startup to billion-dollar unicorn in the last five years, thinking and operating in the cloud is the ultimate enabler. For more established companies hindered by legacy systems, changing the technology is now the easy part with solutions such as Snowflake available. And the rewards of overcoming the remaining cultural and process barriers are invaluable to any organization that doesn’t want to be left behind in the next wave of the data revolution.

To connect with like-minded revolutionaries and learn more about how to move your organization’s data sophistication to the next level, join us at one of our next Data Analytics forums, including this week’s event in San Francisco as well as upcoming events in Chicago and Los Angeles. The best learning happens in person, and we hope you have taken, or will take, advantage of our Cloud Analytics City Tour as a great forum for intelligent discussions and meaningful insight.