
IoT and Cloud Computing: the killer combination for Services, but not always a well-defined relationship!


It has been stated that IoT is the killer App for Cloud Computing, but given that the definition of Cloud Computing itself is often difficult, this seems to add to the difficulty of defining Cloud-based IoT Services. Add the fact that just about every IoT offering seems to be called a Platform, even those that are not genuine Platforms, and understanding the relationship between Clouds and different IoT capabilities becomes even more complex.

The Internet of ‘Things’, meaning the connection and interaction of intelligent devices, produces a flow of Data triggered by current ‘Events’ as an unpredictable activity stream. The unpredictable nature of the computational services requirement relates directly to one of the major benefits of Cloud Computing. In reality Clouds are just further examples of the ‘Things’ that are part of IoT.

The business value doesn’t necessarily reside directly in either Clouds or IoT, but in their integration to create Services as the Business-valuable outcome.

Unsurprisingly the similarities between IoT and Clouds are many, for as the opening paragraph suggests they are inherently part of the same technology environment. Both are part of the same Network Based Architecture that ‘functions’ in response to a particular demand, rather than the predicted steady loading of IT-style applications. Add that both are deployed as a mixture of Public, Private, Hybrid and Community Business solutions, and the commonality of the two technologies should be recognizable, even if the exact definitions are less so.

Mixing the diversity of classifications for Cloud Computing deployment Service models with the classification of IoT-enabled Business Services offers too many possible combinations to support any simple labeling. Yet not all Cloud-based provisioning, in fact currently perhaps most of it, is deployed to support an IoT-integrated Business Service. It simplifies classification to recognize this and remove many Cloud Services from being considered as part of IoT.

Commercial Cloud Service provisioning models were originally designed to be a cost-reduction replacement for traditional computing resources in an Enterprise Data Centre, or to run occasional high loads such as Data Rich Analytics. As the advantages offered by Cloud Data Centre models became better understood, three definable major hosting, or provisioning, models started to dominate: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Each model could be used in a Private, Public or Hybrid deployment.

IoT solutions that use Clouds don’t fit so readily into these definitions, which assume more predictable demands and a foreseeable operational consumption of MIPS. In contrast, IoT solutions are circumstance-, or event-, driven activities, often a mass of small consumption demands created by invoking specific services to run in different combinations, and are radically different. So different, in fact, that it is better to regard them as a fourth hosting model, with some Cloud Service providers offering different hosting and charging models.
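What this fourth, event-driven hosting model looks like in practice can be sketched with a function-style handler that runs once per event rather than on an always-on server. The sketch below is written in the style of an AWS Lambda handler in Python; the event fields, threshold and action names are illustrative assumptions, not any provider’s actual schema.

```python
# Sketch of an event-driven IoT handler (AWS Lambda-style).
# Each invocation is a small, short-lived unit of compute that runs
# only when a device reports in - many small consumption demands
# rather than a steady workload. Event shape is an assumption.

def handle_sensor_event(event, context=None):
    """Process one IoT event and decide on an action."""
    reading = event["temperature_c"]
    device = event["device_id"]
    if reading > 80.0:  # illustrative alert threshold
        return {"device": device, "action": "raise_alert", "reading": reading}
    return {"device": device, "action": "log_only", "reading": reading}

# A burst of small, unpredictable events rather than a steady load:
events = [
    {"device_id": "pump-1", "temperature_c": 62.5},
    {"device_id": "pump-2", "temperature_c": 85.1},
]
results = [handle_sensor_event(e) for e in events]
```

Each invocation here is independent and stateless, which is exactly why such workloads can be charged per event handled rather than per hour of reserved capacity.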

To understand this in more detail it is necessary to look more closely at the types of IoT and Smart Services being run by Cloud hosting companies. There are Web Sites that attempt to list all, or at least as many as possible, Cloud-based IoT Service providers (see the appendix for some examples). This blog is focused on noting the impacts of IoT on commercial Cloud hosting.

There is a further clarification necessary: the IoT Cloud Services referred to in this blog all offer capabilities with direct Business value, unlike the large-scale infrastructural IoT connectivity Services Platforms covered in the previous blog, which interconnect and integrate hundreds of thousands to millions of IoT Devices with the various Business Services referred to earlier in this blog.

However it should be noted that current smaller-scale, or dedicated single-function, IoT Business Services do make direct connections to their dedicated cluster of IoT Devices to collect event data.

So IoT poses an interesting commercial challenge to the providers of the basic Cloud computational capacity. The IoT-enabled Services are light, but frequent, users of Cloud processing power in a way that doesn’t fit with conventional Cloud hosting tariffs’ expectation of longer usage periods at moderate to heavy computational power usage.

Amazon Web Services, AWS, provides a published tariff as an example (there are of course similar offers from their competitors) of charging on the basis of events handled, and in what manner.
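A back-of-envelope comparison shows why event-based tariffs suit light-but-frequent IoT workloads better than instance-hour charging. The prices below are illustrative placeholders, not actual AWS (or any competitor’s) tariff figures.

```python
# Back-of-envelope comparison of event-based vs. hourly Cloud charging.
# All prices are illustrative placeholders, NOT real tariff figures.

PRICE_PER_MILLION_EVENTS = 0.40   # assumed event tariff, $ per million events
PRICE_PER_INSTANCE_HOUR = 0.10    # assumed always-on instance, $ per hour

def monthly_event_cost(events_per_month):
    """Cost of a per-event tariff for a given monthly event volume."""
    return events_per_month / 1_000_000 * PRICE_PER_MILLION_EVENTS

def monthly_instance_cost(hours=730):
    """Cost of keeping a conventional instance running all month."""
    return hours * PRICE_PER_INSTANCE_HOUR

# A light-but-frequent IoT workload of 3 million events a month:
event_cost = monthly_event_cost(3_000_000)
instance_cost = monthly_instance_cost()
```

Under these assumed prices the per-event model is a small fraction of the always-on cost, which is the commercial mismatch the paragraph above describes; the exact crossover point obviously depends on the real tariffs negotiated.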

However AWS, like other large Cloud Service providers, is keen to offer more than just compute power, however it is charged for. Instead they are aiming to re-bundle IoT offerings to merge compute power with direct IoT Business Services for a much higher-value Business outcome. These may take the form of IoT Service Templates, as in this Microsoft Azure example, aimed at particular Industry sectors, or logic templates to simplify defining unique IoT services. Either way the aim is to offer ‘pay as you profit’ simplicity whilst avoiding the complexity and cost of an Enterprise building its own IoT Services.

In line with this shift to new commercial models for IoT, a growing number of IoT Cloud Service providers are offering sophisticated high-value Services charged on the basis of the number of events handled. (Salesforce Thunder Cloud for IoT is an example of approaching IoT from the direction of Business outcomes.) As events relate directly to business-valuable outcomes and insights this is generally seen as a welcome move by both the customer and the provider, but it does require some care in establishing the exact cost model, as few customers have any real experience with which to negotiate on the expected numbers.

Charging on the basis of the received Business Value outcomes is at an early stage of innovation, as the example of the IBM Watson IoT Service working in conjunction with Aerialtronics drones shows. Equipment installed in inaccessible places, such as telecommunication towers, can now have an annual survey by a drone with the results interpreted by IBM Watson visual recognition. It is a straight substitution of the cost of a manual visit with the lower cost of a drone visit, but notice that little to nothing in this is concerned with IoT event-driven activities.

More conventional IoT examples around preventative maintenance come from several sources, Software AG being one example, and allow a clearer and more immediate linkage to beneficial business value. Finding and fixing potential failures at a convenient time, saving the cost of an expensive failure, makes this a particularly popular IoT business target.

The coming together of Cloud Computing and IoT certainly creates a new era of technology capabilities, with genuinely new business value. Add in Augmented Intelligence, AI, and the stage is set to deliver market transformation to the Digital Economy. Combined, Clouds and IoT lead a move away from ‘transaction recording’ Enterprise Application IT and towards a Business Services-driven capability to ‘read and react’ to opportunities. The result is a new generation of business models offering innovative new competitive abilities.

However, this new Business environment is still in its early stages and, as such, the commercial terms for outcome value-based charging are still emerging. Buyers of IoT Services are buyers of Cloud hosting Services, but that doesn’t mean on the same terms as those for Enterprise Application IT. At this early stage of the IoT-enabled Business Services model, those responsible for deployment should take a very careful look at the various commercial options being offered.

 

Appendix: Three sites that list Cloud-Based IoT Service Providers

https://iot-analytics.com/product/internet-of-things-company-list-2015/

http://www.iot-directory.com/

http://postscapes.com/internet-of-things-platforms/

 


Announcing Constellation Research's 2016 Enterprise Awards


Constellation Research hopes you all have a great holiday season this year with friends and family. In the spirit of recognizing memorable achievements in the tech industry, we also want to use this occasion to announce the winners of Constellation’s first annual Enterprise Awards.

The winners were selected through a combination of internal voting and heated debate among Constellation’s analyst team. Each category also includes a number of runner-up winners, as there were so many deserving of recognition. We hope you enjoy reading the results and welcome your comments—approving, dissenting and otherwise.

Best Enterprise Software Startup

Winner: Accompany

Why did it win?: Accompany describes its app as a “digital chief of staff” that pulls together a user’s contacts, email, social channels and other data into one place. The concept has been tried before but Accompany stands out for its excellence of execution.

Runner-Up Winners: General Electric, Coupa, X.ai.

Why did they win?: GE is an enterprise software startup of a different scale, using aggressive acquisitions and partnerships to grow the business fast. Coupa, once labeled a Silicon Valley 'unicorn,' may have shed that label for good thanks to a wildly successful IPO. Moreover, its spend management software has become a standard in the Fortune 500. X.ai provides an AI-driven personal assistant that homes in on a key pain point: scheduling meetings.

Best Enterprise Software Vendor

Winner: Amazon Web Services

Why did it win?: While known for infrastructure and PaaS (platform as a service) rather than applications, AWS had a breakout year, gaining remarkable endorsements from the world’s biggest enterprise customers and attracting application workloads from SAP, Workday and other top vendors. It also rolled out an array of new analytics and AI services that look to be likely winners.

Runner-up winners: Microsoft, Salesforce, Oracle.

Why did they win?: Microsoft continued its steady push into cloud apps and infrastructure, innovated in AI, widened its embrace of open source and made the year’s most daring acquisition with the $26.2 billion purchase of LinkedIn.

Oracle also made moves in cloud across all three layers of the stack while continuing its laser focus on database innovation, industry verticals and aggressive M&A, capped off by its $9.3 billion acquisition of NetSuite.

Salesforce had another high-growth year and is cruising toward its $10 billion in annual revenue goal. While Salesforce lost out to Microsoft in acquiring LinkedIn, it innovated in AI with the launch of Einstein, while the largest-yet Dreamforce event showed customer and partner engagement is at an all-time high.

Best Enterprise Systems Integrator

Winner: Luxoft

Why did it win?: While far from the world’s largest SI, with about 11,500 employees, Luxoft stands out for its recent focus on pursuing innovation-driven projects. It’s not a company looking for growth by simply trying to squeeze more cash out of legacy SI business models.

Runner-up Winners: Accenture, Wipro

Why did they win?: Accenture went on an ambitious acquisition tear in 2016, with deals targeting machine learning, CRM, cybersecurity, boutique SaaS consultancies, creative agencies and more. Wipro continued to mature its Holmes AI platform and is using it to automate mundane coding tasks on fixed-price projects—an idea with value for both Wipro and clients.

Best Enterprise Software Acquisition

Winner: Microsoft-LinkedIn

Why did it win?: Redmond’s move to acquire LinkedIn represents a $26.2 billion bet on the value of curated business social network data in conjunction with Microsoft’s enterprise applications and Office. Constellation sees vast potential in this combination if executed correctly. Microsoft has had plenty of dud acquisitions on its track record—such as Nokia—but Constellation is confident this won’t be the case with LinkedIn.

Runner-up Winners: Dell-EMC, Oracle-NetSuite, Salesforce-Demandware

Why did they win?: While largely a consolidation play, Dell's $67 billion merger with EMC will have ramifications for hundreds of thousands of customers around the world. Oracle's purchase of NetSuite was somewhat controversial given Larry Ellison's stake in the cloud ERP vendor, but gives Oracle needed scale for its cloud business and an entry point with SMBs. Salesforce's Demandware buy gives it the commerce cloud that was sorely lacking from its lineup.

Best Enterprise Partnership

Winner: Amazon-VMware

Why did it win?: This deal, which will see VMware adopt AWS as its public cloud option, is one with strong benefits for both the vendors involved and the many VMware-centric enterprises that want to maintain those workloads without having to continually invest in new hardware. VMware shops will also be able to take advantage of the continuous drip of new capabilities that emerges from the AWS machine.

Runner-up Winners: SAP-Microsoft, the Partnership on AI

Why did they win?: SAP’s partnership with Microsoft is one of the software industry’s most venerable, and its continued success and growth are important for thousands of joint customers. This year the vendors made progress on cloud (HANA, SuccessFactors to Azure), Office 365 integration and many other areas.

Amazon, Google, Facebook, IBM and Microsoft came together around the Partnership on AI, an effort that will seek to create best practices and educate the public. While each company stands to gain individually from the effort, the broader public should as well.

Best Enterprise Software Innovation

Winner: Google, for TensorFlow

Why did it win?: TensorFlow is the second generation of Google’s machine learning technology, powering products built by more than 50 teams across the company. In November 2015 Google took the bold step of open-sourcing TensorFlow under the Apache 2.0 license, a highly permissive license that has helped develop a strong community around TensorFlow during 2016.

Runner-up Winners: Workplace by Facebook, SAP’s Digital Boardroom

Why did they win?: This October saw the general availability of Workplace by Facebook, the enterprise social network formerly known as Facebook at Work during its one-year beta program. The application seems off to a very strong start, with more than 1,000 companies using it at launch and more tellingly, with dozens of them having Workplace by Facebook deployments of 10,000 users or more. While enterprise social networks are nothing new, Facebook may have breathed fresh life into the space with a combination of its strong brand and true product innovation.

SAP’s Digital Boardroom provides a real-time analytics portal for the C-suite that leverages line-of-business data from SAP enterprise applications. While it’s still early days for the product, Constellation receives a great deal of interest from clients regarding it.

Best Enterprise CEO

Winner: Satya Nadella, Microsoft

Why did he win?: It’s been going on three years since Nadella was named CEO of Microsoft, succeeding Steve Ballmer. Nadella has consistently put his mark on Microsoft’s culture, such as by his embrace of open source, by bringing the worlds of Dynamics and Office closer together, and by aggressive investment in Azure, while continuing to foster Redmond’s vast and crucial developer community.

Runner-up Winners: Jeff Immelt, General Electric; Bill McDermott, SAP; Shantanu Narayen, Adobe

Why did they win?: Immelt is presiding over GE during a time of massive digital transformation at the company. GE is a leading voice for industrial Internet and IoT and that should continue for a very long time.

McDermott overcame the loss of an eye in 2015 and has successfully continued to navigate SAP’s often choppy political waters as its first American sole CEO, while leading the company through a major platform and business model shift.

Under Narayen’s leadership, Adobe has managed to reinvent itself. No more is it just ‘the PDF company’ or a toolmaker for creative workers. In the past several years, Adobe has shifted to a cloud subscription model while greatly expanding its play in marketing and analytics, a move that ties back to its standby creative products in a natural way.

Biggest Tech Flops of 2016

Along with the successes, this year saw quite a few high-profile disasters in the tech industry, so we had to pick winners—er, losers?—here as well.

Winner: Samsung’s Galaxy Note 7 mess

Why did it win?: This one was a pretty simple pick. Samsung conducted a global recall of its Galaxy Note 7 phone after it emerged that some units’ batteries were exploding. In October, Samsung discontinued the device entirely and by one estimate, the recall and resulting fallout cost the company $17 billion. However, while the trouble was limited to Samsung, every company with products that use lithium ion batteries was likely both worried and empathetic about the situation.

Runner-up winners: Dyn DDoS attack/Mirai botnet; Massive hacks at Yahoo revealed; Microsoft chatbot Tay develops a potty mouth

Hackers managed to hijack thousands of consumer IoT devices in October to run a massive DDoS (distributed denial of service) attack on Dyn, a company that provides Internet infrastructure services to some of the world’s most popular websites. It later emerged that the attack was executed with a botnet called Mirai, which ended up being released as open source code. Overall, this was the sort of mess that has every sign of happening again in 2017 as broad IoT security best practices remain a distant dream.

Embattled Yahoo thought it had found a buyer in Verizon in July, but recent disclosures that some 1.5 billion users had been hacked in two separate attacks are threatening to spike the deal altogether at this writing.

Finally, Microsoft’s self-learning chatbot Tay ended up getting the wrong kind of lessons from Twitter users after it was plugged into the service in March. Twitter users quickly figured out how to train Tay to spout all manner of profane, insulting and racist remarks. Microsoft ended up pulling Tay off the Internet but recently unveiled a new one named Zo.

Agree with these picks? Disagree? Send me a note at [email protected] and your responses will be included in a follow-up post.

 


MapR has platform ambitions - now it has to deliver


We had the opportunity to attend MapR’s first ever analyst event, held December 12th and 13th in San Jose. The event was well attended for a first analyst event, as to be expected when one of the three key Hadoop distribution vendors wants to update influencers. 
Want to read on? 

Here you go: Always tough to pick the takeaways – but here are my Top 3:

MapR is an enterprise software company - MapR execs made it very clear to the analysts present that MapR is an enterprise software company, meaning the main vehicle of revenue is software licenses, not services. Given the recent addition of Oracle veteran Matt Mills and more, not a surprising direction. And a valid differentiation from the two other, more services-oriented key competitors.


Different product DNA – In the early days someone at MapR made the decision that a distributed storage architecture was the better direction. What MapR calls today the ‘MapR converged platform’ is indeed different from the two key competitors in the field. And MapR can address several differentiators here, around scale, high availability, TCO and support of multi-cloud.

Platform for NextGen Apps – MapR sees itself as an ideal platform for next-generation applications. High performance and the above-mentioned capabilities make MapR an ideal platform to build large BigData applications. Among the examples mentioned of what MapR customers are building were, of course, IoT and self-driving vehicles.

 

MyPOV

A good event for MapR; it is always good to formalize the briefings with the analyst community. MapR has several DNA differences from the two other ‘musketeers of BigData’, Cloudera and Hortonworks, and made them very clear during the presentations. And enterprises need platforms to build next-generation applications. Avoiding IaaS storage lock-in is high on the agenda, and even when that comes at the price of BigData vendor lock-in, it may still be the better tradeoff for many enterprises.

On the concern side, MapR wants to be an enterprise software vendor, but shared less of a roadmap than the two other players in its field did at their respective events. And MapR now needs to execute in go-to-market and customer adoption. We heard many interesting and lighthouse-class customer stories, but they must go live and evangelize the solution more. Investment in more go-to-market capacity is under way, a good move.

Overall MapR is definitely part of the top three independent BigData / Hadoop vendors; it has substantial differentiation at the core product level. Now it needs to show the growth, and that customers really care. Stay tuned.




7 Predictions on 2017 Enterprise Technology


The growth of enterprise technology continues to fascinate us because of the sheer potential of its scalability and power to transform businesses. When we look at what’s on the horizon with both technology startups and big businesses, the possibilities seem limitless. 

Let us know what you think of our predictions on enterprise tech in 2017. Are we on track or deluding ourselves? Maybe it's too early to tell.  Reach out to any of our Constellation analysts in the Constellation Executive Network app for primers or to discuss the progress that we’re seeing globally in multiple industries, including manufacturing, retail, healthcare, and financial services. 

Hot: AI, microservices, containers, wearables, robotics, virtual/augmented reality and blockchain. 
Not: Social, mobile, cloud, big data. These areas aren't dead, they're simply assumed.

To get caught up on what others are saying about what matters most, here's a reading list of 2017 technology trends that will undoubtedly help to shape, define, and influence the coming years. Which predictions look the most likely to you? Tweet @ChrisKanaracus, our Managing Editor of Constellation Insights with your views, and he'll factor it into his analyses of disruptive technology breaking news.

1) 9 Tech Trends That Will Make Billions of Dollars Starting in 2017 - Business Insider 

2) 8 Tech Startup Trends to Watch in 2017 - CIO

3) 12 Tech Trends That Will Shape Our Lives in 2017 - Fast Co Design

4) 2017 Predictions For AI, Big Data, IoT, Cybersecurity, And Jobs From Senior Tech Executives - Forbes

5) Tech Forecast 2017: 5 Key Technologies to Double Down on Now - Network World

 



MapR Ambition: Next-Generation Application Platform



MapR promises a more scalable, reliable, real-time-capable and converged alternative to Hadoop, NoSQL databases and Kafka combined. Are companies buying it?

MapR is frequently mentioned in the same breath with Hadoop vendors Cloudera and Hortonworks, but maybe it’s time to stop thinking of them as competitors. Indeed, over the last eighteen months, MapR has added ambitious NoSQL database and streaming capabilities to what the company now calls its MapR Converged Data Platform.

The differences between MapR and its erstwhile competitors were underscored at MapR’s first ever analyst day, December 13, at its headquarters in San Jose, CA. Executives not only contrasted MapR’s platform with Hadoop, they also detailed advantages versus the NoSQL databases Cassandra, HBase and MongoDB, and an open source staple of streaming applications, Apache Kafka. They bemoaned the “complexity” and “chaos” of multi-project open-source deployments, and MapR CEO Matt Mills, a 20-year Oracle veteran, proudly declared MapR to be “a commercial enterprise software company.”


MapR presents its Converged Data Platform as a more scalable, reliable and performant alternative to Hadoop, NoSQL databases and other big data tools.

It’s not that MapR doesn’t exploit open source innovation. The MapR platform includes components of Hadoop and Spark as well as Drill and Myriad, the last two being projects incubated by MapR and contributed to open source. The platform also relies entirely on industry-standard and open source APIs (a choice the company asserts eliminates the possibility of lock-in), even when MapR has replaced the associated components.

MapR chose from its founding to replace the Hadoop Distributed File System (HDFS) with a POSIX/NFS standard file system, for example, yet developers can still use the HDFS API. The POSIX/NFS choice provided read/write capabilities (versus append-only HDFS), better performance, and a “volumes” data construct for higher scalability and easier data organization and governance.

The early POSIX/NFS choice is now paying dividends as MapR goes after database and streaming roles. The underpinning technology gives the MapR-DB database consistency, reliability and scalability advantages over HBase, Cassandra and MongoDB, says the company, yet developers can still use the HBase API. And given the breadth of capabilities across the platform (including MapR-DB), MapR cites scalability, data persistence, performance and global deployment advantages over Kafka and complex Lambda architectures (yet developers can use the Kafka API).

MapR hasn’t brought together all these capabilities just to check more boxes. Executives said they’re seeing more and more customers building out next-generation applications. The hallmark of such applications is compound requirements spanning the capabilities of file systems, search, databases and streaming systems. Another trait is the embedding of analytics directly into operational applications to support automated, data-driven actions without human intervention. MapR says its converged platform supports all of these demands with better speed, scale and reliability than you can cobble together with multiple open-source point systems.
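The idea of embedding analytics directly into an operational application can be sketched very simply: a rolling statistic computed over recent readings drives an automated action with no human in the loop. The window size, five-reading warm-up and 3-sigma rule below are illustrative choices of my own, not MapR specifics.

```python
# Sketch of analytics embedded in an operational stream: a rolling
# mean/std over recent sensor readings triggers an automated action
# when a new reading deviates sharply. Thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

class EmbeddedAnomalyTrigger:
    def __init__(self, window=20, sigmas=3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.sigmas = sigmas

    def observe(self, value):
        """Return an automated action for this reading, then record it."""
        action = "ok"
        if len(self.history) >= 5:  # need a small warm-up sample
            mu, sd = mean(self.history), stdev(self.history)
            if sd > 0 and abs(value - mu) > self.sigmas * sd:
                action = "auto_throttle"  # act without human intervention
        self.history.append(value)
        return action

trigger = EmbeddedAnomalyTrigger()
actions = [trigger.observe(v) for v in [10, 11, 10, 12, 11, 10, 95]]
```

The point of the sketch is the shape, not the statistics: the analytic result is consumed by the application itself as a trigger, rather than surfaced to a person in a report or dashboard.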

MapR shared plenty of examples of customers building out next-gen apps. A Paysafe executive was there to talk about how it detects potentially fraudulent payment transactions within milliseconds so it can stop them before they go through. Rubicon runs a real-time, high-scale online ad exchange that handles peak loads of 5 million queries per second with 300 real-time decisions per ad placement. National Oilwell Varco analyzes sensor data from its oil well drills in real time to optimize production output and support predictive maintenance. And Qualcomm monitors sensors in its semiconductor plants in real time to automate actions that improve manufacturing yields.

The typical MapR customer is experienced with big data deployments, and more than 40% are former Cloudera or Hortonworks customers, according to the company. Given MapR’s commercial approach and emphasis on sophisticated requirements, it’s not the right choice for a big data newbie or an open-source zealot. Partner Gustavo De Leon of Cognizant described would-be MapR customers as falling into the second of two classes of big data practitioners he’s seeing. First, there are the companies doing lots of big data proof-of-concept (POC) projects and not being terribly productive. Second, there are the companies that are more business focused and are concentrating on specific use cases.

De Leon’s implication was that MapR customers “want to know that they can take POCs into production and that the application will be enterprise ready and capable when they’re done.”

MyPOV on MapR Converged Data Platform

MapR’s foray into NoSQL and streaming opportunities is ambitious, but the vision to serve converging requirements and high-performance demands isn’t new to the company. It has been the company’s focus and direction for years. What was new at the analyst day was hearing the vision directly from top brass along with forward-looking statements about the roadmap, investment plans and a possible future initial public offering. What was somewhat surprising was hearing quite the degree of open-source bashing, though I am hearing growing impatience from big data practitioners about the complexity of deploying and managing dozens of separate open source projects.

It was a good first-time analyst event for MapR, but the company was a bit stingy with company measures and plans. The roadmap was more like a set of themes with no precise dates attached. I also would have liked to hear from more customers, including non-OEM customers who don’t have an interest in promoting their own business. MapR has a solid list of high-profile customers, but it’s understandably hard to get an executive from an American Express, Audi, Novartis or United Healthcare to come speak at a tiny insider event in mid December.

Given MapR’s comparatively small size (which it doesn’t disclose but is likely somewhere between $100 million and $200 million), I would have liked to have heard a more nuanced, flexible positioning in the “land-and-expand” or “we can work with incumbent tools or replace them” vein. Instead we heard the hard-sell “we can do it all and do it better than all those other [popular and widely used] tools out there.” I’m guessing that in real-world sales situations there are plenty of developers and influencers predisposed to popular open source choices. I’m also guessing MapR has an easier time making a case for its converged story once it’s established inside a company. And no doubt it gets the nod first as a big data analytics platform, and not as a stand-alone NoSQL database or streaming choice.

I completely agree with MapR that people have to stop thinking of analytics only as reports, data visualizations and other types of human interactions and start thinking more about embedding analytics into transactional applications as automated triggers and actions. At the very least it should be alerts for exception conditions. As companies move toward these sorts of sophisticated, next-gen applications, MapR will have a better and better shot at being part of the conversation.

Related Reading:
Strata + Hadoop World Highlights Long-Term Bets on Cloud
Hadoop Hits 10 Years: Growing Up Fast
Strata + Hadoop World Report: Spark, Real-Time In the Spotlight?



How to Deliver Great Customer Experience

Dr. Natalie Petouhoff, VP and Principal Analyst, Constellation Research, Inc. describes rising customer expectations and the importance of offering good customer service to create optimal customer experiences.


IoT; The need for Geospatial Integration of ‘Positions’ in Digital Smart Services

A substantial number of new consumer Smart Services delivered by a smartphone, or by in-car systems, rely on the Position of the user relative to the Positioning of a resource, such as a restaurant, to work. It seems simple, and its translation into full-blown commercial IoT systems seems straightforward, but is that really so? Is the representation of key features in an App ‘Map’, or from a Blueprint of a Building, the same as that of a Geospatial map?

The answer is obviously no; but for marking the site of a restaurant on a road for directions, as an example, Google Maps provides all the accuracy required. In comparison, a utility company relies on the detail and accuracy that a conventional Geospatial Map provides for the Service Management of its widely deployed Assets.

Industrial Internet and Industrie 4.0 initiatives in creating and deploying Digital Twins are one example of the factors bringing the requirement for good quality ‘Position/Positioning’ into focus. The rationale for a Digital Twin connected to physical versions in the real world is to gain a direct comparison between the operating conditions experienced and the theoretical assumptions. As such, the operating conditions affecting the physical versions are a critical factor. Geospatial maps provide a great deal of extra information about the surrounding terrain that may well be affecting different physical versions in different ways, as well as explaining deviations from the Digital Twin’s predictions.

Many of the large complex machines in the first wave of Digital Twinning are also mobile; Locomotives, Jet Engines, etc. all bring the need to include data on position. What happens if all the Railroads were to send the positions of their physical locomotives to the Digital Locomotive Twin using their own representational maps or drawings?

The Geospatial industry refers to these captures as ‘Scans’, and of course they are usually unique, customized ‘representations’ that need reinterpreting, as well as being very large data files to send. Assuming that the Scans are successfully transferred and integrated (not easy), then Position will be documented; but in analyzing the data the contextual Positioning becomes important. Positioning refers to the impact of the environment around the Position, as in the contours of a map detailing whether a locomotive is working on flat geography or on hills, as one example.

Positioning is as vital for most data processing as Position; even for static machinery it can be the simple relationship to other machines, such as electromagnetic interference impacting hospital MRI scanners. Reduced voltage supplies and brownouts may need to be tracked, and of course there is the question of Service Engineers’ access, amongst a host of other Positioning factors.
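To make the Position/Positioning distinction concrete, here is a minimal sketch of a single device reading that carries both, expressed as a GeoJSON Feature (RFC 7946). All field names and values are illustrative assumptions, not a published schema.

```python
# Minimal sketch of an IoT reading carrying both Position (where the asset is)
# and Positioning (the context around it), as a GeoJSON Feature (RFC 7946).
# Field names and values are illustrative assumptions, not a published schema.
reading = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-2.2426, 53.4808],   # GeoJSON order: [longitude, latitude]
    },
    "properties": {
        "asset_id": "mri-scanner-07",        # hypothetical asset identifier
        "vibration_mm_s": 4.7,               # the 'real' operating data
        # Positioning: the environment around the Position
        "context": {
            "floor": 2,
            "supply_voltage_v": 218,         # brownout tracking
            "nearby_emi_sources": ["elevator-motor-room"],
            "access": "security zone B keyholder required",
        },
    },
}
```

The point of the structure is that the Position alone (the `geometry`) is portable between systems, while the Positioning lives alongside it as context rather than being locked inside a proprietary Scan.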

A great deal of the commercial value in a Digital Twin lies in the comparison of the theoretical operational data with the actual operating data experienced by a range of Physical Twins. Understanding the deviations to enhance the design is difficult if there is no Physical Positioning data to provide the context for the eagerly sought ‘real’ data.
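As a toy illustration of that comparison, the sketch below (all numbers and field names invented) groups the deviation between predicted and actual readings by a Positioning attribute. Without the terrain context, the hilly units would simply look like under-performing designs.

```python
# Illustrative sketch: compare a Digital Twin's predicted values with actuals
# reported by physical twins, grouped by Positioning context (terrain).
# All numbers and field names are invented for the example.
readings = [
    {"unit": "loco-1", "terrain": "flat",  "predicted": 100.0, "actual": 101.5},
    {"unit": "loco-2", "terrain": "hilly", "predicted": 100.0, "actual": 117.0},
    {"unit": "loco-3", "terrain": "hilly", "predicted": 100.0, "actual": 115.0},
]

deviation_by_terrain = {}
for r in readings:
    pct = 100.0 * (r["actual"] - r["predicted"]) / r["predicted"]
    deviation_by_terrain.setdefault(r["terrain"], []).append(pct)

# Mean percentage deviation per terrain type; the hilly group stands out,
# which is an environmental effect, not a design fault.
summary = {t: sum(v) / len(v) for t, v in deviation_by_terrain.items()}
```

The same grouping could be done on any Positioning attribute (gradient, voltage, ambient temperature); the point is that the deviation only becomes interpretable once the context is attached to each physical twin.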

The diagram clearly illustrates the difference between a Geospatial map, on the left, with additional context on Positioning, and a Scan representation, on the right, which contains a great deal of data (too much) of little significance to the Positioning context.

 

The Scan on the right was designed to be of great value to the Rail Company’s own Maintenance Operations, showing the necessary detail of the installed Assets in context, or Positioning, for the use that they need to make of the information. Almost all Enterprises will have such Scans as the traditional manner of capturing and storing data in separate operational activities; and many Chief Engineers will confide that even internally they find it difficult to relate, or integrate, such Scans together.

The example below shows, on the left, the representative Scan of the installed system, probably created by the installer, whilst on the right is an example of the kind of blueprint created by the architect. The Service Engineer is expected to mentally integrate the two in order to physically locate the elements that need maintenance. Unless personally familiar with the offices and the installation, this is a difficult task to achieve before work can even start. This problem has existed for many years, and has made ‘Service Engineering management’ a prime target for IoT deployment.

Statistics on the maintenance of Building Management Systems suggest that between 10 and 30% of on-site time is spent on finding and gaining access to the item requiring service attention in a Building. The complexity of systems built into a large modern office block, and the separation of installation from maintenance, plus different security key holders for different floors, or zones, all result in a pile of different Scans using different formats, data, and scales.

As IoT is increasingly understood to bring new working practices across interconnected Business networks of devices and resources, the sharing and integration of data moves beyond the challenges illustrated at an enterprise level. Add the additional importance of ‘pictorial’ Graphical User Interfaces to simplify the presentation of complex data, and the overall importance of Geospatial becomes not only clear, but vital.

Smart City projects with public, and private, participation across a substantial geography, have made the use of Geospatial data for position and positioning a critical success factor.

The CityVerve Smart City Project in Manchester, backed by the UK government, is an excellent case study. A consortium of 21 partners, including many leading Technology Vendors, is working to deliver; “CityVerve’s ‘platform of platforms’ treats the city as a living breathing organism by giving it a technology layer that acts as a central nervous system; smartly supporting and connecting independent systems and applications”.

With a focus on four key areas; Health & Social Care, Energy & Environment, Travel & Transport, and Culture & Public Realm, the amount of data that users, Public, Private and Personal, will create, share and interact with is huge, and much of it will require integration into Services. For a Smart City to function in an integrated manner, Position and Positioning will lie at the core of much of the activity. The diagram below shows that for Services Integration it will be necessary not only to define the Geospatial location of a building, but also to expand the building using Geospatial data as a common reference data source for its internal geography.

There are specialist providers of tools for detailed Geospatial mapping at Building level, with the addition of context data related to each Asset and location. The resulting ‘mapping’ provides a rich data capability in a common format that can be shared between Services companies, allowing IoT devices to be powerfully augmented with both Geospatial and contextual data. The following diagram is a screenshot taken from the CityVerve Smart City project illustrating this capability.

Manchester Smart City Project CityVerve, showing IoT Device locations combined with context data within a building, using the capabilities provided by the UK company Asset Mapping

Any type of Data Integration has always led to a substantial effort through various technology alliances to determine standards, and sadly the result is usually more than one. Fortunately Geospatial is already standardized, and has been for hundreds of years, around the use of Latitude and Longitude, with almost the entire surface of the earth mapped in this manner to extraordinarily high standards of accuracy. More recently the recognition of the need to use Geospatial data in Technology systems has resulted in the founding of some key enabling organizations.
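Because Positions are expressed in the common latitude/longitude system, basic spatial questions can be answered without any custom reinterpretation of proprietary Scans. A standard haversine great-circle distance calculation is a small example (the coordinates below are approximate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    EARTH_RADIUS_KM = 6371.0  # mean Earth radius
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Manchester (UK) to Montreal, roughly -- coordinates are approximate.
d = haversine_km(53.4808, -2.2426, 45.5017, -73.5673)  # about 5,000 km
```

Any two systems that exchange plain latitude/longitude pairs can interoperate at this level; the harder integration problems discussed above only begin with the Positioning context layered on top.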

The Open Geospatial Consortium (OGC), an international voluntary consensus standards organization, originated in 1994. In the OGC, more than 500 commercial, governmental, nonprofit and research organizations worldwide collaborate in a consensus process encouraging the development and implementation of open standards for geospatial content and services, sensor web and Internet of Things, GIS data processing and data sharing.  https://en.wikipedia.org/wiki/Open_Geospatial_Consortium

A further practical aid comes from the OpenSensorHub who offer the following definition of their activities; “OpenSensorHub is a license free, open source software platform for geospatial (FOSS4G) sensors that allows you to easily, rapidly and affordably network sensors into a seamless SensorWeb of real-time, location-aware, interoperable, web accessible services. With OpenSensorHub, these OGC compliant SensorWebs can be enabled across all manner of space-based, airborne, mobile, in situ and terrestrial remote sensors — including your basic mobile device. OpenSensorHub finally makes it possible to integrate location-aware sensors into the geospatial mainstream.”

Google Maps is of course one of the best-known sources for adding mapping to a project, and Google provides a guide to using its Mapping products to assist first-time users.  Details can be found in Storage.Googleapis here. For those requiring greater detail and accuracy in features there is GeoMesa, an open source extension built on Apache Accumulo, supported on Google Cloud Bigtable with access via the Apache HBase API. GeoMesa running on GeoServer fully supports the Open Geospatial Consortium (OGC) standards, and further details are here.

‘Mapping’, in the broadest sense of the word, including Architectural and Engineering drawings, has built up over hundreds of years a huge library of documents. Most were created with little thought that they would ever need to be shared outside the confines of those directly working on, and familiar with, the projected use.

IoT-connected smart devices with built-in GPS mark a significant game-changing moment in the use of ‘Mapping’, and the consequences require new thinking as to the role of any document that incorporates positioning. The tools are available to make use of high quality, high accuracy Geospatial data, and to readily exchange and share that data between Devices and Services. The biggest barrier appears to be understanding the need to do this!


Women In Technology: 2016 Study Shows Potential of Women Entrepreneurs


In 2013, The Muse, in association with Women in Tech, published a report reflecting the huge potential of female entrepreneurs and employees. For example, Fortune 500 companies with at least 3 female directors have on average 53% higher returns on equity, sales and invested capital. This special report gives tangible recommendations that companies can implement to create a positive working environment for women and men to thrive in.

Here’s a sample of the findings of the current report in 2016:

  • There has been a 21% increase in undergraduate women studying computer science, but at the current rate, the US will only be able to fill 29% of computing jobs by 2020.
  • There is a 50% attrition rate amongst women in tech, from entry-level to executive, mainly due to poor work-life integration and environment.
  • In Silicon Valley alone, men are 2.7x more likely to be in a leadership position than women, who are much likelier to get “stalled” in the workplace.
  • Industries outside of technology have employed more women software engineers than the tech industry has.
  • Amongst startups, 38% of new businesses are started by women, but only between 2-6% of those founders receive venture capital.

Having been the only American female in my Ph.D. program in engineering, it certainly is encouraging to see more women in the tech business. Both men and women bring unique and special qualities to the workplace. I look forward to the future and to helping effect change in a positive way.

@DrNatalie Petouhoff, VP and Principal Analyst, www.Constellationr.com


Why A Bi-Modal Approach to Digital Transformation Is Just Stupid

Multi-Modal Approach Key to Successful Digital Transformation

Like fake news, the overhyped bi-modal approach to IT and digital transformation is a fallacy perpetuated by ivory-tower, non-pragmatic legacy research firms.  Lessons learned from successful digital transformation projects emphasize an organizational design comprised of six key virtual or physical teams (see Figure 1):

Figure 1.  The Six Components To Successful Digital Transformation Governance


  • Sustaining operations keep the lights on.  The bulk of an organization focuses on keeping the lights on.  This team’s goal is to deliver operational efficiency, rock-solid reliability, and massive economies of scale.  Key team traits include an attention to detail, strong work ethic, adherence to standards and rules.
  • Incremental innovation teams improve existing business models.  These teams have a mandate of innovating faster, better, and cheaper capabilities to existing business models.  Key team traits include domain expertise, a passion for improvement, an understanding of existing constraints, and spirit of innovation.
  • Transformational innovation teams innovate with new business models.  Often seen as the tiger team, these folks explore additional business models to pilot inside the organization.  Key team traits include a penchant for disruption, disregard for existing rules, passion for innovation, and ability to deal with abstract concepts.
  • Concept to commercialization team enables monetization.  This team must figure out how to take a proven concept from the transformational innovation team and incorporate the new business model in existing systems.  This team often comprises a multi-disciplinary group of sustaining operations, incremental innovation, and transformational innovation members.  Key traits include massive creativity, disruptive thinking, political savvyness, and understanding of human behavior and rewards.
  • Culture team infuses harmony among the teams.  This team sets the cultural norms among each of the teams.  The team must not only highlight the differences of the teams, but also find bridges among the differences to inspire innovation.  Constellation defines design thinking as unlocking solutions to questions that have not been asked.  This requires a diversity of thought across multiple disciplines.  In fact, an artist, architect, author, and accountant have different points of view that unlock innovation in problem solving and design.
  • Governance ensures overall organizational alignment and success.  This team must provide the ground rules and framework to ensure successful coordination among a variety of business objectives.  In some cases, this team sets up the partnership ecosystems for co-innovation and co-creation.  Key traits include policy making experience, program management, compensation design, and

The Bottom Line: Digital Transformation Is An Ongoing Journey, Not A One-Off Project

Digital transformation rocketed to the top of mind for brands, enterprises and organizations in 2016.  The fear of being disrupted by non-traditional competitors, margin pressure from competitors, and the realization that digital was more than just technologies gave boardrooms and CXOs the political capital to invest in digital transformation projects.  As investment increased in digital transformation, leaders realized that these projects were more than just one-time initiatives. In fact, organizations learned that digital transformation projects were continuous efforts that required more than a tiger team and a bi-modal approach for success.  Be on the lookout for Constellation’s latest report that shares insights from 2016 from clients, advisory work, and research, so that leaders can succeed in 2017.

Your POV.

Are you ready to begin your digital transformation journey?  Do you have the right governance?  Would you like to join a network of other early adopters?  Learn how non-digital organizations can disrupt digital businesses in the best-selling Harvard Business Review Press book Disrupting Digital Business. 

Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org.

Please let us know if you need help with your Digital Business transformation efforts. Here’s how we can assist:

  • Developing your digital business strategy
  • Connecting with other pioneers
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing

AWS' new data centers in Canada

Last week Amazon’s AWS division made official what was already word on the street at the AWS re:Invent event the week before: AWS’ expansion into Canada.

 
 

Worth dissecting the press release in our customary style – it can be found here:
Cloud pioneer expands global infrastructure footprint with new AWS Canada (Central) Region in Montreal, Quebec, enabling customers to run applications and store data in Canada

MyPOV – Describes what is happening. And what’s in a name? This is the first AWS region in Canada – and it is the central one. If the US is any guide, there will likely be East and West region(s) too… always good to leave room in the namespace.
With tens of thousands of active AWS Customers operating in Canada, those welcoming the new AWS Region include National Bank of Canada, Salesforce, Lululemon Athletica, Desire2Learn, The Globe and Mail, The Toronto Star, the International Civil Aviation Organization (ICAO), British Columbia Hydro, TMX Group, the University of Alberta, Shaw Communications, and Kik Interactive

MyPOV – Nothing is better for an IaaS provider than bringing existing load to a new set of data centers. The list will get more attention later in the press release.
SEATTLE--(BUSINESS WIRE)--Dec. 8, 2016-- (NASDAQ:AMZN) – Amazon Web Services, Inc. (AWS), an Amazon.com company, today announced the launch of the AWS Canada (Central) Region. With this launch, AWS now provides 40 Availability Zones across 15 technology infrastructure regions globally, with another seven Availability Zones and three regions in the UK, France, and China expected to come online in the coming months. Tens of thousands of Canadian customers are using other AWS Regions and starting today, developers, start-ups, and enterprises, as well as government, education, and non-profit organizations can leverage the AWS Cloud to run their applications and store their data on infrastructure in Canada. Developers can sign-up and get started today at: http://aws.amazon.com.

MyPOV – Good rundown of AWS Availability Zones and regions. One of the interesting aspects to watch will be how fast Canadian customers move over to the new Canada Central region. Also, an insight on the ratio of Availability Zones to regions: it was clearly two Availability Zones per region a few years back… except for the huge US East region. With the expansion AWS will be at 15 regions and 40 Availability Zones, so a few will only have two Availability Zones. I asked AWS’s datacenter expert Cameron at re:Invent if three zones is the new best practice… and he confirmed it with a diplomatic “three is better than two.”
 
The AWS Canada (Central) Region offers two Availability Zones at launch. AWS Regions are comprised of Availability Zones, which refer to technology infrastructure in separate and distinct geographic locations with enough distance to significantly reduce the risk of a single event impacting availability, yet near enough for business continuity applications that require rapid failover. Each Availability Zone has independent power, cooling, physical security, and is connected via redundant, ultra-low-latency networks. AWS customers focused on high availability can architect their applications to run in multiple Availability Zones to achieve even higher fault-tolerance. AWS also provides two Amazon CloudFront edge locations in Toronto and Montreal for customers looking to deliver websites, applications, and content to Canadian end users with low latency. These locations are part of AWS’s existing network of 68 edge sites across North and South America, Europe, Asia, and Australia.

MyPOV – Good, so we know that AWS Canada Central starts out with two Availability Zones. Interestingly, edge locations are mentioned for Toronto and Montreal – but not for Western Canada. Potentially network wiring and latency make it better for Western Canada locations (e.g. Vancouver, Calgary) to be serviced from US West in Oregon… that would be good news from a performance perspective – but probably not from a Canadian data privacy and data residency perspective. But more on that later.
 
The new AWS Canada (Central) Region continues the company’s focus on delivering cloud technologies to customers in an environmentally friendly way. AWS data centers in Canada will draw from a regional electricity grid that is 99 percent powered by hydropower. More information on AWS sustainability efforts can be found at https://aws.amazon.com/about-aws/sustainability.

MyPOV – Good to see AWS progress in becoming sustainable; it is of course easier when you have an electric grid at your disposal that is powered by clean hydroelectric power, like Canada’s.
 
“For many years, we’ve had an enthusiastic base of customers in Canada choosing the AWS Cloud because it has more functionality than other cloud platforms, an extensive APN Partner and customer ecosystem, as well as unmatched maturity, security, and performance,” said Andy Jassy, CEO, AWS. “Our Canadian customers and APN Partners asked us to build AWS infrastructure in Canada, so they can run their mission-critical workloads and store sensitive data on AWS infrastructure located in Canada. A local AWS Region will serve as the foundation for new cloud initiatives in Canada that can transform business, customer experiences, and enhance the local economy.”

MyPOV – Good quote by Jassy, addressing well all the business opportunity the cloud has; AWS now wants a local part of that business in Canada.
 
“The digital economy is now the economy itself. Virtually every sector of the economy is propelled by digital technologies, which are being enabled by cloud computing,” said Navdeep Singh Bains, Minister of Innovation, Science, and Economic Development in Canada. “The rapidly growing demand for digital services is one reason for the significant investment that Amazon Web Services is making in Canada. On behalf of the Government of Canada, I congratulate Amazon on the success of its cloud business and welcome the expansion of Amazon Web Services in this country.”

MyPOV – I can’t remember seeing a cabinet-level minister on a press release, but it makes clear that this was an important business decision for the Canadian government.
 
“Significant projects like the one being realized by Amazon Web Services represent the kind of large-scale investment that take Quebec a long way toward its goals in the digital world. Indeed, this initiative will stimulate the development of cloud computing in Quebec, a key area that can be an engine for our province's information technology and communication sector,” declared Dominique Anglade, Minister of the Economy, Science, and Innovation in Quebec, and Minister responsible for the Digital Strategy.

MyPOV – And yes, we are in Canada, so the statement from the Quebec counterpart can’t be missing. Quebec will see immediate economic benefit from the AWS region.
 
All AWS infrastructure regions around the world are designed, built, and regularly audited to meet rigorous compliance standards and provide high levels of security for all AWS customers. These include ISO 27001, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI-DSS Level 1 and many more. With AWS, customers are in control of their data and choose the AWS Region(s) where they want their data stored. Data does not move between AWS Regions unless the customer chooses to do so, and AWS provides a variety of options – both from AWS and APN Partners – enabling customers to encrypt their data in motion or at rest if they desire. More information on how customers using AWS can meet their security, data privacy, and compliance requirements can be found at https://aws.amazon.com/security.

MyPOV – This would not be an AWS region opening without security-related statements and certifications. So, no surprise they are not missing here either, along with the statement regarding data not flowing across regions… unless the customer decides so.
 
Customers and APN Partners Welcome the AWS Canada (Central) Region

For more than a decade, AWS has changed the way organizations acquire technology infrastructure. AWS customers are not required to make any up-front financial or long-term commitments, paying on demand for the IT resources they use rather than incurring large capital expenses. This enables them to scale quickly by adding or shedding resources at any time, accelerate their time to market with innovative applications, and free up limited engineering resources from the undifferentiated heavy lifting of running backend infrastructure—often while significantly improving operational performance, reliability, and security in the process. This has led to more than two million active customers using the AWS Cloud each month in over 190 countries around the world.

MyPOV – Good summary on what cloud providers like AWS do. Interesting statistic on 2 million active customers across 190 countries.
 
Salesforce, the Customer Success Platform and world's #1 CRM company, will leverage AWS Cloud infrastructure for a new Canada-based instance for its core services, starting in mid-2017. “Partnering with AWS in Canada will enable us to continue to deliver trusted solutions to our customers in the region with high levels of reliability, performance, and security,” said Richard Eyram, Area Vice President, Salesforce Canada.

MyPOV – SaaS provider load is a prime target for all IaaS providers, especially when opening a new location. The load conformity (different from e.g. single company-by-company outsourcing deals) makes this a very interesting opportunity in general. More specific to Salesforce, it’s a coup for AWS. This is likely the first region to run Salesforce after both vendors announced their partnership earlier this year (see below). Interestingly it also offers a glimpse into Salesforce architecture: referring to ‘core services’ is ambiguous; the interesting piece here is that e.g. Marketing Cloud, Heroku etc. already run on AWS, while Sales Cloud and Service Cloud do not. Does Salesforce have a major tech stack announcement buried in here? We need to get confirmation on what ‘core services’ means from Salesforce.
 
National Bank of Canada, one of Canada’s leading financial services organizations with over CAD$219 billion in assets, chose the AWS Cloud to help it collect and process a fast-growing volume of stock-market financial data. “The application we were using wasn’t effective. We were only able to answer 10 percent of the questions we wanted to answer. We also couldn’t process historical data, which we needed to do to get more context. The speed and performance of AWS is impressive and data manipulation processes that once took days are now done in one minute,” said Pascal Bergeron, Director of Algorithmic Trading for the bank’s Global Equity Derivatives Group. “We have been able to better serve our customers and have improved and optimized trading operations, therefore generating more revenue for National Bank of Canada.”

MyPOV – Good statement on why national banks, oversight institutions and banks in general move to the cloud. The SEC and FINRA (a re:Invent presenter) have been on AWS for quite some time. Now Canadian central banks, regulators and commercial banks can do the same – with no data residency challenges.
 
Porter Airlines is an award-winning regional airline headquartered in Toronto that provides flights to over 23 destinations in Canada and the United States. Porter needed the ability to respond instantly to fluctuating load demands on their public site with a scalable and low-cost solution. Porter needed the ability to store large datasets, transform them, and make them available to other applications and end users for analytics and actions. Porter looked to AWS and services like Amazon Redshift to provide the scalable highly available infrastructure required to meet these goals. “Amazon solved a lot of our problems around scale. Specifically, with our data, AWS answered the questions we used to have to figure out ourselves – like how do we scale our massive data store, how do we access it quickly, how do we keep it secure – and they gave us the solution needed,” said Dan Donovan, CIO for Porter Airlines. “So, we now have the time, freedom and confidence to concentrate on how to make our passengers’ experiences better. Using AWS is one of the main ways we do this, and now that AWS has opened a local region in Canada, we can move even more of our systems to the AWS Cloud and put more focus on enhancing passenger experience.

MyPOV – Good airline / transportation showcase with Porter, and what enterprises hope for from the cloud – a scalable, elastic solution for next-generation applications.
 
Lululemon is a technical athletic apparel company that makes technical athletic clothes for yoga, running, working out, and most other sweaty pursuits. Based in Vancouver, they started out of a yoga studio and quickly became a global retailer and community hub for encouraging healthy lifestyles and habits. In order to rapidly build and deploy their digital marketing properties for the 2016 holiday season, Lululemon leveraged AWS CloudFormation, AWS Lambda, and AWS Elastic Beanstalk to streamline the management, deployment, and continuous delivery of their application. “Leveraging AWS allows us to spend more time focusing on what truly differentiates us in the market, rather than on maintaining custom infrastructure solutions,” said Sam Keen, Director of Product Architecture. “AWS Services are highly performant and easily choreographed, allowing us to measure deployments in minutes or even seconds. We see competing cloud providers cloning AWS services but we remain with AWS since they are far in the lead and continue to accelerate the release of new services and regions, now with an AWS Region in Canada, which only serves to continue to enhance the value of their offerings.”

MyPOV – Good showcase of an existing AWS customer in Canada being motivated even more by a local region in Canada – and a good brand name, too.
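The CloudFormation-driven pattern Lululemon describes can be sketched with a minimal template. This is an illustration only – the resource names and runtime are my own assumptions, not details of Lululemon's actual stack:

```yaml
# Illustrative CloudFormation sketch of a small serverless deployment.
# Resource names are hypothetical; not Lululemon's actual configuration.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal serverless stack - one Lambda function and its execution role.

Resources:
  DemoFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

  DemoFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt DemoFunctionRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              return {"statusCode": 200, "body": "ok"}

# Targeting the new Canadian region is just a flag at deploy time:
#   aws cloudformation deploy --template-file stack.yml \
#       --stack-name demo --capabilities CAPABILITY_IAM --region ca-central-1
```

The point of the pattern is that the whole stack is declared in one versioned file, so deployments become repeatable and fast – the "deployments in minutes or even seconds" Keen refers to.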

D2L (formerly Desire2Learn), a learning technology leader, recently chose AWS as its strategic cloud infrastructure service provider. “By leveraging built-in AWS Cloud services such as Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), Amazon CloudFront, Amazon Elasticsearch, and the suite of AWS analytics and security services, D2L is accelerating our innovation and global expansion in a cost-effective way to serve millions of learners,” said Nick Oddson, CTO of D2L. “Serving our customers from AWS locations in the U.S., Europe, Asia, and now Canada, learners everywhere can have an exceptional learning experience on Brightspace, our award-winning LMS. With AWS’s reliability, security, and availability, D2L will continue to provide our high level of service and a global, end-to-end security approach, now on one trusted infrastructure,” said Oddson.

MyPOV – Next category – a global ISV in education technology, already in multiple regions, eager to get into another one.
 
Sequence Bio is a data-driven biotechnology company in Newfoundland and Labrador. "Sequence Bio hopes to obtain approval to embark on a 100,000 person genome sequencing project in Newfoundland and Labrador that deals with sensitive genomic and health information - having a new AWS Region allows us to build and deploy our platform and keep data 100 percent in Canada," said Dan Brake, Director of Technology Development, Sequence Bio.

MyPOV – Next category – healthcare. Very tricky, as Canada has difficult data residency laws (more below), and it is likely a vendor like Sequence Bio can only move to the cloud with a local cloud provider to satisfy healthcare data privacy laws.
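One common way teams operationalize a "100 percent in Canada" requirement like Sequence Bio's is a residency guard in their provisioning tooling that rejects any non-Canadian target region before anything is deployed. A minimal sketch, entirely my own illustration (the region set and function names are not part of any AWS SDK):

```python
# Hypothetical data-residency guard for deployment tooling.
# As of the 2016 launch, ca-central-1 was the only Canadian AWS region.
CANADIAN_REGIONS = {"ca-central-1"}


def assert_canadian_residency(regions):
    """Raise ValueError if any requested region would place data outside Canada."""
    offenders = [r for r in regions if r not in CANADIAN_REGIONS]
    if offenders:
        raise ValueError(f"Data residency violation, non-Canadian regions: {offenders}")
    return True


# Usage: run before provisioning; a compliant plan passes silently.
assert_canadian_residency(["ca-central-1"])
```

The design choice here is to fail fast at plan time rather than audit after the fact, which is what regulated-industry customers generally need to demonstrate compliance.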
 
Postmedia is one of the largest news media companies in Canada with more than 200 brands across multiple print, online, and mobile platforms. “As one of the earliest adopters of cloud computing in Canada, we have utilized AWS for years – and it has delivered on the promise of a powerful, cost-effective, flexible and innovative cloud offering for us,” said Thomas Jankowski, EVP and Chief Digital Officer, Postmedia. “We are excited that AWS is bringing even more capabilities to market in Canada, just in time for our B2B platform build out of the Postmedia Innovation Outpost at Communitech (Waterloo, ON).”

MyPOV – Next category – media and entertainment, an existing AWS customer, happy to bring things home potentially and do more with AWS in Canada.

Investing in Canada’s Cloud Future

The AWS Partner Network (APN) includes tens of thousands of independent software vendors (ISVs) and systems integrators (SIs) around the world, with APN Partner participation in Canada growing significantly over the past 12 months. APN Partners build innovative solutions and services on the AWS Cloud and the APN helps by providing those partners with business, technical, marketing, and go-to-market (GTM) support. APN SIs such as Accenture, Deloitte, Scalar Decisions, TriNimbus, Slalom Consulting, iTMethods, and Softchoice are helping enterprise and public sector customers migrate to AWS, deploy mission-critical applications on AWS, and provide a full range of monitoring, automation, and management services for customers' AWS environments. AWS ISVs in Canada including Salesforce.com, NuData Security, Acquia, Silanis, OpenText, Splunk, Adobe, and NthGen Software will be able to serve their Canadian customers from the AWS Canada (Central) Region. Customers can easily find, trial, deploy, and buy software solutions for the AWS Cloud on the AWS Marketplace.

MyPOV – Partners have been the latest push on the go-to-market side for AWS… no surprise they are mentioned here – and they add to the importance of the customer list above. And large partners are ready as well.

AWS offers a full range of training and certification programs to help Canadian professionals who are interested in the latest cloud computing technologies, best practices, and architectures, advance their technical skills. Additionally, the AWS Educate program promotes cloud learning in the classroom and has been adopted by more than 500 institutions worldwide. The program helps to provide an academic gateway for the next generation of IT and cloud professionals. The AWS Activate program provides Canadian-based startups with the resources they need to quickly get started on AWS and scale their businesses. AWS has teamed with accelerators, incubators, Seed/VC Funds, and startup-enabling organizations such as FounderFuel, Real Ventures, the Business Development Bank of Canada, iNovia Capital, OMERS Ventures, and others that provide a range of services including training, AWS credits, capital, in-person technical support, and other benefits.

MyPOV – Good to see the AWS education tools available in Canada, too, right from the get go.

 

Overall MyPOV

The land grab for cloud is on, and AWS is now present in Canada. That puts it behind Microsoft and IBM, but ahead of Google. But you don’t always have to be first to go big, and with this announcement AWS has certainly gone big. Twelve months ago, for example, Salesforce was not yet a partner; Workday just announced its partnership and likewise picked Canada as its first AWS location. So, ironically, the approximately 35M+ Canadians had to wait until 2016 to get an AWS region – sitting on the same continental plate as the US made it easier for providers to serve Canada from the US first.

The other key driver is not just the economic size of Canada, but its relatively complex data privacy laws (PIPEDA), which are not only federal but can also apply at the provincial and vertical level. Legislation for healthcare, banks, and other highly regulated industries is already complex and will only get more so in the near future. Opening a local AWS region is, to a certain point, an overdue move. But then, Canadians are the largest group of people worldwide not to have their own international country access code. The downside of sitting close to the US geographically...

Overall an important day for Canadian enterprises. No more hiding behind data privacy (as we have personally heard in e.g. the Healthcare vertical) as a reason for not considering the public cloud, in this case AWS. So, congrats to AWS, and time to learn that AWS now has data centRes in Canada – more specifically, deux centres de données in Montréal. If that isn’t something, eh?!
 
----------------
Credit to my colleague Alan Lepofsky (his blog is here) on helping with the Canadian localization, much appreciated. 
 