
Ultimate’s UltiConnect Day #1 Keynote - 3 Takeaways


We are attending Ultimate’s user conference, UltiConnect, at the Bellagio in Las Vegas. The conference kicked off with a welcome reception at Drais, a great location for getting attendees to enjoy Las Vegas. With over 2,000 attendees, more than 60% of them first-timers, the conference is seeing record attendance.

 
 
So here are my Top 3 takeaways from the keynote:
  • People centricity remains front and center for Ultimate – Ultimate has been stressing people centricity for a long time, and no surprise it was front and center at today’s keynote. CTO Adam Rogers walked us through the three main directions in which Ultimate is improving the people / employee experience:
    • Don’t waste people’s time – A good direction; see the payroll innovation below as a key deliverable.
    • Build stronger leaders – Equally good – Ultimate will help leaders become more effective through suggested actions.
    • Let HR focus on Strategy – Probably the best of the three, as a lack of strategic focus is pushing many HR leaders away from the executive table (and its conversations).
 
  • Innovation in payroll with PayInsights – During the keynote, Martin Hartshorne walked us through the importance of getting payroll right. As we have pointed out before, everything stops when the paycheck isn’t right: for the individual employee, who goes back to HR, and for all of HR and the enterprise when a large employee group is affected. Over the next 12 months Ultimate will work on capabilities that help employees better understand their paychecks and interact more efficiently with HR.
 
  • Ease of doing business – Software vendors often get set in their actions and processes as they scale their operations, so revisiting best practices is a good move, and Rogers announced three new initiatives:
    • Tierless Support – Nobody wants to wait for the next-level support agent to solve an issue, so it is good to see Ultimate eliminating the tiering of support representatives. Removing tiers does not by itself raise the experience level of the support representatives, so it will be interesting to see how Ultimate solves that.
    • Learning Center – Ultimate will offer new ways to understand its software, always a good move.
    • New online service experience – The fastest way to resolve a support issue is self-service, so it’s good to see that Ultimate is making it easier for customers to resolve support issues directly and on their own. Customers love empowerment.
 

MyPOV

A good start for UltiConnect, which is not only a customer conference but also a customer appreciation event, and Ultimate is striking a good balance between the two. Focusing on people centricity, and more specifically on employee experience, is a good true north for any HR software vendor, and it is good to see Ultimate build more capabilities in that direction. Coupled with improvements in know-how transfer and customer support, Ultimate is doing the right things to become an even more attractive HCM vendor. Again, a good start for the UltiConnect conference; stay tuned.

More on Ultimate:
  • Event Report - Ultimate Software Connection - People first and an exciting roadmap ahead - read here
  • First Take - Ultimate Software UltiConnect Day #1 Keynote - read here
  • Event Report - Ultimate's UltiConnect - Off to a great start, but the road(map) is long - read here.
  • First Take – 3 Key Takeaways from Ultimate’s UltiConnect Conference Day 1 keynote – read here.
Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here

 

IoT Pilots should include basic security functional elements for experience

Mastering IoT security means mastering new security techniques

Security starts with the identification of risks, which in turn defines the actions that are required. IoT devices range from simple sensors to embedded intelligence in sophisticated machines, and their deployment covers the whole spectrum of industries and applications, so there is no single standard answer. It may seem unnecessary to impose full IT security practices on a pilot of a handful of simple monitoring sensors in a building, but a pilot should be the opportunity to learn about the technology and security aspects as well as the business benefits.

 

Current risk justification often focuses on the obvious difference in security risk profiles. Taking a building management IoT deployment as a simple example, downstream data flows from IoT temperature monitoring points are seen as low to minimal risk, compared with upstream command responses that activate power, heating or other building functions.

 

But this misses the risk to the enterprise from each and every IoT sensor acting as a network access point that could be compromised. Eurecom, a French technology institute, discovered 38 vulnerabilities in the firmware of 123 IoT sensing products. Moving from hundreds to thousands of connected IoT devices multiplies the risk of security breaches to new levels.
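To make that multiplication concrete, here is a small back-of-the-envelope calculation; the 1% per-device compromise probability is purely an illustrative assumption, not a measured figure.

```python
# Back-of-the-envelope illustration: if each device independently has some
# small probability of being compromised over a period, the chance that at
# least one device in the fleet is compromised grows quickly with fleet size.
# The 1% per-device figure is invented purely for illustration.
p_device = 0.01

for fleet_size in (10, 100, 1000):
    p_at_least_one = 1 - (1 - p_device) ** fleet_size
    print(f"{fleet_size:5d} devices -> {p_at_least_one:.1%} chance of at least one compromise")
```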

 

Experts believe it likely that many pilots and initial IoT deployments will occur without an adequate understanding of the security risks, and will require expensive retrofitting. A blog cannot provide in-depth coverage of the topic, but it is an excellent format for drawing attention to the issues and providing links to more in-depth papers. For simplicity, and in line with the popularity of IoT for building management, this blog uses IoT sensor deployment in buildings as an illustrative use case.

 

Before considering new security capabilities that have been, or are being, developed for the IoT marketplace, it pays to understand the basic architectural model. The so-called Final Mile Architecture, described in some detail in the blog on the importance of using Final Mile Architecture in an IoT pilot, stressed the importance of understanding the use of Connection, Asset Data and Mapping, and Data Flow management. However, that blog did not mention the need to consider security aspects, for example the importance of a firewall-protected ‘safe’ location for the IoT Asset Data and Mapping engine together with the Data Flow engine.

 

Whilst network connection management is understood from its role in IT systems, there is very little understanding of the use and role of IoT Gateways, Asset Data and Mapping engines, or Data Flow engines as core building blocks in IoT deployment, let alone how to use each to reduce security risk and vulnerability.

 

Most IoT sensor deployments will make use of one of the specialized physical network types, such as Zigbee, that interconnect low-value sensor points, and will connect to the main ‘Internet’ through an IoT Gateway. IoT Gateways come in all forms, from simple physical interconnection of different network media to devices with sophisticated intelligent management that introduces security capabilities. Intel publishes a good guide to IoT Gateways in general, and Cisco offers a useful FAQ on the topic.

 

The choice of an IoT Gateway product for a simple or pilot deployment tends to focus on the Gateway’s primary physical network function, rarely recognizing that a Gateway is a key access point to an enterprise or public network and should be secured.

 

The IoT Gateway, coupled with network connection management, should be considered the first major security point in IoT architecture. Some IoT Gateways add encryption to traffic forwarded across the network as a further security feature. Citrix publishes a useful guide to the security implications of IoT Gateways, and Intel offers a guide to the implementation of security profiles in IoT Gateways. IoT Gateway physical locations are usually decided by the transmission capabilities of the sensor-side network, but the physical location of the next two functional blocks, the Asset Data and Mapping engine and the Data Flow engine, is a critical security consideration.
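To make the Gateway’s security role a little more tangible, here is a minimal sketch of a gateway loop that encrypts sensor payloads before forwarding them upstream. It is purely illustrative: the function names are invented, and the use of Python’s cryptography library (Fernet) simply stands in for whatever encryption a given Gateway product supports.

```python
# Illustrative sketch only: a minimal IoT gateway loop that encrypts sensor
# payloads before forwarding them to the enterprise network. Function names
# and the use of Fernet are assumptions, not any vendor's API.
import json
from cryptography.fernet import Fernet

# In practice the key would be provisioned securely (e.g. at manufacture or
# via a device-management service); generating it inline is for demo only.
GATEWAY_KEY = Fernet.generate_key()
cipher = Fernet(GATEWAY_KEY)

def read_sensor_frame() -> dict:
    """Stand-in for reading one frame from the Zigbee/serial side."""
    return {"sensor_id": "temp-0042", "value_c": 21.5, "ts": 1457000000}

def forward_upstream(ciphertext: bytes) -> None:
    """Stand-in for pushing the encrypted frame to the enterprise network."""
    print(f"forwarding {len(ciphertext)} encrypted bytes upstream")

def gateway_cycle() -> None:
    frame = read_sensor_frame()
    # Encrypt the whole payload so traffic crossing the building network
    # is opaque even if an access point is compromised.
    ciphertext = cipher.encrypt(json.dumps(frame).encode("utf-8"))
    forward_upstream(ciphertext)

if __name__ == "__main__":
    gateway_cycle()
```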

 

The IoT architectural question of where, and how, processing power relates to network architecture was outlined in the blog IoT Architecture. But the arrangement and physical location of the key functions, the Asset Data and Mapping engine and the Data Flow engine, in relation to security will depend on individual deployment factors. Therefore the following statements are general principles applied to the building management example.

As the role and capabilities of an Asset Data and Mapping engine and a Data Flow engine are not well understood, it may be worth reading a previous blog, IoT Data Flow Management, the science of getting real value from IoT data. The white paper Data Management for IoT provides more detail on the use of IoT data and how it differs from conventional data. However, the best explanation of Asset Data and Mapping, with its function of adding context data and location to simple IoT sensor event data, comes from watching the Asset Mapping Explainer video on Building Management.

It is good security practice to keep sensor event traffic across the network semi-anonymous and not to append the critical contextual data that identifies the sensor, its location and the complete data file from the Asset Data and Mapping engine until the event is securely within the firewall/data center and ready for processing.

Just as few pilot installations appreciate the full role of the IoT Gateway beyond physical functionality, few pilots include the means to manage large numbers of IoT sensors beyond a simple, recognizable, representative number on a dedicated GUI screen. Good practice will use an IoT Gateway with encryption to ensure that all data traversing the network to the Asset Data and Mapping engine has low vulnerability. After the full data set is appended to the sensor event data by the Asset Data and Mapping engine, it becomes an important architectural consideration to limit where on the network this data is accessible.
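As a rough illustration of this enrichment principle, the sketch below keeps only an opaque sensor ID on the wire and appends building and asset context from an Asset Data and Mapping store only once the event is inside the firewall. The asset map and field names are hypothetical, not any product’s data model.

```python
# Illustrative sketch: enrich a semi-anonymous sensor event with asset context
# only after it has reached the protected side of the firewall. The asset map
# and field names are hypothetical, not a specific product's data model.
from dataclasses import dataclass

@dataclass
class AssetRecord:
    building: str
    floor: int
    asset_type: str

# Asset Data and Mapping store, held inside the firewall / data center.
ASSET_MAP = {
    "temp-0042": AssetRecord(building="HQ-North", floor=3, asset_type="HVAC temperature"),
}

def enrich_event(event: dict) -> dict:
    """Append contextual data to an anonymous event for downstream processing."""
    asset = ASSET_MAP.get(event["sensor_id"])
    if asset is None:
        # An unknown sensor ID is a security signal in its own right.
        raise ValueError(f"unmapped sensor id: {event['sensor_id']}")
    return {**event, "building": asset.building,
            "floor": asset.floor, "asset_type": asset.asset_type}

# On the network, only {"sensor_id": "temp-0042", "value_c": 21.5} travels;
# the full, identifying record exists only after enrichment.
print(enrich_event({"sensor_id": "temp-0042", "value_c": 21.5}))
```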

Similar considerations apply to the Data Flow engine, in terms both of its location and of its role and use as part of the IoT security architecture. A Data Flow engine, as its name suggests and as its functionality is described in the blogs previously referenced, can ensure that not all data is flooded across the entire network.

Cleverly positioned IoT Data Flow engines can control and manage data, using elements of the data payload to direct it to the required destinations. Avoiding all data being available over the entire network is another basic security good practice in IoT architectural design.
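A minimal sketch of that payload-based routing idea follows; the rules and destination names are invented for the example and do not reflect a specific Data Flow engine product.

```python
# Illustrative sketch: a tiny Data Flow engine that routes events to specific
# destinations based on the payload, instead of broadcasting everything
# everywhere. Rule and destination names are invented for the example.
from typing import Callable

Destination = Callable[[dict], None]

def to_building_dashboard(event: dict) -> None:
    print("dashboard <-", event)

def to_maintenance_queue(event: dict) -> None:
    print("maintenance <-", event)

# Each rule is a predicate on the payload plus the destinations it unlocks.
ROUTING_RULES: list[tuple[Callable[[dict], bool], list[Destination]]] = [
    (lambda e: e.get("asset_type") == "HVAC temperature", [to_building_dashboard]),
    (lambda e: e.get("value_c", 0) > 35.0, [to_maintenance_queue]),
]

def route(event: dict) -> None:
    for matches, destinations in ROUTING_RULES:
        if matches(event):
            for send in destinations:
                send(event)

route({"sensor_id": "temp-0042", "asset_type": "HVAC temperature", "value_c": 36.2})
```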

IoT architecture that incorporates basic security elements in its design is a new discipline, and as such it really should be part of proving pilots, to gain experience with these new functional building blocks before moving to scaled deployments.

As IoT gains momentum and increasingly intelligent devices are interconnected, security becomes an increasingly important issue; witness the challenges with mobile phones and tablets today. Developing a full understanding of all the elements and vulnerabilities requires an effort to master the topic, and the rest of this blog is devoted to providing the necessary links.

The development of both new security risk and protection methodologies and new technology capabilities is under way and there are several different initiatives driving or coordinating efforts that provide interesting details.  

Two good starting points are: 1) the International IoT Security Foundation, for a general appreciation of the subject, broken down into the various elements and issues in a multipart series; and 2) the ambitious OWASP (Open Web Application Security Project) Internet of Things Project, which describes itself as designed to help manufacturers, developers, and consumers better understand the security issues associated with the Internet of Things, and to enable users in any context to make better security decisions when building, deploying, or assessing IoT technologies. The project looks to define a structure for various IoT sub-projects such as Attack Surface Areas, Testing Guides and Top Vulnerabilities.

A more commercial view comes from WindRiver, an Intel company whose products are embedded into Intel processors, and from there into other products, in its white paper on Security in the Internet of Things, with the interesting subtitle ‘Lessons from the past for the connected future’. All these references provide both methods and an architectural appreciation of the challenge, with solutions using current technology. There are, however, two new technology approaches: one aiming to authenticate process interactions and the other to authenticate actual processor functions.

Blockchain has suddenly gained a big following for its possibilities in ensuring that ‘chain’ reactions, or interactions, can be tested and established as secure in their outcomes. Though somewhat infamous for its relationship to the Bitcoin Internet currency, it nevertheless has much wider applicability in the ‘any to any’ environment of IoT. IBM has built a complete blockchain demonstrator, reported by CIO online under the headline IBM Proof of Concept for Blockchain-powered IoT.

PUF, standing for Physically Unclonable Function, is a technique that reads the variations introduced during chip production as a unique ‘signature’ for the chip, as part of establishing its authenticity. This unique signature is used to create an encrypted checksum reply to an identity challenge, enabling several different possible uses. Wikipedia provides a good description of the basic technique and its principal applications.
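The challenge-response flow can be sketched as follows. A real PUF derives its response from uncontrollable silicon variation, so the keyed hash standing in for the chip below is purely illustrative.

```python
# Illustrative sketch of PUF-style challenge-response authentication.
# A real PUF's response comes from uncontrollable silicon variation; here a
# per-device secret fed to HMAC merely stands in for that physical function.
import hashlib
import hmac
import os
import secrets

class SimulatedPufDevice:
    def __init__(self):
        # Stand-in for the chip's physical uniqueness.
        self._physical_fingerprint = os.urandom(32)

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._physical_fingerprint, challenge, hashlib.sha256).digest()

# Enrollment: the verifier records challenge/response pairs in a secure phase.
device = SimulatedPufDevice()
enrolled_challenge = secrets.token_bytes(16)
enrolled_response = device.respond(enrolled_challenge)

# Later, in the field: replay the enrolled challenge and compare responses.
def authenticate(dev: SimulatedPufDevice) -> bool:
    return hmac.compare_digest(dev.respond(enrolled_challenge), enrolled_response)

print("authentic device:", authenticate(device))
print("cloned device:   ", authenticate(SimulatedPufDevice()))
```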

In conclusion, the following quote is taken from the concluding summary of the Telefonica white paper ‘Scope, Scale and Risk as never before’:

The networks IoT creates will be some of the biggest the World has ever seen. And that makes them enormously valuable to attackers . . . it is apparent that the Internet of Things is growing far faster and with a higher user knowledge base than its predecessor – The Internet itself. And this raises significant concerns.

What is a pilot today, and a closed IoT network tomorrow, will one day be part of the biggest network the world has ever known, so in planning a pilot, or a deployment, it is absolutely necessary to understand the security dimension.

 


Cisco Spark - On the Road To Success


Today Cisco announced two very strategic moves geared towards helping the success of their Cisco Spark collaboration platform.  

For those of you unfamiliar with Spark, it's Cisco's latest platform for team communication and collaboration. It brings together chat, voice calls, video meetings and file sharing. After several failed attempts in this space with Cisco Quad and WebEx Social, Cisco finally appears to be on the right track in developing a platform and ecosystem that is more in line with what customers are looking for in a simple, cloud-based, integrated tool. That said, there is still some confusion and overlap in their platform between Spark, WebEx, Jabber and Tropo. This is also a very competitive market with products like Slack, CoTap, Glip, HipChat, Unify Circuit, Ryver and others, which makes today's announcements even more significant.


 

The Success of a Platform Revolves Around Its Ecosystem

From their press release: "We want to make sure all great ideas come to life. We don’t want a lack of funding or support to get in the way. So in partnership with Cisco® Investments we have created a fund to invest $150 million in the Cisco Spark ecosystem. This fund will cover direct investments, joint development, additional enhancements and developer support."

As I mentioned above, the communication and collaboration market is a highly competitive one. It takes a lot to differentiate in this space, as most products have very similar features. Investing $150M to have 3rd party developers extend and enhance the functionality of Spark shows serious commitment from Cisco. Software vendors like Microsoft, Google, Salesforce and IBM already have large partner ecosystems, and recently Slack announced an $80M fund for developers. In a pre-briefing before today's announcement I joked with Cisco that they must have tried hard to get $160M so they could say they doubled Slack's deal.

With so many options available to developers today, it's vital that vendors build strong and trusting relationships with their partners. They must offer them training, support, financing, marketing, and more. While today's announcement is a great first step, the true measurement will come in 3, 6, and 12 months, when we see how this fund has been leveraged and what solutions have been created because of it. I hope Cisco is open with this information and shares several success stories.

For more information visit the Cisco Spark Developer Fund website.

 

Build and Buy

Cisco also announced today the acquisition of search company Synata. I've been looking at this company since early last year, as they claim to help solve a problem I've been very vocal about around "social collaboration": the struggle of information and input overload. While everyone likes to complain about their overflowing mail inboxes, the reality for most people is that social tools can quickly become even more overwhelming and unmanageable than email ever was. Yes, sharing information is certainly better than it being locked away. However, with departments, teams, companies, even entire communities sharing information, finding the right people and content can quickly become a daunting task. Vendors such as Microsoft (with Delve), IBM (via Watson), Google (with Google Now) and Salesforce (via SalesforceIQ) have been focused on not just helping people "search" for information, but instead helping them discover things related to the context of the tasks they are working on. The acquisition of Synata signals Cisco's start down a similar path to helping people connect with the content they need to get their work done.

At the higher level, today's acquisition shows me that Cisco is not trying to build everything on their own. They are willing to invest in the Spark platform and acquire companies that fill gaps in the platform. They have done this in the past, with various degrees of success. For example: 

  • June 2014, Cisco acquired Kollaborate.io (Assemblage), one of the early vendors creating "digital canvases" where information from multiple tools could be "assembled" on a single screen where people could comment on and share the information
  • Dec 2013, Cisco acquired Collaborate.com, a social task management vendor. Spark is still lacking in task management features.
  • Aug 2011, Cisco acquired Versly, for Microsoft Office document creation, viewing and sharing

I hope that today's acquisition of Synata manifests itself in Spark more successfully than these acquisitions did in WebEx Social.

 

Leverage Your Base

While acquiring new customers is always the goal for software vendors, I think it is important for Cisco to focus on enhancing the collaboration experiences of their existing customers. I've been on thousands of WebEx meetings over the years. Not once have I been part of a collaborative process before the meeting, then had that information integrated during the meeting, then had the content, conversations, follow-ups, etc., from the meeting persist after the meeting ended. I hope to see Cisco use Spark to create a highly collaborative experience before, during and after WebEx meetings, perhaps merging the two one day. Why have both? I had hoped Citrix would do this with GoToMeeting and Podio, but they did not. I hoped Microsoft would do this with Yammer and Skype (Lync), but they did not. I hoped IBM would do this with Sametime and Connections, but they did not. Let's see what Cisco can do.

 

On The Right Road

In conclusion, both of today's moves show Cisco's commitment to Spark not just as a product, but as a platform for developers to build solutions that help people get work done. I applaud them on both the investment fund and the acquisition, moves that validate why I named Cisco Spark one of the 18 Products Shaping the Future of Work. Now Cisco's next step is to prove success with customer and partner case studies.


Avaya Unveils Customer Engagement Innovations


Customers are More Difficult to Serve Than Ever. Is Your Brand Ready? Avaya, a leader in contact center infrastructure for 15 consecutive years, unveiled customer engagement innovations that meet customer expectations in a market where the rules of the game have changed:
  • 90% of people move between different devices[i]
  • 52% of customers are less likely to engage with a company because of a bad mobile experience[ii]
  • 89% of companies will compete primarily based on customer experience – up significantly from the previous two years[iii]
What’s the Big Announcement? At the Enterprise Connect conference, Avaya spoke about the key innovations that redefine the customer experience battleground and keep companies ahead of the curve, based on technology that supports more than 5 million agents globally. The Avaya Customer Engagement solutions – which can be implemented through a fully hosted or hybrid cloud model to help ease the transition between existing and new technologies for digital business – deliver:
  • A flexible, robust foundation provided by Avaya Aura® and Elite contact center solutions, which continues to lead the industry now as a 100% virtualized, 100% software-based platform that eliminates the need for hardware-based media gateways to perform important call center functions.
  • Software-defined customer engagement that makes it possible to communication-enable sales and service workflows and processes through Avaya Breeze, which can significantly enhance the customers’ experience.
  • A full stack, turnkey solution in Avaya Pod Fx (formerly Collaboration Pod), providing  everything needed to run an advanced virtualized contact center (applications compute, storage, networking & management) all racked, stacked, cabled and configured to remove complexity and streamline operations.
  • Maximum evolution flexibility through the new Avaya Secure Delivery offer which provides hosted private cloud communications for security conscious organizations (US).
Notes from the Executives: Gary Barnett, SVP and GM, Avaya Engagement Solutions, said: “The competitive battleground has shifted, requiring a new type of solution and means to respond to digital customer behavior. Customer expectations today will not wait for old contact center technology to get its act together. Speed is the new currency for business transformation – businesses need to understand, predict and respond to customer needs in less time than it takes for a spark to burn out. Avaya is the only company that can rapidly elevate the customer end game without the disruption typical of massive technical change.”
 
What Are the Benefits of Avaya’s Approach?
  • Customer defined experiences for all channels and devices – traditional voice, web & mobile chat, social, email, WebRTC-enabled, one-click mobile video and calling from any device, guided co-browsing and advanced customer service applications that simply snap-in without massive technical support.
  • 360 degree customer context that makes it easy to map customer journeys across automated and assisted service channels.
  • Minimized effort on behalf of the customer or business to obtain or deliver optimal service by combining analytics and automation.
  • The ability to easily design workflows to create smart customer journeys that easily tap into enterprise CRM systems and bring other data into a single business process.
  • Unparalleled flexibility and scale for today’s multi-modal environments that allow companies to easily adjust to changing demands.
  • Simplified transition to new technology and refocus on core business advancing projects by leveraging a more secure, hosted cloud based delivery.
  •  Customer choice of deployment options, including public, private or hybrid cloud, premises-based and managed services to match business policies and objectives.

What’s Your CEO Going to Do About Customer Service? Is your company ready for the next generation of customer experiences? Customers are more fickle than ever. Who can blame them? Customer service has not been an important aspect of many brands’ initiatives – at least they didn’t want to put the money behind it – so it did not meet customers’ expectations. With customer expectations rising even faster in this always-on world, companies need their CEOs to get the message and support customer service.

Does your CEO get customer service yet?

@DrNatalie Petouhoff, VP and Principal Analyst, Constellation Research

Covering Customer-facing Applications That Drive Better Business Results


Remembering Raymond Tomlinson: Email Trailblazer


Raymond Tomlinson is noted for establishing person-to-person email as we know it. He was the one who made it possible to send email to users on other computers, choosing an obscure keyboard character, the @ symbol, to separate usernames from email hosts. Tomlinson came up with the idea in 1971 while working for Bolt Beranek and Newman, the software company that developed the Internet precursor ARPANET. He died Saturday at age 74.

In a 2012 profile in Wired, Tomlinson said: “I looked at the keyboard, and I thought: ‘What can I choose here that won’t be confused with a username?’ They were all test messages, and whatever came to hand as I put my fingers on the keyboard is what I would send.”

The innovation earned him a place in the Internet Hall of Fame in 2012. The Hall of Fame wrote when he was inducted, “Tomlinson’s email program brought about a complete revolution, fundamentally changing the way people communicate, including the way businesses, from huge corporations to tiny mom-and-pop shops, operate and the way millions of people shop, bank, and keep in touch with friends and family, whether they are across town or across oceans.”

There have been many articles claiming email is dead. But from my experience, most people in corporate America still use email. Today, tens of millions of email-enabled devices are in use every day. Email remains the most popular application, with over a billion and a half users spanning the globe and communicating across the traditional barriers of time and space.

What are your thoughts on email? Is it here to stay or not?

@DrNatalie Petouhoff, VP and Principal Analyst, Constellation Research
Covering Cloud and IoT That Drive Better Business Results and Awesome Customer Experiences


Adobe and Google Partner to Create a Better Mobile Experience


What is the mission of Adobe?  To change the world through digital experiences.  The pace of change in how consumers engage with content is accelerating. So their mission is more relevant than it has ever been before. Consumers expect to connect with content immediately no matter where they are or what they are doing. In the personal, always-connected mobile world we now live in, the inability to deliver the right experience to the right person at the right time directly impacts engagement and the consumer relationship.

Why Did Adobe Partner with Google? Adobe has partnered with Google, alongside many others in the technology and publishing industries to support the Accelerated Mobile Pages (AMP) Project and help solve the problems that are adversely affecting the mobile web experience –  namely the slow speed at which content loads and understanding audience engagement with mobile content. AMP will enable content to load instantaneously and provide a better mobile web experience for all.

What will Adobe Analytics Be Able to Measure? It will be able to measure the reach and impact of AMP experiences for publishers, while providing blazing fast speed for mobile web user experiences. For publishers, Adobe Analytics has become a fundamental part of understanding audiences, creating loyal viewers, and monetizing content, and Adobe is committed to making sure that publishers have access to the best data possible. A better mobile user experience means more browsing and content discovery for users, and quality revenue streams for publishers.  Additionally, Adobe Analytics will enable publishers to link up AMPs with their existing web data for deeper cross-channel audience understanding, and integrate this insight directly with the rest of Adobe Marketing Cloud for content and advertising optimization.

Adobe Analytics customers interested in getting started can download the How-To Guide now: Adobe Analytics for Accelerated Mobile Pages Project.

@DrNatalie Petouhoff, VP and Principal Analyst, Constellation Research, Covering Customer-Facing Applications

 


The Best Cloud Computing Companies And CEOs To Work For In 2016


Employees would most recommend Zerto, FusionOps, Google, OutSystems, AppDirect, Sumo Logic, Cloudera, HyTrust, Tableau Software and Domo to their friends looking for a cloud computing company to work for in 2016. These and other insights are from an analysis completed today to determine the best cloud computing firms and CEOs to work for this year.

To keep the rankings and analysis completely impartial and fair, the latest Computer Reseller News list, The 100 Coolest Cloud Computing Vendors Of 2016, is the basis of the rankings. Cloud computing companies are among the most competitive there are when it comes to salaries, performance and sign-on bonuses, and a myriad of perks and benefits. They are also attracting senior management teams with strong leadership skills, many of whom are striving to create distinctive company cultures. The most popular request from Forbes readers is for recommendations of the best cloud computing companies to work for, and that’s what led to this analysis.

Using the 2016 CRN list as a baseline, the table below compares the Glassdoor.com scores for the percentage of employees who would recommend the company to a friend and the percentage of employees who approve of the CEO. You can find the original data set here. Many companies on the CRN list have few or no entries on Glassdoor; they are excluded from the rankings shown below but are included in the original data set. If the image below is not visible in your browser, you can view the rankings here.
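For readers who want to reproduce the ranking approach, a rough sketch of the join is shown below; the file names and column names are placeholders and do not correspond to the original data set.

```python
# Illustrative sketch of the ranking methodology: join the CRN 2016 list with
# Glassdoor scores and sort. File and column names are placeholders only.
import pandas as pd

crn = pd.read_csv("crn_coolest_cloud_2016.csv")   # assumed columns: company
glassdoor = pd.read_csv("glassdoor_scores.csv")   # assumed columns: company,
# pct_recommend, pct_ceo_approval, num_reviews

ranked = (
    crn.merge(glassdoor, on="company", how="inner")
       # Drop companies with too few Glassdoor entries to score fairly.
       .query("num_reviews >= 20")
       .sort_values(["pct_recommend", "pct_ceo_approval"], ascending=False)
)
print(ranked[["company", "pct_recommend", "pct_ceo_approval"]].head(10))
```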


The highest rated CEOs on Glassdoor as of February 3rd, 2016 include the following:

  • Ziv Kedem, Zerto, 100%
  • Gary Meyers, FusionOps, 100%
  • Christian Chabot, Tableau Software, 100%
  • John Burton, Nintex, 100%
  • Rob Mee, Pivotal, 100%
  • Rajiv Gupta, Skyhigh Networks, 100%
  • Ken Shaw Jr., Infrascale, 100%
  • John Dillon, Engine Yard, 100%
  • Ramin Sayar, Sumo Logic, 99%
  • Sundar Pichai, Google, 98%
  • Lew Cirne, New Relic, 97%
  • Daniel Saks, AppDirect, 96%
  • James M. Whitehurst, Red Hat, 96%
  • Marc Benioff, Salesforce, 96%
  • Tom Kemp, Centrify, 95%
  • Jeremy Roche, FinancialForce, 95% 

The Privacy Shield - another blunt weapon


For many years, American businesses have enjoyed a bit of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement was negotiated by the US Department of Commerce so that companies could self-declare broad compliance with data security rules. Normally organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. The "Safe Harbor" arrangement was widely derided by privacy advocates outside the USA, and for some years had been questioned by the more activist regulators in Europe. And so it seemed inevitable when the arrangement was finally annulled last October.

With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield".

The Privacy Shield is good news for commerce of course. But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance and take the opportunity to rethink privacy, because there is more to it than security and regulatory short cuts.

The Privacy Shield and the earlier Safe Harbor arrangement are really only about satisfying one corner of European data protection laws, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries actually have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling is complying with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU.

But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency.

Basic data privacy laws the world over require organisations to exercise restraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why. It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015.

Over and above Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make things better.

For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It's well known (and perfectly acceptable) that mapping companies use the locations of unique WiFi routers for their geolocation databases. Google continuously collects WiFi IDs and coordinates via its StreetView cars. The privacy problem here was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they could find it. In a dozen countries around the world, Google admitted the breach of local laws, apologised, and deleted the collected WiFi contents. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending this practice. The strongest legislation that seems to apply is wiretap law, but its application to the Internet is complex. And so it's taken years, and the matter is still not resolved.

I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and instead of fighting it, agree that the collection of WiFi traffic was unnecessary and wrong.

Other examples of European privacy law being deeper and broader than the Privacy Shield come from social networking. Over the years, many of Facebook's business practices have been found unlawful in the EU. Recently there was the ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long-running dispute over biometric photo tagging. When Facebook generates tag suggestions, what they're doing is running facial recognition algorithms over photos in their vast store of albums, without the consent of the people in those photos. Identification of people without consent may be an unlawful collection of personal information about them.

In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since; whether they are successful or not will have nothing to do with the "Privacy Shield".

The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.


Capgemini Collaborates with Celaton on Artificial Intelligence in the Cloud


What Does the Partnership Between Capgemini and Celaton Mean to Your Company? Capgemini, a consulting, technology and outsourcing services provider, has announced a new global collaboration with Celaton, a specialist Artificial Intelligence (AI) company, to license and use its inSTREAM cognitive learning technology. The three-year contract, signed between Capgemini and Celaton, will extend Capgemini’s already strong automation capabilities, help to drive further efficiencies and add Artificial Intelligence to Capgemini’s Business Services solution portfolio.

What Does Celaton’s inSTREAM Software Do? It streamlines the handling of unstructured unpredictable (and structured) content such as correspondence, claims, complaints and invoices that organizations receive by email, social media, fax and paper. This minimizes the need for human intervention and ensures that only accurate, relevant and structured data enters business systems. Unique to inSTREAM is its ability to learn through the natural consequence of processing information and collaborating with people. Capgemini’s extensive knowledge and experience in business process services will also enable Celaton to accelerate and improve inSTREAM’s capabilities.

What Will The Partnership Provide For Clients? The cooperation will enable Capgemini to increase efficiency, shorten turnaround times and enhance quality in areas where incoming documents and queries need to be processed, improving overall customer satisfaction. At a time when more and more customers expect the use of AI and modern automation tools, the alliance will help Capgemini’s Business Services advance its market-leading use of automation and AI for its core business. Earlier this year, Capgemini introduced an Autonomic Platform-as-a-Service (PaaS) offering founded on best-of-breed technologies to deliver intelligent automation solutions on demand for enterprises. The Autonomic PaaS aims to improve the predictability of organizations’ operations across their infrastructure, applications and business processes. The Celaton agreement is a further commitment from Capgemini to develop advanced client solutions using intelligent automation, cognitive and AI technologies.

Is This Offered in a SaaS or Cloud Mode? The addition of Celaton inSTREAM expands Capgemini’s Business Services’ extensive Software-as-a-Service (SaaS) portfolio with an artificial intelligence-based processing solution for incoming unstructured content –which is driven by its global automation Centers of Excellence. It is an important element in ensuring the delivery of maximum value to its customers.

Notes From The Executives: Lee Beardmore, VP and Capgemini’s Business Services Chief Technology Officer said, “There is significant industry debate on how cognitive computing and artificial intelligence will impact the BPO market. We are taking our delivery from debate to global implementation and are proud to partner with Celaton as a leading vendor in the business process AI space. Building on the introduction of Capgemini’s Autonomic Platform-as-a-Service, Celaton’s technology extends the penetration of cognitive computing into our delivery of business process services.”

Andrew Anderson, CEO of Celaton said, “I am delighted that Celaton and Capgemini have committed to this global partnership. The transformational impact of AI has been proven with many organizations and yet this emerging technology is often greeted with scepticism. Capgemini’s global reach and credibility will have an impact on the perception and adoption of AI and I’m very excited that Capgemini’s customers will soon be able to realize its significant benefits.”

My POV: AI is very important to the emerging ability of companies to add cognitive computing to the delivery of business processes that make sense of unstructured content. And with social and digital content abounding, there is no shortage of unstructured content. There is unlimited potential in the value of this unstructured content if it can be harnessed, and this duo will give brands that opportunity.

@DrNatalie Petouhoff, VP and Principal Analyst, Constellation Research

Covering Customer-facing Applications That Drive Better Business Results


Microsoft - New Hybrid Offerings Deliver Bottomless Capacity for Today's Data Explosion


Earlier this week, Microsoft announced new hybrid capabilities in its storage and database offerings, which are remarkable in the way the products and offerings are set up and offer the existing Microsoft install base a path to the cloud.

 
 

So let’s take apart Mark Jewett’s blog post in our customary style (it can be found here). 
 
Applications and data are at the heart of how organizations drive competitive value and improve efficiency. However, this digital transformation is resulting in an explosion of data. Enterprises have to figure out how to get a handle on this data – how to increase their storage capacity and keep their data safe and secure, without drastically increasing IT costs.
MyPOV – Good point and in line with what we have been seeing and then saying for a while. All seven of the next generation application use cases that we track across the industries involved an explosion in data, resulting in challenges to both the database and storage tier.
 
Microsoft believes a hybrid cloud approach can offer unique ways to manage this data proliferation. We believe you should be able to take advantage of the best of the public cloud and the best of your on-premises technology. Hybrid solutions should enable mission critical, recent, or latency-sensitive data to remain on-premises, while backups and archival data can seamlessly move to low cost and nearly-limitless cloud storage. Applications and tools can access the data transparently, no matter where it is - so that it’s always available to you. And you can do it all without investing in new infrastructure, saving you time and money to focus on driving innovation.
MyPOV – Good description of the strategy. It offers enterprises an outlet to not expand infrastructure on premises, but, instead, to skip the investment cycles while satisfying the additional demand and load by moving it to the cloud, here Azure. This may be a win-win for customers and vendors; customers cannot necessarily afford to re-architect for cloud but may want to take advantage of new use cases that need to be automated. For vendors like Microsoft, this creates a way to grow cloud revenues substantially.

Microsoft is investing in building hybrid capabilities across our product portfolio to help you take advantage of all that hybrid has to offer, simply and cost effectively. Today, we are extending that commitment with new offerings in SQL Server 2016 and StorSimple that make it even easier for you to leverage a hybrid cloud model to put you in control of how you store and protect your applications and data.
MyPOV – Microsoft is executing a two-pronged approach. It brings its Azure technology stack to on premises with the Azure stack, and it allows customers to keep older Microsoft products operating on premises while extending them to the cloud. We see the latter here for SQL Server 2016 and StorSimple.
Leverage the infinite capacity of Azure with SQL Server Database updates 
This week we are introducing the SQL Server 2016 Release Candidate with new hybrid enhancements available in preview. These capabilities make it easier than ever for you to choose whether you store your data on-premises or in the cloud. These new features integrate hybrid capabilities into the market-leading Microsoft data platform product you use today, empowering you to leverage the cloud to extend capacity for your massive data growth, while ensuring your data is protected.
MyPOV – We have given Microsoft a hard time around SQL Server scalability for a long time (looks like the longest time), but now there is a clear path to extend capacity…
 
SQL Server 2016 with SQL Server Stretch Database service, a new Azure companion service, enables you to dynamically stretch your on-premises warm and cold data to Azure for virtually endless compute capacity and storage. Now you can keep as much data as you need in the cloud, up to 60 terabytes per database in preview, without the high costs of traditional enterprise storage. The Stretch Database service makes remote query processing possible by providing compute and storage in a way that’s completely transparent to the application. SQL Server Stretch Database also works with Always Encrypted technology, which encrypts data before sending to Azure and the encryption key remains on-premises to give you added piece of mind that your data is protected no matter where it’s stored. SQL Server 2016 with the new Stretch Database service enable you to keep more data accessible for deep insights at significantly lower cost.
MyPOV – We learn that a new ‘Azure companion service,’ SQL Server Stretch Database, is the enabler on the Azure side. Enterprises will be happy to learn that they can move data to the cloud based on application scenarios and data temperature. It’s very good to see the support of an external (on-premises) local key, addressing the security/privacy challenge in the post-Snowden/NSA/PRISM age.
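Conceptually, the ‘stretch’ pattern is data-temperature tiering combined with client-side encryption under a locally held key. The sketch below is a generic illustration of that pattern only; it is not Microsoft's implementation or the Stretch Database API, and the threshold, rows and helper functions are invented.

```python
# Generic illustration of the "stretch" idea: keep hot rows local, encrypt
# cold rows with an on-premises key, and push the ciphertext to low-cost cloud
# storage. A conceptual sketch, not SQL Server's or Azure's actual API.
import json
from datetime import datetime, timedelta
from cryptography.fernet import Fernet

LOCAL_KEY = Fernet.generate_key()        # key never leaves the premises
cipher = Fernet(LOCAL_KEY)
COLD_THRESHOLD = datetime.utcnow() - timedelta(days=365)

rows = [
    {"order_id": 1, "created": "2014-05-01T10:00:00", "total": 99.0},
    {"order_id": 2, "created": "2016-03-01T09:30:00", "total": 42.0},
]

def is_cold(row: dict) -> bool:
    """Data temperature: anything older than the threshold counts as cold."""
    return datetime.fromisoformat(row["created"]) < COLD_THRESHOLD

def push_to_cloud(ciphertext: bytes) -> None:
    """Stand-in for uploading the encrypted row to cloud storage."""
    print(f"archiving {len(ciphertext)} encrypted bytes to cloud storage")

hot, archived = [], 0
for row in rows:
    if is_cold(row):
        push_to_cloud(cipher.encrypt(json.dumps(row).encode("utf-8")))
        archived += 1
    else:
        hot.append(row)

print(f"kept {len(hot)} hot rows locally, archived {archived} cold rows")
```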
 
Another new hybrid capability available in SQL Server 2016 is support for Transactional Replication to Azure SQL Database which expands on the existing option for replicating data to SQL Server in an Azure virtual machine (VM). With this feature you can now replicate data directly to Azure SQL Database and benefit from a fully managed database. This extends the options you have to back up your data to the cloud to ensure it’s protected in worst-case scenarios. You can also migrate data from SQL Server on-premises to Azure SQL Database – providing a simple mechanism to move data to the cloud without downtime to an on-premises database.
MyPOV – SQL Server (back to the days of Sybase) has always had very good replication capabilities; it’s good to see these capabilities now put to work for moving/replicating data from on premises to Azure – there, next-generation applications can be built. Being able to go from SQL Server in an Azure VM to an Azure SQL DB is a major step and a way forward out of the (older) SQL Server code (which we have seen as an issue for many years). Giving customers a scalable cloud-based DB outlet is a major step by Microsoft and a truly good move for customers.

Simplifying hybrid storage with Azure StorSimple Virtual Array 
Azure StorSimple is another great example of how Microsoft has increased the hybrid capabilities of its products. Designed to help you increase storage capacity and data availability without investing in new infrastructure, StorSimple offers economical cloud storage or on-premises storage so you can choose where to store your data.
MyPOV – Microsoft had a precursor of the Azure stack on the storage side with the (relatively) newer StorSimple offering. Good to see that the on-premises and cloud StorSimple are now coming together.

Today we are extending the StorSimple offering with StorSimple Virtual Array, a version of StorSimple offered in a virtual machine form, now generally available. The VM enables additional scenarios, in particular environments with minimal IT infrastructure and management, for customers to take advantage of StorSimple. The virtual array is built on the success of existing StorSimple technology, which uses a hybrid cloud storage approach for on-demand capacity scaling in the cloud and cloud-based data protection and disaster recovery. The hybrid approach centers around your choice to store the most used data on the virtual array and optionally tier older data to Azure. The virtual array can be run as a virtual machine on your Hyper-V or VMware ESXi hypervisors and can be configured as a File Server (NAS) or as an iSCSI server. It also provides the ability to back up your data to Azure.
MyPOV – Wait, now it reads more like Microsoft is bringing capabilities of StorSimple from Azure to on premises via virtual machines. That makes management easier for customers who have a number of VMs to manage already. Supporting not only Hyper-V but also VMware ESXi gives customers familiar choices. So a small component of the Azure cloud stack (though Microsoft does not state it explicitly, so I am speculating a bit here) is coming to on premises.
 
Both SQL Server 2016 and StorSimple enhancements are available for you to try out today. We hope that you’ll test drive these exciting new offerings and let us know what you think.

MyPOV – Sand boxes, pilots, test drives – always a good way to get customers on board and learn about scale.
 

Overall MyPOV

We are seeing an overall industry-wide move by the established IT giants to hybrid. Last week, it was IBM partnering with VMware (see here); this week Cisco acquired CliqR (see here); and Oracle offers to move mainframe storage out of the mainframe and/or to the cloud (see here). As mentioned above, Microsoft announced the Azure stack for on premises (see here), and now important storage and database offerings are ‘stretched’ (pun intended) to the cloud.

It’s a win-win for vendors and customers: customers can avoid discretionary investment on premises and move that investment to building new (cloud-based), next-generation applications, while vendors can repurpose R&D investment and grow cloud revenue.

Closer to Microsoft’s announcement, customers need to look at the potential license and operational implications. We are sure Microsoft will get the pricing right, but the operational cost of ‘bursting’ or replicating to the cloud can be expensive, starting on the networking side.

But overall a good move by Microsoft. We will be watching for adoption and further developments – stay tuned.


More about Microsoft:
  • News Analysis - Welcoming the Xamarin team to Microsoft - read here
  • News Analysis - Microsoft announcements at Convergence Barcelona - Office 365, Dynamics CRM and Power Apps 
  • News Analysis - Microsoft expands Azure Data Lake to unleash big data productivity - Good move - time to catch up - read here
  • News Analysis - Microsoft and Salesforce Strengthen Strategic Partnership at Dreamforce 2015 - Good for joint customers - read here
  • News Analysis - NetSuite announced Cloud Alliance with Microsoft - read here
  • Event Report - Microsoft Build - Microsoft really wants to make developers' lives easier - read here
  • First Hand with Microsoft Hololens - read here
  • Event Report - Microsoft TechEd - Top 3 Enterprise takeaways - read here
  • First Take - Microsoft discovers data ambience and delivers an organic approach to in memory database - read here
  • Event Report - Microsoft Build - Azure grows and blossoms - enough for enterprises (yet)? Read here.
  • Event Report - Microsoft Build Day 1 Keynote - Top Enterprise Takeaways - read here.
  • Microsoft gets even more serious about devices - acquire Nokia - read here.
  • Microsoft does not need one new CEO - but six - read here.
  • Microsoft makes the cloud a platform play - Or: Azure and her 7 friends - read here.
  • How the Cloud can make the unlikeliest bedfellows - read here.
  • How hard is multi-channel CRM in 2013? - Read here.
  • How hard is it to install Office 365? Or: The harsh reality of customer support - read here.


Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here

 