
Dos and Don’ts in Hybrid Cloud Data Warehouse Deployment


Disaster recovery and development and testing are obvious starting points, but there are many other hybrid-cloud DW use cases as well as pitfalls to avoid.

We’ve already witnessed a seismic shift of mainstream corporate workloads into the cloud, but the movement has been slower to take off where data and analytics are concerned, and with good reason.

Most companies view data as their most valuable asset, so they’ve been more conservative about the digital treasure chest otherwise known as the data warehouse. You can argue all you want about cloud vs. on-premises security, but some businesses and, broadly speaking, some industries just aren’t going to move the bulk of their data into public clouds. In some cases regulatory or data-residency requirements make public-cloud deployment challenging. There’s also the issue of control, with some businesses facing tight service level agreements that demand performance levels that public cloud service providers won’t guarantee.


Capacity planning and software version consistency are among the key concerns in hybrid cloud data warehouse deployments.

All of the above are among the reasons some companies choose private-cloud or hybrid deployment options combining on-premises deployments with private-cloud services. One example is Core Digital Media, a marketing services firm I recently interviewed for this case study report. Core Digital handles lots of customer data, so it chose a hybrid approach combining its on-premises production system with disaster recovery (DR) running on private-cloud Teradata Database-as-a-Service.

A second Teradata customer I interviewed for the research is currently running DR and development and testing (dev-test) on Teradata DBaaS. But this company does not deal in customer data, so it’s also investigating the public-cloud Teradata Database on AWS set to debut later this month.

As I’ll detail in an in-depth Webinar set for this Thursday, DR and dev-test are typically among the first data warehousing workloads that companies move into the cloud. Other common use cases include unpredictable workloads, where you’re not sure of the road ahead and want to avoid disruption or performance impacts on your production environment. It might be new applications that have emerged from testing and development but have yet to prove their business value. It could be fast-growing or compute-intensive workloads that you didn’t foresee in long-range capacity planning. Or it could be spiky workloads that occasionally impact production performance.

Another hybrid use case is using cloud services to handle unique analysis requirements. One of the Teradata customers I spoke to, for example, periodically does data-discovery querying against high-scale, historical data. This querying can impact the performance of their production system, so they’re considering copying data from their Teradata Cloud DR instance into Teradata Cloud for Hadoop for discovery analysis.

No matter what database management system you’re using and whether you’re considering public- or private-cloud database services, register for this week’s Webinar (Thursday at 1 pm ET/10 am PT, but also available on demand) to hear about hybrid deployment use cases in more detail. I’ll also share advice on pitfalls to avoid, such as lack of familiarity with cloud capacities and performance characteristics and the related mistake of over- or under-provisioning. Joining me will be Dominique Jean of Core Digital Media, who will offer a first-hand account of hybrid-deployment dos and don’ts.



IoT: where two, or even three, possibly four, worlds collide – or Operational Technology meets Information Technology


The title originates from a chapter heading, ‘IoT; Two Worlds collide’, in a recent Telefonica report on IoT security, drawing attention to the fact that the technology vendors, as well as enterprise staff, come from two different backgrounds with very little in common. In reality it’s perhaps slightly worse, as there are three distinctly different functional zones in a mature IoT architecture, together with connectivity split between mobility and more conventional wired, or wireless, networking.

The technology market, and in-house enterprise expertise, is split between Information Technology, operating the business administration, and Industrial Automation, long-time users of sensing technology, supporting Operational Technology. Each further subdivides its skills; as an example, there are separate IT groups for Web/Services, Cloud, and Mobility technology specializations.

The recent Mobile World Congress in Europe devoted considerable session time and exhibition space to IoT, in contrast to its traditional focus on mobile telephony. Of course, in reality a smartphone is an IoT device, as many exhibition features were keen to demonstrate, along with wearable technology and other more specialized 4G IoT sensors and devices. One of the most compelling fully integrated IoT business-value demonstrations showed farm management automated around IoT-tagged cows.

Many are surprised to learn that precision farming, as it is called (see the Constellation blog IoT Market in 2016), continues to be a showcase for business, and sector, transformation through the adoption of IoT. Why doesn’t this register? Almost certainly it is due to the lack of anything in common that would bring it to the attention of IT, and OT, practitioners. Farms are neither office nor factory based, and they have moving machinery and animals at the center of operations. Accordingly, precision farming has been driven by telecom vendors providing a mobility-connected architecture.

Equally, those engaged with precision farming would find the industrial vendors’ approach to Machine-to-Machine IoT on the factory floor, or in buildings, equally incomprehensible. However, the real problem is the extent to which it is also incomprehensible to IT practitioners. The converse charge can be made in respect of OT practitioners grasping the principles of IT architecture.

The diagram below is at the heart of understanding the title and the opening comment concerning multiple worlds colliding. The left- and right-hand sides are pretty clearly delineated by current technology vendors’ positioning and products, but it’s the center that is the new zone where much of the new business value around Smart Services will be created.

It’s not the possession of IoT data that creates value; it’s the business-valuable action, or outcome, that data produces that matters. In each of the three zones in the diagram the type of action and its value is different. An enterprise can benefit from specific business value delivered in each of the three zones, but a real ‘transformation’ requires integration across all three.

Industrial Automation companies, focused on the left-hand side of the diagram, include the suppliers of the heating, cooling, lighting, and the mass of utility equipment that is built into a modern multi-story office building. The sensors already deployed currently number in the hundreds per building, but that count is rising fast as cheap battery-powered wireless sensors are added in increasing numbers.

In an example of a new-build ‘smart’ forty-floor office in London, the planning expects in excess of 20,000 building sensors producing more than 3 petabytes of data annually. The IT community will see this as the ultimate requirement for Big Data analytical tools, but can they really handle 20,000 individual inputs of a few kilobits each when more than 75% of the traffic will merely confirm that the status quo is maintained?
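To put those figures in perspective, here is a quick back-of-envelope unpacking of what they imply per sensor. This is pure arithmetic on the numbers quoted above, assuming decimal petabytes and an even spread across sensors – neither of which the article claims:

```python
# Unpacking the quoted figures: 3 PB/year across 20,000 sensors.
SENSORS = 20_000
BYTES_PER_YEAR = 3 * 10**15          # assuming 1 PB = 10**15 bytes
SECONDS_PER_YEAR = 365 * 24 * 3600

per_sensor_year = BYTES_PER_YEAR / SENSORS            # bytes per sensor per year
per_sensor_rate = per_sensor_year / SECONDS_PER_YEAR  # sustained bytes per second

print(f"{per_sensor_year / 1e9:.0f} GB per sensor per year")     # ~150 GB
print(f"{per_sensor_rate / 1e3:.1f} kB/s sustained per sensor")  # ~4.8 kB/s
```

On those assumptions each sensor sustains a few kilobytes per second of tiny packets – precisely the steady, low-value traffic profile that conventional Big Data tooling was never designed around.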

The Industrial Automation vendors working with the OT community see a very different picture, with those data flows being used to trigger ‘reflex actions’, such as increasing a selected heating output in reaction to a developing ‘cold spot’. Even more important would be a reaction to a fire alarm: releasing fire doors, setting off sprinklers, and shutting off power.

This mass of low-value building sensors will be interconnected by low-capacity, low-power grid networks, such as ZigBee, using a master node to distribute the processing tasks. In short there is little, maybe nothing, that relates to the network, processing, or even data model of IT systems. This is a pure Machine-to-Machine, or M2M, environment. However, as machines can’t repair themselves (though ‘smart’ automation can limit the impact by bypassing failed equipment), there is a need to connect events with ‘Services’ that can initiate engineer visits and repairs.
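To make the ‘reflex action’ idea concrete, below is a minimal sketch of the logic a grid-network master node might run. Every name in it – the threshold, the sensor kinds, the actuator interface – is a hypothetical illustration, not any vendor’s product:

```python
# Sketch of a master-node "reflex action" loop for a building grid network.
# Thresholds, sensor kinds, and the actuator interface are all hypothetical.

COLD_SPOT_THRESHOLD_C = 16.0  # assumed comfort floor for a zone

def on_sensor_report(sensor_id: str, kind: str, value: float, actuators) -> bool:
    """Handle one tiny sensor packet locally.

    Returns True if the event is worth forwarding upstream; status-quo
    readings (the ~75% of traffic) are dropped here, inside the grid.
    """
    if kind == "fire_alarm" and value > 0:
        # Safety reflex: release fire doors, start sprinklers, cut power.
        actuators.release_fire_doors()
        actuators.start_sprinklers()
        actuators.shut_off_power()
        return True  # safety events always escalate
    if kind == "temperature" and value < COLD_SPOT_THRESHOLD_C:
        # Comfort reflex: raise heating output for the developing cold spot.
        actuators.increase_heating(zone=sensor_id)
        return True
    return False  # status quo confirmed: act on nothing, forward nothing
```

The division of labor is the point: reflexes fire locally in near real time, and the bulk of the confirmatory traffic never reaches an enterprise network.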

Cloud based Services that support, empower and improve the efficiency of people are the prime functionality and business value delivered by the middle of the three zones. These should not be confused with the current Building Management and Service Engineering applications already provided by IT.

Building management is a very ‘hot’ market for IoT currently; immediately recognizable is the interest around energy consumption, with increasingly expensive and regulated ‘green’ energy. Yet energy is typically only 2 to 3% of overall building management costs, against machinery maintenance running at around 8 to 10%. The biggest reward lies in shifting to preventative maintenance: using IoT sensing to track individual equipment’s operating efficiency to decide when, and what type of, optimized action is required, rather than recording planned time-based processes in the traditional manner.

Salesforce is recognized for its longtime focus on cloud-based ‘Services’ that make customer-facing people more responsive to real-time activities, and that includes service engineers. Over the last year Salesforce has supported IoT sensor event data inputs to its Services, and in the last month SAP has introduced a preventative engineering capability linked to its SAP IoT initiative. Exactly how integration links these capabilities, and those of the final zone of traditional IT, with IoT sensors and grid networks has been examined in detail in previous blogs, most recently in one on the importance of ‘Final Mile’ architecture in pilots.

Both the Industrial Automation and Service Management zones share the common trait that the time to read and respond with a successful outcome should be as near ‘real time’ as possible. Indeed, ‘timely optimization’ is the critical element in creating the business value. This is in contrast to traditional IT, which is largely based on recording what has happened and analyzing historic data to find value in identifying trends that have already happened.

The integration with the third zone shown on the right-hand side of the diagram, that of traditional IT, is made by methods familiar to IT practitioners around data. As the data formats and protocols are often different, API engines are frequently required. An important point is that this is consolidated data, collected and collated from the action outcomes made in the first and second zones. Analyzing, over a period, the events that led to ‘outcomes’, while ignoring null reports, produces management reporting, as sketched below. Similarly, service contract actions can be captured and recorded within existing applications.
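A minimal sketch of that consolidation step might look as follows; the event fields and action names are assumptions made for illustration, not any particular product’s schema:

```python
# Consolidating zone-one and zone-two action outcomes for the IT zone:
# discard null/"status quo" reports, count the rest into a summary record.
from collections import Counter
from typing import Iterable

def consolidate(events: Iterable[dict]) -> dict:
    """Reduce a period's raw IoT events to a small management-report row."""
    outcomes = Counter()
    for event in events:
        if event.get("action") is None:  # null report: nothing happened, ignore
            continue
        outcomes[event["action"]] += 1   # e.g. "heating_increased"
    return dict(outcomes)

# Example period: three events, one of them a status-quo null report.
print(consolidate([
    {"sensor": "t-101", "action": "heating_increased"},
    {"sensor": "t-102", "action": None},
    {"sensor": "f-007", "action": "fire_doors_released"},
]))  # -> {'heating_increased': 1, 'fire_doors_released': 1}
```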

Though OT is equally failing to appreciate the implications of applying analytics at scale to operational Industrial Automation systems, the issues facing IT, which is expected to be at the center of an enterprise’s use of technology, are more concerning.

Few IT practitioners appreciate the massive numbers of small sensing devices currently being deployed, nor the use of new types of networks and protocols rather than the IP-based networks that IT understands. Add to this the overwhelming volume of extremely small data packets, containing no contextual information such as location or type of sensor, that depart from the expected ‘Big Data’ models.

Unimagined numbers of IoT devices flooding networks with minuscule data packets carrying no contextual data simply do not suit current big data analytical tools, nor will the traffic be welcomed on critical enterprise networks. Neither can processing be carried out in time frames that address the key IoT sensing business benefit of optimized real-time events and outcomes. None of these challenges is any more insurmountable than those of past generations of technology innovation waves, but they do require more recognition and understanding of adding and integrating IoT and IoT Smart Services within the enterprise.

IT should be leading the path towards a new enterprise architecture that will again unite new business and technology capabilities, just as the advent of client-server, Web, and mobility technologies imposed similar changes in the past.

It’s time to get beyond the current myopic views of the different elements, or zones, of IoT and start to form enterprise-wide working parties to pilot around proven, business-beneficial requirements in an integrated manner!


Ultimate UltiConnect - Product, Platform and Services Innovations


We had the opportunity to attend Ultimate Software’s user conference UltiConnect in Las Vegas, held at the beautiful Bellagio resort. The conference was well attended with over 2000 attendees, a record number for Ultimate. 

 
 
Always tough to pick the top 3 takeaways – but take a look at the video. No time to watch? Then keep on reading:

Product Innovations – Take a look at my Day #1 keynote takeaways (here); the two that hit the mark with me were:
  • Pay Insights – Questions about correct or incorrect paychecks have always been a drag on employee performance and payroll manager efficiency. Tackling the problem with better explanations and more interaction, as Ultimate plans to do for the rest of 2016, is a welcome approach. As the keynote noted, this was inspired (amongst others) by marquee customer Google, but I am sure it will be well received across the Ultimate customer base. And it is one of the areas where we think there is room for innovation in the payroll space.
      
  • Leadership Actions – Helping people leaders with advice, training, and actions is probably a plan that will lead to good outcomes. Ultimate plans to provide people leaders with a set of actions they may want to take to become better people leaders, to get the people they lead more engaged, to solve certain people challenges, and so on. Almost naturally this cannot work with a preset bundle of actions, so it is good to see that Ultimate plans to roll out the feature with configurability from the first version.

Analytical Applications Momentum – Ultimate is one of the many vendors that have had analytics on their product roadmap for the last 24 months; the good news is that Ultimate provides ‘real’ analytics (those that take an action or make a recommendation – more here) and has made four analytical applications available over the last 12 months. With uptake by over 600 customers (out of a 3000+ install base) of the earliest analytical application, focused on retention risk, Ultimate probably has the largest customer-base adoption for a ‘true’ analytical application. Good to see, though I would have liked to see more true analytics innovation; on the flipside, customer adoption takes time. We will be watching the next quarters.

Platform Innovation – Every vendor that has been around for 5+ years runs into platform innovation challenges. So it is no surprise that Ultimate, in its 26th year, has an older and a newer platform. The newer platform is the architecture on which the new Recruiting and Onboarding applications have been built; the older (more Microsoft-centric) platform runs the rest of the products. Ultimate has shared that it has revised its go-to platform recently, and the direction is towards Pivotal’s CloudFoundry PaaS running on OpenStack. Certainly a good update in direction, but time was lost in the process, and it is time for Ultimate to build more product and move older products to the new platform. One popular approach to tackle such a transition is to expose APIs for integration purposes, and that’s the mission of the new Ultipro Connect product. It is scheduled for 2017, so quite some way out, but interaction with customers is positive. So there is a lot of new platform coming to Ultimate product development at the moment; 2016 will be key to see its validation and uptake in the customer base, more likely even 2017.


 

Analyst Tidbits

Ecosystem – The Ultimate ecosystem is doing well and growing fast – what started with 6 partners in 2013 now stands at 96 partners – and it reads like a who’s who of enterprise software, with a large chunk of HCM players. Almost 50 exhibited at UltiConnect, and from my random booth checks all were very happy to be there. It’s good to see that Ultimate was able to create an ecosystem in a short time, creating options and value for customers.

NetSuite – A year ago both Ultimate and NetSuite surprised us with a partnership. A year later both vendors report back and state they could not be happier. The number of joint customers is a good sign of that. The integration is advanced to the point that the Ultimate employee master replaces the employee master in NetSuite when Ultimate is present; a single employee master in regard to CRUD operations is supported. Good to see a working partnership, and given that NetSuite has not changed its HCM partner strategy (after 3 pivots in about a year), it is also a testament that something is working well. Joint customers are happy with the progress of the partnership, as we found out.

Usability concerns – At the last two UltiConnects Ultimate was able to show substantial UI innovation and new user interface paradigms. The newer UI is implemented well in Recruiting, Onboarding, and some new functionality, but the older user interface on the more Microsoft-centric platform is showing its age. Ultimate will need to do some work here, better sooner than later.

Services Innovation – One of the key announcements of the Day #1 keynote was a new ‘tierless’ support model. In more detailed briefings we learnt it is really about higher support levels ‘swarming’ first-level support calls, with the goal of driving faster call resolution. A good move that should get answers to customers faster and, more importantly, bring first-line service representatives up to speed faster. But Ultimate is investing on the self-service side, too – with a new community offering where customers can help customers and Ultimate only comes in when needed. Both are good moves to make a customer community successful on the support side.

On the implementation services side, Ultimate has also been offering more services, e.g. for change management and around the new analytical product offerings. Good moves that should help customers achieve what ultimately matters most for both customers and vendors – customer success with the vendor’s solution.

MyPOV

A good UltiConnect conference for Ultimate customers. The vendor is growing and doing well, and that positive momentum is always felt at a user conference. And Ultimate knows how to appreciate its customers, who by themselves are passionate about culture, product, and vendor. Anyone doubting that should make it to McCarran on the Friday after an UltiConnect conference and see how many customers are travelling back in Ultimate conference shirts. A move encouraged by the vendor, but also easily endorsed by attendees, whose acceptance of vendor and culture is getting close to cult-like.

It’s equally good to see that Ultimate is innovating in product, returning to the core of what most users leverage, payroll; it is good to see investment here with Payroll Insights. And on the ‘real’ analytics side Ultimate probably sees the largest uptake of its new analytical offerings across vendors, with over 600 customers using the retention predictor. That’s remarkable, as the first wave of adoption of analytical applications is washing along the beach of reality checks and day-to-day proof of value.

On the concern side, the vendor needs to accelerate its move to its newer platform(s) to create a common user experience and product set. With CloudFoundry as the PaaS platform and OpenStack for IaaS services, Ultimate is more likely than not heading in a good direction. With global expansion underway, Ultimate will also have to expand beyond North American data centers. And the user experience of the older applications, built on the Microsoft stack, needs improvement.

But overall a good UltiConnect user conference for Ultimate customers and the vendor, showing traction, innovation, and strong indicators of being on track to reach the ‘legend’ (as CEO Scherr puts it) of one billion in revenue in the near future – as it looks, by 2018.



More on Ultimate:
  • First Take - Top 3 Takeaways from Ultimate’s UltiConnect Day #1 Keynote - read here
  • Event Report - Ultimate Software Connection - People first and an exciting roadmap ahead - read here
  • First Take - Ultimate Software UltiConnect Day #1 Keynote - read here
  • Event Report - Ultimate's UltiConnect - Off to a great start, but the road(map) is long - read here.
  • First Take – 3 Key Takeaways from Ultimate’s UltiConnect Conference Day 1 keynote – read here.

Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here


Hewlett Packard Enterprise Announces New Machine Learning as a Service Offering


Earlier today, HPE announced the availability of a number of Haven capabilities on the cloud (more specifically Microsoft Azure). Given the change in cloud strategy, the partnership with Microsoft for public cloud, and the need of enterprises to build next-generation applications, it’s time to check in on what is happening at HPE in general and Haven in particular.

 
 


So let’s dissect the press release in our customary style – it can be found here:
PALO ALTO, Calif., March 10, 2016 – Hewlett Packard Enterprise (NYSE: HPE) today announced the immediate commercial availability of HPE Haven OnDemand, an innovative cloud platform that provides advanced machine learning APIs and services that enable developers, startups and enterprises to build data-rich mobile and enterprise applications.
Delivered as a service on Microsoft® Azure, HPE Haven OnDemand provides more than 60 APIs and services that deliver deep learning analytics on a wide range of data, including text, audio, image, social, web and video.

MyPOV – Sums up well what is being announced: basically, Haven capabilities are being moved to the cloud, more specifically to Microsoft Azure. Not only good to see HPE leverage software product assets, but also good to see that the announced partnership between HPE and Microsoft has led to real deliverables.
 
HPE first pioneered this effort in December 2014 with the beta launch of HPE Haven OnDemand. Today, HPE Haven OnDemand has more than 12,750 registered developers who currently generate millions of API calls per week, and have provided feedback to improve and refine the offering.

MyPOV – Good to know where the software was originally tested – likely at the time still with plans for HP Helion (now defunct). Kudos to HPE for sharing the number of registered developers on the platform, something not all platform offerings do – but should.
 
“The software industry is on the cusp of a new era of breakthroughs, driven by machine learning that will power data-driven applications across all facets of life,” said Colin Mahony (@cpmahony), Senior Vice President and General Manager, HPE Big Data, Hewlett Packard Enterprise. “HPE Haven OnDemand democratizes big data by bringing the power of machine learning, traditionally reserved for high-end, highly trained data scientists, to the mainstream developer community. Now, anyone can leverage our easy to use cloud-based service to harness the rich variety of data available today to build applications that produce new insights, differentiate businesses, delight customers and deliver a competitive advantage.”

MyPOV – Good quote from Mahony focusing on what we agree is the largest driver of next-generation applications – the need for Big Data based applications that enable ‘true’ analytics (more here) and machine learning, running in the cloud.
 
HPE offers a flexible approach that starts as a freemium service, enabling development and testing for free, and extends to a usage and SLA-based commercial pricing model for enterprise class delivery to support production deployments. Some of the capabilities offered by HPE Haven OnDemand include:

MyPOV – Good to see the ‘try/buy’ approach with no cost at entry, though that has quickly become the de facto standard for new developer offerings.
 
  • Advanced Text Analysis – extracts the key meaning from language by employing powerful concept extraction capabilities that go beyond traditional approaches to obtain key concepts, entities and sentiment from text sources.
  • Format Conversion – provides key functions to access, extract and convert information wherever it lives by supporting an extensive set of standard file formats and the ability to employ optical character recognition to extract text from an image.
  • HPE Haven Search OnDemand enterprise-search-as-a-service – delivers powerful cultivated search across on-premise or cloud data to deliver superior context-sensitive search results.
  • Image Recognition and Face Detection – enables applications to detect specific image features and code around human-centric use cases to identify the gender of an individual or key information such as a brand logo from within an image.
  • Knowledge Graph Analysis – automatically delivers insights and predictions related to relationships and behavioral patterns among people, places and things. These capabilities are very useful for analyzing social media and related data.
  • Predict and Recommend – enables developers to view patterns in business data to optimize business performance and build a set of self-learning functions that analyze, predict and alert based on structured datasets.
  • Speech Recognition – employs advanced neural network technology to transcribe speech to text from video or audio files with support for over 50 languages. […]

MyPOV – A powerful set of services to build next-generation applications. We see interactions with the ‘real’ world around facial, image, and speech recognition as very powerful drivers for next-generation applications, as the sheer data and compute demands require new applications to be created, usually on a new platform, mostly in the cloud.
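For a sense of how such services are typically consumed, here is a hedged sketch of calling the sentiment-analysis service over HTTP from Python. The endpoint shape follows the sync-API pattern HPE documented at the time (api.havenondemand.com/1/api/sync/<api>/v1), but treat the exact URL, parameter names, and response fields as assumptions to be checked against the developer portal:

```python
# Illustrative call to a Haven OnDemand text-analysis API; endpoint and
# parameter names are assumptions based on HPE's published sync-API pattern.
import requests

API_KEY = "your-haven-ondemand-key"  # hypothetical placeholder

def analyze_sentiment(text: str) -> dict:
    resp = requests.get(
        "https://api.havenondemand.com/1/api/sync/analyzesentiment/v1",
        params={"apikey": API_KEY, "text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # aggregate sentiment plus per-entity detail (assumed shape)

print(analyze_sentiment("The new release delighted our customers."))
```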

 
Strong Momentum Spanning Startups to Global Enterprises
HPE Haven OnDemand’s easy-to-use and proven service is generating strong appeal with independent developers, startups and global enterprises. HPE has fostered a global community of developers that use HPE Haven OnDemand through an active global hackathon program and comprehensive resources, docs, tutorials, code libraries and quick-start materials. This enthusiastic developer community has provided vital feedback to help HPE optimize the offering, and has leveraged HPE Haven OnDemand to create hundreds of innovative applications. A few examples include:

  • Ayni – a startup that won the Hack4Europe 2015 challenge created an app for facilitating cultural exchange and foreign language education using HPE Haven OnDemand. The app uses HPE Haven OnDemand’s speech recognition API to create text transcripts of live audio streams.
  • Blink – a “speed dating” mobile app startup, Blink, connects people in real time, enabling live-stream video chats. The app leverages HPE Haven OnDemand face detection and image recognition APIs to enable a more human dating app experience.
  • RingDNA – an enterprise provider of advanced inside sales is using HPE Haven OnDemand machine learning APIs to power part of their “conversation analytics” capabilities. HPE Haven OnDemand allowed RingDNA’s developers to get up and running and explore a wide number of recipes and algorithms that were flexible and powerful enough for our enterprise customers.
  • Social Capital – AngelHack Global Demo Day 2015 – San Francisco city winner and startup that created an app to provide human resources social assessments using the HPE Haven OnDemand Entity Extraction and Concept Extraction APIs.
  • Transparent – a developer participating in the 2015 World Bank hackathon challenge created the Transparent app to understand and visualize government spending in Africa using the HPE Haven OnDemand OCR API to analyze and extract insights.
MyPOV – Good to see that HPE has understood the importance of communities for developer success. And hackathons are still popular to get developers motivated and creative. But most important is seeing real-world uptake of Haven capabilities across startups and ISVs like RingDNA.
 
Available Globally via the Microsoft Azure Public Cloud
All HPE Haven OnDemand APIs and Services are hosted on Microsoft Azure, leveraging the Hewlett Packard Enterprise and Microsoft strategic alliance around Azure, announced in December 2015. An industry leading public cloud platform, Azure ensures that developers building applications can benefit from easy access to HPE Haven OnDemand’s APIs and services with high performance and reliability from virtually any global location.
 

MyPOV – Good to see the December 2015 partnership announcement between Microsoft and HPE already showing deliverables. Haven needed a public cloud to build the targeted use cases, and with the old HP Helion no longer around, Azure is a good platform for next-generation applications. TCO is not prohibitive and data center locations are competitive compared to other IaaS options.

 
“Organizations have massive quantities of information that can hold insights into business transformation, but harnessing it can be challenging,” said Garth Fort, General Manager, Partner and Channel Marketing, Cloud and Enterprise, Microsoft. “Leveraging the high performance and scalability of Azure, HPE Haven OnDemand brings our mutual customers a compelling solution to help turn their data into value.” 
MyPOV – Good to see the main partner quote here. As Microsoft signs more of these partnerships, it will be interesting to see – and key for customers to understand – the financials and costs behind these offerings, as what partners offer on Azure (here today, Haven) competes with other Microsoft offerings. On the flipside, we know HPE had (and has) choices in regard to IaaS partners and, as a long-term enterprise software player, has certainly thought this through and secured favorable terms for Haven customers.
 
Additional Information
HPE Haven OnDemand is immediately available worldwide. More information on HPE Haven OnDemand is available here. To read a blog post on HPE Haven OnDemand, click here. […]
MyPOV – In line with long HP tradition, HPE announces products when they are ready and available – good to see.

 

Overall MyPOV

This is a key announcement and a milestone for the new HP for enterprises, HPE. As HPE adjusted its public cloud strategy and pivoted away from the Helion public cloud offering, it had software assets like Haven (remember, it was HAVEn at some point – see below) that needed a public cloud platform. Hence the partnership with Microsoft announced in December of last year. And with Azure, the new public cloud home for Haven is certainly an attractive platform, one that the ‘new’ Microsoft is working hard (and with some surprising moves) to make even more attractive.

On the concern side, HPE will have to work hard to show the value of using Haven. There are many other offerings for building the same breed of next-generation applications on the same and other public cloud platforms, so the value proposition for enterprises and developers needs to be clear. That this is possible can be seen, e.g., in the successful Heroku-on-AWS offering. Closer to HPE, the revenue potential and growth need to be understood – but we will see that over the next quarters.

But for now congratulations to HPE for putting Haven on Azure; good to see a forward strategy for the Haven software offering. Now it’s time to look at roadmap and commercial performance going forward.


More about HP
 
  • News Analysis - Updates to HP Helion Portfolio - a commentary - read here
  • Market Move - HP acquires Aruba - It's about Wifi - not the Caribbean - read here
  • News Analysis - HP acquired Eucalyptus - Genius or Panic on Page Mill road? Read here
  • News Analysis - Today's Billion in Cloud Investment is HP's and goes to Helion - read here
  • A tale of two cloud GAs - Google & HP - read here
  • The cloud is growing up - 3 signs from the news - read here
  • To HAVEn and have not - or: HP Bundles away - read here

And more about Microsoft:
  • News Analysis - Microsoft - New Hybrid Offerings Deliver Bottomless Capacity for Today's Data Explosion - read here
  • News Analysis - Welcoming the Xamarin team to Microsoft - read here
  • News Analysis - Microsoft announcements at Convergence Barcelona - Office365, Dynamics CRM and Power Apps
  • News Analysis - Microsoft expands Azure Data Lake to unleash big data productivity - Good move - time to catch up - read here
  • News Analysis - Microsoft and Salesforce Strengthen Strategic Partnership at Dreamforce 2015 - Good for joint customers - read here
  • News Analysis - NetSuite announced Cloud Alliance with Microsoft - read here
  • Event Report - Microsoft Build - Microsoft really wants to make developers' lives easier - read here
  • First Hand with Microsoft Hololens - read here
  • Event Report - Microsoft TechEd - Top 3 Enterprise takeaways - read here
  • First Take - Microsoft discovers data ambience and delivers an organic approach to in memory database - read here
  • Event Report - Microsoft Build - Azure grows and blossoms - enough for enterprises (yet)? Read here.
  • Event Report - Microsoft Build Day 1 Keynote - Top Enterprise Takeaways - read here.
  • Microsoft gets even more serious about devices - acquire Nokia - read here.
  • Microsoft does not need one new CEO - but six - read here.
  • Microsoft makes the cloud a platform play - Or: Azure and her 7 friends - read here.
  • How the Cloud can make the unlikeliest bedfellows - read here.
  • How hard is multi-channel CRM in 2013? - Read here.
  • How hard is it to install Office 365? Or: The harsh reality of customer support - read here.

Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here

Workday Delivers Payroll in France


Today Workday announced the availability of its French payroll capability, available on March 12th, 2016. This brings to an end the long-ago-announced roadmap for payroll support beyond the USA and Canada, with the delivery of UK payroll last summer and French payroll right now.
 
 
So let’s comment on the press release in our customary style; it can be found here:
 
PLEASANTON, Calif. and PARIS — March 10, 2016 — Workday, Inc. (NYSE: WDAY), a leader in enterprise cloud applications for finance and human resources, today announced the availability of Workday Payroll for France, a new application that enables organizations with employees in France to streamline the payroll process and address the full spectrum of enterprise payroll needs. Workday Payroll for France was built as part of Workday Financial Management and Workday Human Capital Management (HCM) to facilitate faster financial reporting, improve compliance control, and provide a more comprehensive view of global and local labor costs. 
MyPOV – Describes well what is being announced, including the benefits. Customers want native vendor support for payroll, as they know from experience that fewer things usually go wrong given the challenging legislative environment for payroll. The long-term Workday observer will also notice the double mention of Finance, the more junior area of automation at Workday, which is getting a lot of marketing attention these days.
 
Payroll has traditionally been challenging for global organizations due to complex regulatory requirements pertaining to each country and a lack of real-time insights on global payroll actuals. Building on the success of Workday Payroll for the U.S., Canada, and UK, Workday Payroll for France helps alleviate these pain points and equips customers with the flexibility, control, and insight required to support the unique aspects of their organizations. 
MyPOV – Again a description of the challenges of payroll, but interestingly it stresses the global perspective on payroll. There is of course a pure French perspective on a local payroll, too. Workday is walking a fine line with native/own payroll support for the USA, Canada, UK, and now France – versus the ‘rest of the world’, which it supports through a partnership with payroll giant ADP (read the news analysis here). Customers may well ask for more native payroll support for further countries.
 
Customers using Workday Payroll for France benefit from: 
• Support for France-specific and Regional Legislative Requirements – For example, Workday Payroll for France automatically runs the newly-initiated Déclaration Sociale Nominative (DSN) process, making it simpler for organizations to replace a wide range of reports with a single statement. The application also supports the Single Euro Payments Area (SEPA) payment-integration initiative of the European Union.  
MyPOV – Good to see support for French legislative requirements. The nature of payroll requires that you support it all – including the latest developments, e.g. the support for DSN.
 
• Real-time Analytics and Reporting – Organizations can now see what they are actually spending on workers via pre-built reports and analytics. In addition, the new application features a unique dashboard that enables customers to quickly identify and proactively manage high-impact compliance changes directly affecting their employee populations. 
MyPOV – Always good to show customers what is going on in payroll. Getting pro-active on compliance changes will be appreciated functionality for both payroll managers and employees. And who does not love dashboards?
  
• Automatic Tax Updates – With a cloud delivery model, new tax updates are automatically applied, eliminating the need for upgrades and patches required by on-premise payroll systems. 
MyPOV – Good to read and hear, but table stakes for a cloud-based payroll system.
 
• Powerful and Flexible Calculation Tool – Workday’s robust calculation engine makes it easy for payroll administrators to handle complex requirements and run calculations as often as needed, with faster payroll-processing time.
MyPOV – Good to hear about the performance of the Workday calculation engine; it seems to be performing well, but it would be good to see specifics.
 
• High Configurability – Payroll administrators can easily configure unlimited earnings, deductions, pay groups, and pay frequencies to support calculation and reporting needs.  
MyPOV – Always good to see configurability, but that is what customers expect from a modern payroll system.
 
 
• Mobile Access – With one self-service application, employees can check pay slips and payment elections, and administrators have the ability to process payroll – anywhere, anytime. 
MyPOV – Good to see mobile self-service supported, as mobile is the most popular platform for today’s employees to access paycheck information. That rounds up a pretty functional payroll release, which goes beyond the bread-and-butter requirements for a payroll system. Good to see.

 
 
Comments on the News – “Organizations in France require a modern application to simplify payroll processing and keep pace with recent legislation and a constantly-changing workforce,” said Barbry McGann, vice president, payroll and time products, Workday. “Workday Payroll for France unifies payroll and HR in the cloud, offering customers the control, flexibility, and insight required to gain complete visibility across the business and workforce to help fuel growth.”
MyPOV – Good quote from McGann that sums up well what Workday delivers with French payroll.
 
Availability – Workday Payroll for France will be generally available on March 12, 2016. […]
MyPOV – Good to see imminent general availability of the new French payroll.

 
 

Overall MyPOV

Always good to see software vendors deliver product at the announced milestones, and congrats to Workday for delivering French payroll as announced over 2 years ago. That seemed a long way out, but time flies, and French payroll is now available both for global customers with French employee populations and for French customers of Workday. It looks like the first version is a robust, functional release, with the mobile capabilities even going beyond a typical V1 of a payroll product.
 
On the concern side, Workday will have to explain why it supports four countries’ payrolls natively in the product and no others, relying on the partnership with ADP instead. In recent meetings we joked that Workday only supports payroll in countries with red, white, and blue flags. The future will tell if Workday can keep it at four native payrolls in the Workday product or if it will need to support more; we see Germany and Japan as likely next candidates, but that’s our speculation and guess right now.

But for now, congrats to Workday for delivering native payroll to French customers and to global customers who want or need a native French payroll.

 
More on Workday
  • Progress Report - Workday Tech Summit - Good Progress, More Insights, Less Concerns - read here
  • News Analysis - Workday and ADP partner to Deliver a Seamless Customer Experience for Global Payroll - read here
  • Event Report - Workday Rising - Learning is there and good housekeeping - read here
  • News Analysis - Workday completes Talent Management with Learning - Finally - or too late? Read here.
  • Event Preview - What I would like Workday to address this Rising read here
  • News Analysis – Workday to Expand Suite of Applications for Healthcare Industry - with a SCM twist - read here
  • News Analysis - Workday supports UK Payroll - now speaks (British English) Payroll  - read here
  • Workday 24 - 'True' Analytics, a Vertical and more - now needs customer proof points - read here
  • First Take - Top 3 Takeaways from of Workday Rising Day 1 Keynote - The dawn of the analytics era - time to deliver Insight Apps - read here
  • Progress Report - Workday supports more cloud standard - but work remains - read here
  • Workday 22 - Recruiting and rich Workday 22 are here - read here
  • First Take - Why Workday acquired Identified - (real) Analytics matter - read here
  • Workday Update 21 - All about the user experience and some more - read here
  • Workday Update 20 - Mostly a technology release - read here
  • Takeaways from the Salesforce.com and Workday partnership - read here
  • Workday powers on - adds more to its plate - read here
  • What I would like Workday to address this Rising - read here
  • Workday Update 19 - you need to slow down to hurry up - read here
  • I am worried about... Workday - read here
 
 
Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here

DialogTech launches SourceTrak 3.0, reinvents call tracking for digital marketers


New Call Tracking for Digital Marketing – Well, if you thought the only people who cared about phone calls to brands were people in the contact center, then you may be surprised by DialogTech’s new solution. Actually, it’s not that surprising to me – considering a good 30-50% of calls to a contact center are marketing related – like “which size would be better for me?”, “what are the measurements of the box?”, or “is the delivery date different than the ship date?”. These are all questions that would lead someone to buy something or not. However, oftentimes those “marketing” type calls go to the contact center. The contact center rarely gets credit for answering lead conversion calls, but many of them do it all the time. Now to the announcement…

DialogTech, which provides one of the most comprehensive end-to-end call attribution and conversion platforms for data-driven marketers, announced a breakthrough in call tracking for digital marketing with the release of SourceTrak 3.0. Nearly two years in the making, SourceTrak 3.0 enables the world’s largest organizations to realize the full benefits of call tracking for digital marketing, without the traditional limitations.


According to eMarketer, 62.6 percent of digital ad spending in the U.S. this year will target smartphones and mobile devices. As a result, these ads will drive 162 billion calls to businesses by 2019, according to analyst firm BIA/Kelsey. So what is important is to know which programs generate the most calls (and customers) – because it’s a necessary step to measure and optimize digital marketing performance.

DialogTech’s newly re-architected and enhanced SourceTrak 3.0 technology solution is designed to meet the data, affordability, reliability and ease-of-implementation requirements of Fortune 1000 companies, large multi-location organizations and the marketing agencies they work with.

Enterprise marketing teams and agencies already analyze and optimize the customer journey for the search keywords, digital ads, and website interactions that generate online engagement. DialogTech’s SourceTrak 3.0 enables them to do the same for offline phone call conversions (a sketch of this kind of offline/online join follows the feature list below). The call data appears alongside the online data in the marketing solutions they already use, requires no change to current processes, and causes no disruption to digital ads or website performance.

What is Included In This New Call Tracking for Digital Marketing?

  • Full Attribution For Every Phone Number on a Website – Provides complete call attribution data – including the search keywords, digital ads, referring websites and webpages that drove the call – for every call from every phone number displayed on a website, including every number shown in “Find a Dealer” and “Find an Agent” webpages consumers use to locate and call their closest location or agent.
  • Keyword Attribution for Every Call From Google AdWords – Whether a call comes from a “Call” button in a Google search ad or from a searcher who clicks through to a website, SourceTrak 3.0 captures complete keyword, session and caller data for every call. That call attribution data, as well as any revenue generated from the call, can be imported directly into Google AdWords alongside online data for a complete and accurate analysis of search advertising ROI.
  • Accurate, Spam-Free Call Data – SourceTrak 3.0 call attribution data is now protected by SpamSentry™ technology, which prevents spam calls from distorting marketing data and frustrating sales agents.
  • Fast, Seamless Implementation – SourceTrak 3.0 enables marketing teams and agencies to implement call tracking on any website in a few clicks without any help from IT, any negative impact on website performance or SEO ranking or any disruption to existing digital ads.
  • Affordability – Only SourceTrak 3.0 has heartbeat technology that enables businesses to capture complete call attribution with the fewest phone numbers – and lowest cost – of any call tracking provider.

SourceTrak 3.0 technology is available as part of the DialogTech Voice360® platform, which also includes an integrated suite of marketing solutions for caller qualification and scoring, contextual call routing and management, conversation analytics and spam call blocking. All SourceTrak 3.0 features are backwards compatible with SourceTrak 2.0, and all SourceTrak 2.0 users have automatic access to the new functionality.
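To illustrate the offline/online join referenced above – bringing call conversions alongside click data keyword by keyword – here is a small, hypothetical sketch; the field names and record layout are illustrative, not DialogTech’s actual export format:

```python
# Joining offline call conversions to online keyword data, the kind of
# merge keyword-level call attribution enables. All field names are assumed.
from collections import defaultdict

clicks = [  # online data, e.g. exported from an ad platform
    {"keyword": "running shoes", "clicks": 120, "online_conversions": 4},
    {"keyword": "trail shoes", "clicks": 80, "online_conversions": 2},
]
calls = [  # offline call conversions attributed back to a search keyword
    {"keyword": "running shoes", "revenue": 260.0},
    {"keyword": "running shoes", "revenue": 90.0},
]

call_revenue = defaultdict(float)
for call in calls:
    call_revenue[call["keyword"]] += call["revenue"]

for row in clicks:
    kw = row["keyword"]
    # The "complete picture": online conversions plus phone revenue per keyword.
    print(f"{kw}: {row['online_conversions']} online conversions, "
          f"${call_revenue[kw]:.2f} call revenue")
```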

For more information on how to get started today, contact DialogTech at DialogTech.com.

If you are part of a contact center, how many of your calls are marketing oriented vs “help me fix this” type calls? Should Marketing pay for some of the contact center costs if a percentage of the calls are lead conversion related? These are the questions we will be asking ourselves as we see Marketing, Sales and Customer Service converge into commerce.

@drNatalie Petouhoff, VP and Principal Analyst, Constellation Research, Covering Customer Facing Applications.


Honorees Announced for the 2016 Marketing Hall of Femme!


Direct Marketing News is honored to announce the 2016 honorees of the Marketing Hall of Femme! These incredible women have storied careers, take risks, and push the industry and their companies forward with their edgy marketing strategies. On April 8, 2016, we’re celebrating all of their outstanding achievements and more.

New this year, the event features a Leadership Summit in addition to the Awards Luncheon. Attendees will meet the 2016 Leading Ladies, hear the first-person narratives behind their success stories, and attend educational sessions that explore the challenging yet rewarding roles of female leaders in the marketing industry today.

Introducing 2016’s most influential women in marketing:


Learn more about the inspirational event that is the Marketing Hall of Femme:
Click here to read about previous years’ honorees and keynote speakers, view video interviews, and more.

 

My POV: Of course, the point here is that we need more women in technology. That’s it. I will keep it simple. And congratulations to all the amazing women named here who work in the world of technology every day! You are my heroes.

@DrNatalie Petouhoff, VP and Principal Analyst, Constellation Research
Covering Cloud and IoT That Drive Better Business Results and Awesome Customer Experiences


Ultimate’s UltiConnect Day #1 Keynote - 3 Takeaways


We are attending Ultimate’s user conference UltiConnect at the Bellagio in Las Vegas. The conference kicked off with a welcome reception at Drais, a great location for attendees to enjoy Las Vegas. And with over 2000 attendees, over 60% of them first-time attendees, the conference sees record attendance.

 
 
So here are my Top 3 takeaways from the keynote:
  • People Centricity remains front and center for Ultimate – Ultimate has been stressing people centricity for a long time, and no surprise it was front and center at today’s keynote. CTO Adam Rogers walked us through the three main directions for Ultimate improving the people/employee experience:
    • Don’t waste people’s time – A good direction, see payroll innovation as key deliverable below. 
    • Build stronger leaders – Equally good – Ultimate will help leaders become more effective with the help of suggested actions.
    • Let HR focus on Strategy – Probably the best of all three – as the lack of strategic aspects is moving many HR leaders away from the executive table (and conversations). 
 
  • Innovation to Payroll with PayInsights – During the keynote Martin Hartshorne walked us through the importance of getting payroll right. As we have pointed out before – everything stops when the paycheck isn’t right. For the individual employee who is talking back to HR, and for all of HR and the enterprise when a large employee group is affected. Throughout the next 12 months Ultimate will work on capabilities to help employees better understand their paychecks as well as aid them in interacting more efficiently with HR.
 
  • Ease of doing business – Software vendors often get set in their actions and processes as they scale their operations; revisiting best practices is a good move, and Rogers announced three new initiatives:
    • Tierless Support – Nobody wants to wait for the next-level support agent to solve an issue, so it is good to see Ultimate eliminating the tiering of support representatives. This does not by itself raise the experience level of the support representatives, so it will be interesting to see how Ultimate solves that.
    • Learning Center – Ultimate will offer new ways to understand its software, always a good move.
    • New online service experience – The fastest way to solve support is self-service, so it’s good to see that Ultimate is making it easier for customers to resolve support issues directly, and themselves. Customers love empowerment.
 

MyPOV

A good start for Connections, which is not only a customer conference but also a customer appreciation event, and Ultimate is striking a good balance between the two. Focusing on people centricity, more specifically on employee experience, is a good true north for any HR software vendor, and it is good to see Ultimate building more capabilities in that direction. Coupled with an improvement in know-how transfer and customer support, Ultimate is doing the right things to become an even more attractive HCM vendor. Again, a good start for the UltiConnect conference, stay tuned.

More on Ultimate:
  • Event Report - Ultimate Software Connection - People first and an exciting roadmap ahead - read here
  • First Take - Ultimate Software UltiConnect Day #1 Keynote - read here
  • Event Report - Ultimate's UltiConnect - Off to a great start, but the road(map) is long - read here.
  • First Take – 3 Key Takeaways from Ultimate’s UltiConnect Conference Day 1 keynote – read here.
Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here

 

IoT Pilots should include basic security functional elements for experience

Mastering IoT security means mastering new security techniques


Security starts with the identification of risks, which in turn defines the actions that are required. IoT devices range from simple sensors to embedded intelligence in sophisticated machines, and their deployment covers the whole spectrum of industries and applications, so there is no single standard answer. It would seem unnecessary to impose enterprise IT security practices on a pilot of a handful of simple monitoring sensors in a building, but a pilot should be the opportunity to learn about the technology and its security aspects as well as the business benefits.

 

Current risk justification often focuses on the obvious difference in security risk profiles. Taking a Building Management IoT deployment as a simple example, downstream data flows from IoT temperature monitoring points are seen to carry low to minimal risk compared with upstream command responses that activate power, heating, or other building functions.

 

But this misses the risk to the enterprise from each and every IoT sensor acting as a network access point that could be compromised. Eurecom, a French technology institute, discovered 38 vulnerabilities in the firmware of 123 IoT sensing products. Moving from hundreds to thousands of connected IoT devices multiplies the risk of security breaches to new levels.

 

Experts believe it likely that many pilots and initial IoT deployments will occur without an adequate understanding of the security risks, and will require expensive retrofitting. A blog cannot provide in-depth coverage of the topic, but it is an excellent format for drawing attention to the issues and providing links to more in-depth papers. For simplicity, and in line with the popularity of IoT for Building Management, this blog uses IoT sensor deployment in buildings as an illustrative use case.

 

Before considering the new security capabilities that have been, or are being, developed for the IoT marketplace, it pays to understand the basic architectural model. The so-called Final Mile Architecture, described in some detail in the blog on the importance of using Final Mile Architecture in an IoT Pilot, stressed the importance of understanding the use of Connection, Asset Data and Mapping, and Data Flow management. However, that blog did not mention the need to consider security aspects, for example the importance of a firewall-protected ‘safe’ location for the IoT Asset Data and Mapping engine together with the Data Flow engine.

 

While network connection management is understood from its role in IT systems, there is very little understanding of the use and role of IoT Gateways, Asset Data and Mapping engines, or Data Flow engines as core building blocks in IoT deployment, let alone of how to use each to reduce security risk and vulnerability.
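
Since these functional blocks are unfamiliar to many IT teams, the following minimal Python sketch may help make their roles concrete. All class and field names are invented for illustration; no specific product or API is implied.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str    # opaque identifier only; no location or context yet
    value: float
    timestamp: float

class IoTGateway:
    """Bridges the sensor-side network (e.g. Zigbee) onto IP; the first
    point where security controls (encryption, filtering) can be applied."""
    def forward(self, event: SensorEvent) -> SensorEvent:
        # a real gateway would validate, rate-limit and encrypt here
        return event

class AssetDataAndMappingEngine:
    """Appends context (asset, location) to otherwise anonymous events;
    per the advice below, it should sit inside the firewall."""
    def __init__(self, asset_map: dict):
        self.asset_map = asset_map
    def enrich(self, event: SensorEvent) -> dict:
        context = self.asset_map.get(event.sensor_id, {})
        return {"value": event.value, "timestamp": event.timestamp, **context}

class DataFlowEngine:
    """Routes enriched events only to the destinations that need them,
    rather than flooding all data across the whole network."""
    def __init__(self, routes: dict):
        self.routes = routes    # e.g. {"hvac": hvac_handler}
    def dispatch(self, enriched: dict) -> None:
        handler = self.routes.get(enriched.get("system"))
        if handler is not None:
            handler(enriched)
```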

 

Most IoT sensor deployments will make use of one of the specialized physical network types, such as Zigbee, that interconnect low-value sensor points, and will connect to the main ‘Internet’ through an IoT Gateway. IoT Gateways come in all forms, from simple physical interconnections of different network media to devices with sophisticated intelligent management that introduces security capabilities. Intel publishes a good guide to IoT Gateways in general, and Cisco offers a useful FAQ on the topic.

 

The choice of an IoT Gateway product for a simple or pilot deployment tends to focus on the primary physical network function of the Gateway, rarely recognizing that a Gateway is a key access point to an enterprise or public network and should be secured.

 

The IoT Gateway, coupled with network connection management, should be considered the first major security point in an IoT architecture. Some IoT Gateways add encryption to traffic forwarded across the network as a further security feature. Citrix publishes a useful guide to the security implications of IoT Gateways, and Intel offers a guide to the implementation of security profiles in IoT Gateways. IoT Gateway physical locations are usually decided by the transmission capabilities of the sensor-side network, but the physical location of the next two functional blocks, the Asset Data and Mapping engine and the Data Flow engine, is a critical security consideration.
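
As a rough sketch of that encryption feature, the code below forwards one reading from a gateway to a collector inside the firewall over TLS, so the data is protected in transit. The collector host name and port are invented for the example; a real gateway product would handle this in firmware or its management stack.

```python
import json
import socket
import ssl

# Hypothetical collector endpoint inside the firewall; names are invented.
COLLECTOR_HOST = "collector.internal.example"
COLLECTOR_PORT = 8883

def forward_reading(sensor_id: str, value: float) -> None:
    """Forward one sensor reading over TLS, so it is encrypted while
    it traverses the building/enterprise network."""
    context = ssl.create_default_context()  # verifies the collector's certificate
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=COLLECTOR_HOST) as tls:
            tls.sendall(json.dumps({"id": sensor_id, "v": value}).encode())

# Example: forward_reading("s-7f3a", 21.4)
```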

 

The IoT architectural question of where, and how, processing power relates to network architecture was outlined in the blog on IoT Architecture. But the arrangement and physical location of the key functions, the Asset Data and Mapping engine and the Data Flow engine, in relation to security will depend on individual deployment factors. The following statements are therefore general principles applied to the Building Management example.

As the role and capabilities of an Asset Data and Mapping engine and a Data Flow engine are not well understood, it may be worth reading a previous blog, IoT Data Flow Management, the science of getting real value from IoT data. The white paper Data Management for IoT provides more detail on the use of IoT data and its differences from conventional data. However, the best explanation of Asset Data and Mapping, with its function of adding context and location to simple IoT sensor event data, comes from watching the Asset Mapping explainer video on Building Management.

It is good security practice to keep sensor event traffic semi-anonymous as it crosses the network, and not to append the critical contextual data that identifies the sensor, its location, and the complete data file from the Asset Data and Mapping engine until the event is securely within the firewall/data center and ready for processing.
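
A minimal sketch of this practice, with all identifiers invented: only an opaque sensor ID and a reading cross the network, while the mapping that ties the ID to a real asset and location is held, and applied, only inside the firewall.

```python
# What traverses the network: an opaque ID and a value, nothing more.
wire_event = {"id": "s-7f3a", "v": 21.4}

# The sensitive mapping lives only inside the firewall/data center.
ASSET_MAP = {
    "s-7f3a": {"building": "HQ", "floor": 3, "room": "301",
               "type": "temperature"},
}

def enrich_inside_firewall(event: dict) -> dict:
    """Append identifying context only once the event is safely inside."""
    return {**event, **ASSET_MAP.get(event["id"], {})}

print(enrich_inside_firewall(wire_event))
# -> {'id': 's-7f3a', 'v': 21.4, 'building': 'HQ', 'floor': 3,
#     'room': '301', 'type': 'temperature'}
```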

Just as few pilot installations appreciate the full role of the IoT Gateway beyond its physical functionality, few pilots include the means to manage large numbers of IoT sensors beyond a small, recognizable representative number on a dedicated GUI screen. Good practice will use an IoT Gateway with encryption to ensure that all data traversing the network to the Asset Data and Mapping engine has low vulnerability. After the full data set is appended to the sensor event data by the Asset Data and Mapping engine, it becomes an important architectural consideration to limit where on the network this data is accessible.

Similar considerations apply to the Data Flow engine, in terms of its location but also its role and use as part of the IoT security architecture. A Data Flow engine, as its name suggests and as described functionally in the blogs referenced previously, can ensure that not all data is flooded across the entire network.

Cleverly positioned IoT Data Flow engines can control and manage data, using elements of the data payload to direct it to the required destinations. Avoiding making all data available over the entire network is another basic good practice in IoT security architecture design.
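
The sketch below illustrates that idea of payload-directed routing, assuming a hypothetical routing table: each event is delivered only to the destinations that need it, and unknown event types go nowhere by default.

```python
# Hypothetical routing table: event type -> destinations that need it.
ROUTES = {
    "temperature": ["hvac-controller"],
    "occupancy":   ["lighting-controller", "space-planning"],
    "smoke":       ["fire-panel", "security-desk"],
}

def route(event: dict) -> list:
    """Choose destinations from an element of the payload itself;
    unknown event types are delivered nowhere by default."""
    return ROUTES.get(event.get("type"), [])

assert route({"type": "occupancy", "v": 1}) == ["lighting-controller",
                                                "space-planning"]
```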

IoT architecture that incorporates basic security elements in its design is a new discipline, and as such it really should be incorporated into proving pilots to gain experience with these new functional building blocks before moving to scale deployments.

As IoT gains momentum and increasingly intelligent devices are interconnected, security becomes an increasingly pressing issue; witness the challenges with mobile phones and tablets today. Developing a full understanding of all the elements and vulnerabilities requires an effort to master the topic, and the rest of this blog is devoted to providing the necessary links.

The development of both new security risk and protection methodologies and new technology capabilities is under way, and several different initiatives driving or coordinating these efforts provide interesting details.

Two good starting points are: 1) the International IoT Security Foundation, for a general appreciation of the subject broken down into its various elements and issues in a multipart series; and 2) the ambitious OWASP (Open Web Application Security Project) Internet of Things Project, which describes itself as designed to help manufacturers, developers, and consumers better understand the security issues associated with the Internet of Things, and to enable users in any context to make better security decisions when building, deploying, or assessing IoT technologies. The project looks to define a structure for various IoT sub-projects such as Attack Surface Areas, Testing Guides, and Top Vulnerabilities.

A more commercial view comes from Wind River, an Intel company whose products are embedded into Intel processors, and from there into other products, in its white paper on Security in the Internet of Things, with the interesting subtitle ‘Lessons from the past for the connected future’. All these references provide both methods and an architectural appreciation of the challenge, with solutions using current technology. There are, however, two new technology approaches: one aiming to authenticate process interactions and the other to authenticate actual processor functions.

Blockchain has suddenly gained a big following for its possibilities in ensuring that ‘chain’ reactions, or interactions, can be tested and established as secure in their outcomes. Though somewhat infamous for its relationship to the Bitcoin Internet currency, it nevertheless has much wider applicability in the ‘any to any’ environment of IoT. IBM has built a complete blockchain demonstrator, reported by CIO online under the headline ‘IBM Proof of Concept for Blockchain powered IoT’.
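
A production blockchain adds distribution and consensus, but the chaining idea at its core can be sketched in a few lines. This is a deliberately simplified illustration, not IBM's implementation: each record embeds the hash of its predecessor, so tampering with any earlier record breaks verification of the chain.

```python
import hashlib
import json

def add_block(chain: list, record: dict) -> None:
    """Append a record whose hash covers both the record and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every hash; any altered record or broken link fails."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"record": block["record"], "prev": prev},
                          sort_keys=True)
        if (block["prev"] != prev or
                block["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
    return True

chain = []
add_block(chain, {"sensor": "s-7f3a", "cmd": "heat-on"})
add_block(chain, {"sensor": "s-7f3a", "cmd": "heat-off"})
assert verify(chain)
```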

PUF, standing for Physically Unclonable Function, is a technique that reads the manufacturing variations introduced during chip production as a unique ‘signature’ for the chip, as part of establishing its authenticity. This unique signature is used to create a unique encrypted checksum reply to an identity challenge, enabling several different possible uses. Wikipedia provides a good description of the basic technique and its principal applications.
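
A PUF is a physical property of the silicon and cannot be reproduced in software, so the sketch below stands in for it with an HMAC keyed by a device-unique secret. What it illustrates is the enroll-then-challenge authentication flow, not a real PUF; all names are invented.

```python
import hashlib
import hmac
import os

# Stand-in for the chip's physical variation; a real PUF derives this
# uniqueness from manufacturing differences, not from stored key material.
DEVICE_SECRET = os.urandom(32)

def puf_response(challenge: bytes) -> bytes:
    """Device side: derive a response unique to this chip and challenge."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment: the verifier records challenge/response pairs in a trusted setting.
challenge = os.urandom(16)
enrolled = {challenge: puf_response(challenge)}

def authenticate(ch: bytes, resp: bytes) -> bool:
    """Field authentication: replay an enrolled challenge, compare responses."""
    return hmac.compare_digest(enrolled[ch], resp)

assert authenticate(challenge, puf_response(challenge))
```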

In conclusion, the following quote is taken from the concluding summary of the Telefonica white paper ‘Scope, Scale and Risk as never before’:

The networks IoT creates will be some of the biggest the World has ever seen. And that makes them enormously valuable to attackers . . . it is apparent that the Internet of Things is growing far faster and with a higher user knowledge base than its predecessor – The Internet itself. And this raises significant concerns.

What is a pilot today and a closed IoT network tomorrow will one day be part of the biggest network the world has ever known, so in planning a pilot, or a deployment, it is absolutely necessary to understand the security dimension.

 


Cisco Spark - On the Road To Success


Today Cisco announced two very strategic moves geared toward helping their Cisco Spark collaboration platform succeed.

For those of you unfamiliar with Spark, it's Cisco's latest platform for team communication and collaboration. It brings together chat, voice calls, video meetings and file sharing. After several failed attempts at this space with Cisco Quad and WebEx Social, Cisco finally appears to be on the right track in developing a platform and ecosystem more in line with what customers are looking for: a simple, cloud-based, integrated tool. That said, there is still some confusion and overlap in their platform between Spark, WebEx, Jabber and Tropo. This is also a very competitive market, with products like Slack, CoTap, Glip, HipChat, Unify Circuit, Ryver and others, which makes today's announcements even more significant.

Cisco Spark

 

The Success of a Platform Revolves Around Its Ecosystem

From their press release: "We want to make sure all great ideas come to life. We don’t want a lack of funding or support to get in the way. So in partnership with Cisco® Investments we have created a fund to invest $150 million in the Cisco Spark ecosystem. This fund will cover direct investments, joint development, additional enhancements and developer support."

As I mentioned above, the communication and collaboration market is a highly competitive one. It takes a lot to differentiate in this space, as most products have very similar features. Investing $150M to have 3rd party developers extend and enhance the functionality of Spark shows serious commitment from Cisco. Software vendors like Microsoft, Google, Salesforce and IBM already have large partner ecosystems, and recently Slack announced an $80M fund for developers. In a pre-briefing before today's announcement I joked with Cisco that they must have tried hard to get $160M so they could say they doubled Slack's deal.

With so many options available to developers today, it's vital that vendors build strong and trusting relationships with their partners. They must offer them training, support, financing, marketing, and more. While today's announcement is a great first step, the true measurement will come in 3, 6, and 12 months, when we see how this fund has been leveraged and what solutions have been created because of it. I hope Cisco is open with this information and shares several success stories.

For more information visit the Cisco Spark Developer Fund website.

 

Build and Buy

Cisco also announced today the acquisition of search company Synata. I've been looking at this company since early last year, as it claims to help solve a problem I've been very vocal about around "social collaboration": the struggle of information and input overload. While everyone likes to complain about their overflowing mail inboxes, the reality for most people is that social tools can quickly become even more overwhelming and unmanageable than email ever was. Yes, sharing information is certainly better than it being locked away. However, with departments, teams, companies, even entire communities sharing information, finding the right people and content can quickly become a daunting task. Vendors such as Microsoft (with Delve), IBM (via Watson), Google (with Google Now) and Salesforce (via SalesforceIQ) have been focused on not just helping people "search" for information, but instead helping them discover things related to the context of the tasks they are working on. The acquisition of Synata signals Cisco's start down a similar path of helping people connect with the content they need to get their work done.

At a higher level, today's acquisition shows me that Cisco is not trying to build everything on their own. They are willing to invest in the Spark platform and acquire companies that fill gaps in it. They have done this in the past, with varying degrees of success. For example:

  • June 2014: Cisco acquired Kollaborate.io (Assemblage), one of the early vendors creating "digital canvases" where information from multiple tools could be "assembled" on a single screen for people to comment on and share
  • Dec 2013: Cisco acquired Collaborate.com, a social task management vendor. Spark is still lacking task management features.
  • Aug 2011: Cisco acquired Versly, which added Microsoft Office document creation, viewing and sharing

I hope that today's acquisition of Synata manifests itself in Spark more successfully than those acquisitions did in WebEx Social.

 

Leverage Your Base

While acquiring new customers is always the goal for software vendors, I think it is important for Cisco to focus on enhancing the collaboration experiences of their existing customers. I've been on thousands of WebEx meetings over the years. Not once have I been part of a collaborative process before the meeting, then had that information integrated during the meeting, then had the content, conversations, follow-ups, etc. from the meeting persist after it ended. I hope to see Cisco use Spark to create a highly collaborative experience before, during and after WebEx meetings, perhaps merging the two one day. Why have both? I had hoped Citrix would do this with GoToMeeting and Podio, but they did not. I hoped Microsoft would do this with Yammer and Skype (Lync), but they did not. I hoped IBM would do this with Sametime and Connections, but they did not. Let's see what Cisco can do.

 

On The Right Road

In conclusion, both of today's moves show Cisco's commitment to Spark not just as a product, but as a platform for developers to build solutions that help people get work done. I applaud them on both the investment fund and the acquisition, moves that validate why I named Cisco Spark one of the 18 Products Shaping the Future of Work. Now Cisco's next step is to prove success with customer and partner case studies.
