
IBM Blockchain as a Service and Hyperledger Fabric forge a new path

It’s been a big month for blockchain.

  • The Hyperledger consortium released the Fabric platform, a state-of-the-art configurable distributed ledger environment, including a development toolset known as Composer.
  • The Enterprise Ethereum Alliance was announced: a network of businesses and Ethereum experts aiming to define enterprise-grade software (and, evidently, to adopt business speak).
  • And IBM launched its new Blockchain as a Service at the Interconnect 2017 conference in Las Vegas, where blockchain was almost the defining theme of the event.  A raft of advanced use cases were presented, many of which are now in live pilots around the world.  Examples include shipping, insurance, clinical trials, and the food supply chain.

I attended InterConnect and presented my research on Protecting Private Distributed Ledgers, alongside Paul DiMarzio of IBM and Leanne Kemp from Everledger. 

Disclosure: IBM paid for my travel and accommodation to attend Interconnect 2017.

Ever since the first generation blockchain was launched, applications far bigger and grander than cryptocurrencies have been proposed, but with scarce attention to whether or not these were good uses of the original infrastructure.  I have long been concerned with the gap between what the public blockchain was designed for, and the demands from enterprise applications for third generation blockchains or "Distributed Ledger Technologies" (DLTs).  My research into protecting DLTs  has concentrated on the qualities businesses really need as this new technology evolves.  Do enterprise applications really need “immutability” and massive decentralisation? Are businesses short on something called “trust” that blockchain can deliver?  Or are the requirements actually different from what we’ve been led to believe, and if so, what are the implications for security and service delivery? I have found the following:

In more complex private (or permissioned) DLT applications, the interactions between security layers and the underlying consensus algorithm are subtle, and great care is needed to manage side effects. Indeed, security needs to be rethought from the ground up, with key management for encryption and access control matched to often new consensus methods appropriate to the business application. 

At InterConnect, IBM announced their Blockchain as a Service, running on the “Bluemix High Security Business Network”.  IBM have re-thought security from the ground up.  In fact, working in the Hyperledger consortium, they have re-engineered the whole ledger proposition. 

And now I see a distinct shift in the expectations of blockchain and the words we will use to describe it.

For starters, third generation DLTs are not necessarily highly distributed. Let's face it, decentralization was always more about politics than security; the blockchain's originators were expressly anti-authoritarian, and many of its proponents still are. But a private ledger does not have to run on thousands of computers to achieve the security objectives.  Further, new DLTs certainly won't be public (R3 has been very clear about this too – confidentiality is normal in business but was never a consideration in the Bitcoin world).  This leads to a cascade of implications, which IBM and others have followed. 

When business requires confidentiality and permissions, there must be centralised administration of user keys and user registration, and that leaves the pure blockchain philosophy in the shade. So now the defining characteristics shift from distributed to concentrated.  To maintain a promise of immutability when you don't have thousands of peer-to-peer nodes requires a different security model, with hardware-protected keys, high-grade hosting, high availability, and special attention to insider threats. So IBM's private blockchains run on the Hyperledger Fabric, hosted on z Systems mainframes.  They employ cryptographic modules certified to Common Criteria EAL 5-plus and others that are designed to FIPS-140 level 4 (with certification underway). These are the highest levels of security certification available outside the military. Note carefully that this isn't specmanship.  With the public blockchain, the security of nodes shouldn't matter because the swarm, in theory, takes care of rogue miners and compromised machines. But the game changes when a ledger is more concentrated than distributed.  

Now, high-grade cryptography will become table stakes. In my mind, the really big thing happening here is that Hyperledger and IBM are evolving what blockchain is really for.

The famous properties of the original blockchain – immutability, decentralisation, transparency, freedom and trustlessness – came tightly bundled, expressly for the purpose of running peer-to-peer cryptocurrency.  It really was a one dimensional proposition; consensus in particular was all about the one thing that matters in e-cash: the uniqueness of each currency movement, to prevent Double Spend.

But most other business is much more complex than that.  If a group of companies comes together around a trade manifest for example, or a clinical trial, where there are multiple time-sensitive inputs coming from different types of participant, then what are they trying to reach consensus about?

The answer acknowledged by Hyperledger is "it depends". So they have broken down the idealistic public blockchain and seen the need for "pluggable policy".  Different private blockchains are going to have different rules and will concern themselves with different properties of the shared data.  And they will have different sub-sets of users participating in transactions, rather than everyone in the community voting on every single ledger entry (as is the case with Ethereum and Bitcoin).
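
To make the idea of "pluggable policy" concrete, here is a minimal illustrative sketch of how an endorsement rule might be expressed as a replaceable function. This is a toy model of the concept only, not Hyperledger Fabric's actual interfaces: one ledger might accept an entry once any two named participants sign off, while another demands a specific regulator plus a majority of trading partners.

```python
# Toy model of a "pluggable" endorsement policy; illustrative only,
# not Hyperledger Fabric's real API.
from typing import Callable, Set

Policy = Callable[[Set[str]], bool]  # set of endorsers observed -> accepted?

def any_two_of(members: Set[str]) -> Policy:
    return lambda endorsers: len(endorsers & members) >= 2

def regulator_plus_majority(regulator: str, members: Set[str]) -> Policy:
    return lambda endorsers: (regulator in endorsers
                              and len(endorsers & members) > len(members) / 2)

def commit(entry: dict, endorsers: Set[str], policy: Policy) -> bool:
    """Accept the ledger entry only if the network's chosen policy is satisfied."""
    return policy(endorsers)

# A trade-manifest ledger and a clinical-trial ledger can plug in different rules.
manifest_policy = any_two_of({"shipper", "carrier", "customs", "insurer"})
trial_policy = regulator_plus_majority("regulator", {"sponsor", "site_a", "site_b"})

print(commit({"doc": "manifest-001"}, {"shipper", "insurer"}, manifest_policy))  # True
print(commit({"doc": "trial-visit-7"}, {"sponsor", "site_a"}, trial_policy))     # False: no regulator
```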

These are exciting and timely developments.  While the first blockchain was inspirational, it’s being superseded now by far more flexible infrastructure to meet more sophisticated objectives.  I see us moving away from “ledgers” towards multi-dimensional constructs for planning and tracing complex deals between dynamic consortia, where everyone can be sure they have exactly the same picture of what’s going on. 

In another blog to come, I’ll look at the new language and concepts being used in Hyperledger Fabric, for finer grained control over the state of shared critical data, and the new wave of applications. 

 


Cloudera Focuses Message, Takes Fifth On Pending Moves

Cloudera executives can’t talk about IPO or cloud-services rumors. Here’s what’s on the record from the Cloudera Analyst Conference.

There were a few elephants in the room at the March 21-22 Cloudera Analyst Conference in San Francisco. But between a blanket “no comment” about IPO rumors and non-disclosure demands around cloud plans -- even whether such plans exist, or not -- Cloudera execs managed to dance around two of those elephants.

The third elephant was, of course, Hadoop, which seems to be going through the proverbial trough of disillusionment. Some are stoking fear, uncertainty and doubt about the future of Hadoop. Signs of the herd shifting the focus off Hadoop include Cloudera and O’Reilly changing the name of Strata + Hadoop World to Strata Data. Even open-source zealot Hortonworks has rebranded its Hadoop Summit as DataWorks Summit, reflecting that company’s diversification into streaming data with its Apache NiFi-based Hortonworks DataFlow platform.

Mike Olson, Cloudera's chief strategy officer, positions the company as a major vendor of enterprise data platforms based on open-source innovation.

At the Cloudera Analyst Conference, Chief Strategy Officer Mike Olson said that he couldn’t wait for the day when people would stop describing his company as “a Hadoop software distributor” mentioned in the same breath with Hortonworks and MapR. Instead, Olson positioned the company as a major vendor of enterprise data platforms based on open-source innovation.

MapReduce (which is fading away), HDFS and other Hadoop components are outnumbered by other next-generation, open-source data management technologies, Olson said, and he noted that there are some customers who are just using Cloudera’s distributed and supported Apache Spark on top of Amazon S3, without using any components of Hadoop.
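
That Spark-on-S3 pattern is easy to picture. Here is a minimal sketch, assuming a Spark cluster whose S3 (s3a) connector is already configured, and a hypothetical bucket and path:

```python
from pyspark.sql import SparkSession

# Minimal sketch: Spark reading directly from S3 with no HDFS involved.
# The bucket and path are hypothetical; the s3a connector must be configured.
spark = SparkSession.builder.appName("events-on-s3").getOrCreate()

events = spark.read.json("s3a://example-bucket/clickstream/2017/03/")
events.groupBy("event_type").count().orderBy("count", ascending=False).show()
```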

Cloudera has recast its messaging accordingly. Where years ago the company’s platform diagrams detailed the many open source components inside (currently about 26), Cloudera now presents a simplified diagram of three use-case-focused deployment options (shown below), all of which are built on the same “unified” platform.

Cloudera-developed Apache Impala is a centerpiece of the Analytic DB offering, and it competes with everything from Netezza and Greenplum to cloud-only high-scale analytic databases like Amazon Redshift and Snowflake. HBase is the centerpiece of the Operational DB offering, a high-scale alternative to DB2 and Oracle Database on the one hand and Cassandra, MapR and MemSQL on the other. The Data Science & Engineering option handles data transformation at scale as well as advanced, predictive analysis and machine learning.

Many companies start out with these lower-cost, focused deployment options, which were introduced last year. But 70% to 75% of customers opt for Cloudera’s all-inclusive Enterprise Data Hub license, according to CEO Tom Reilly. You can expect that if and when Cloudera introduces its own cloud services, it will offer focused deployment options that can be launched, quickly scaled and just as quickly turned off, taking advantage of cloud economies and elasticity.

Navigating around the non-disclosure requests, here are a few illuminating factoids and updates from the analyst conference:

Cloudera Data Science Workbench: Announced March 14, this offering for data scientists brings Cloudera into the analytic tools market, expanding its addressable market but also setting up competition with the likes of IBM, Databricks, Domino Data, Alpine Data Labs, Dataiku and a bit of coopetition with partners like SAS. Based on last year’s Sense acquisition, Data Science Workbench will enable data scientists to use R, Python and Scala with open source frameworks and libraries while directly and securely accessing data on Hadoop clusters with Spark and Impala. IT provides access to the data within the confines of Hadoop security, including Kerberos.

Apache Kudu: Made generally available in January, this Cloudera-developed columnar, relational data store provides real-time update capabilities not supported by the Hadoop Distributed File System. Kudu went through extensive beta use with customers, and Cloudera says it’s seeing a split of deployment in conjunction with Spark, for streaming data applications, and with Impala, for SQL-centric analysis and real-time dashboard monitoring scenarios.

My Take on Cloudera Positioning and Moves

Yes, there’s much more to Cloudera’s platform than Hadoop, but given that the vast majority of customers store their data in what can only be described as Hadoop clusters, I expect the association to stick. Nonetheless, I don’t see any reason to demur about selling Hadoop. Cloudera isn’t saying a word about business results these days -- likely because of the rumored IPO. But consider the competitors. In February Hortonworks, which has been public for two years, reported a 39% increase in fourth-quarter revenue and a 51% increase in full-year revenue (setting aside the topic of profitability). MapR, which is private, last year claimed (at a December analyst event) an even higher growth rate than Hortonworks.

Assuming Cloudera is seeing similar results, it’s experiencing far healthier growth than any of the traditional data-management vendors. Whether you call it Hadoop and Spark or use a marketing euphemism like next-generation data platform, the upside customers want is open-source innovation, distributed scalability and lower cost than traditional commercial software.

As for the complexity of deploying and running such a platform on premises, there’s no getting around the fact that it’s challenging – despite all the things that Cloudera does to knit together all those open-source components. I see the latest additions to the distribution, Kudu and the Data Science Workbench, as very positive developments that add yet more utility and value to the platform. But they also contribute to total system complexity and sprawl. We don’t seem to be seeing any components being deprecated to simplify the total platform.

Deploying Cloudera’s software in the cloud at least gives you agility and infrastructure flexibility. That’s the big reason why cloud deployment is the fastest-growing part of Cloudera’s business. If and when Cloudera starts offering its own cloud services, it would be able to offer hybrid deployment options that cloud-only providers, like Amazon (EMR) and Google (DataProc), can’t offer. And almost every software vendor embracing the cloud path also talks up cross-cloud support and avoidance of lock-in as differentiators compared to cloud-only options.

I have no doubt that Cloudera can live up to its name and succeed in the cloud. But as we’ve also seen many times, the shift to the cloud can be disruptive to a company’s on-premises offerings. I suspect that’s why we’re currently seeing introductions like the Data Science Workbench. It’s a safe bet. If and when Cloudera truly goes cloud, and if and when it becomes a public company, things will change and change quickly.

Related Reading:
Google Cloud Invests In Data Services, Scales Business
Spark Gets Faster for Streaming Analytics
MapR Ambition: Next-Generation Application Platform

 


One Word to Help Save Sears: Auctions


Sears continues to fall, and may soon be out of business.

If I were their CEO and someone asked me how to turn the store around with one last Hail Mary pass, my answer would be one word: “auctions”.

Yes, make every item in every Sears store available for auction.

Bear with me.

The challenge for retail chains like Sears is that you have the in-store experience, and you have the online experience, and the two rarely converge or complement each other.

When you’re shopping in-store and you find an item, you may wonder: should I buy it, should I wait for it to go on sale, should I try to find it at another store, or should I shop online?

And when you’re shopping online, you simply search for the product and buy it at whatever store has the cheapest price (which is often Amazon).

So how do retailers overcome this?

By using auctions that:

  • encourage shoppers to buy from you (and not competitors)
  • encourage in-store visits
  • converge in-store and online shopping experiences
  • encourage loyalty using gamification

Here’s an example.

I walk into Sears looking for a coffee machine.  I see one that I like and it’s $250.

Just like an eBay auction, the $250 is the “Buy Now” price, what I have to pay to buy it right now.

If I pay the “Buy Now” price I’ll automatically get 10 points (I’ll get to the points in a minute).

My other option is to not buy now, but bid on the coffee machine.

So I take out the Sears app on my phone, scan the coffee machine’s UPC code, and enter an auction for the coffee machine.

The coffee machine is automatically added to Sears’ website and made available for bidding by online users.

And just like eBay, there’s a minimum price Sears has set (that no one knows) so that it won’t lose money on the sale, and the auction is open for seven days.  And anyone online can bid on it.

Therefore, if I want to try to get the coffee machine at a lower price (say I bid a maximum of $200), I have to not only enter into an auction I may not win, but I have to wait seven days to purchase it if I do win.

To bid, or not to bid (and just buy)?

Here’s where the points system comes in.

You want to encourage certain behaviour:

  • Shopping in-store
  • Purchasing at “Buy Now” prices instead of always seeking lowest price through auctions
  • Repeat purchases

So a point system would be used as a sort of gamification to reward certain behavior.

For example, someone who has shopped a lot in-store and has purchased a lot at “Buy Now” prices would have a certain number of points, or a certain score.  So, if they did enter an auction, they would have more “buying power” than people with smaller scores.  Thus, a Sears shopper with a score of 90 who bids $200 on my coffee machine would win the auction over a shopper with a score of 5, even if the shopper with the lower score bid $205 on the coffee machine.
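
One way such a ranking could work in code is sketched below. The weighting is made up for illustration and is not any real Sears (or eBay) mechanism: loyalty score simply outranks bid amount once the hidden reserve is met.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    shopper: str
    amount: float       # dollars offered
    loyalty_score: int  # earned through in-store visits and "Buy Now" purchases

def pick_winner(bids: List[Bid], reserve_price: float) -> Optional[Bid]:
    """Toy ranking rule: among bids meeting the hidden reserve,
    loyalty score outranks bid amount."""
    eligible = [b for b in bids if b.amount >= reserve_price]
    if not eligible:
        return None
    return max(eligible, key=lambda b: (b.loyalty_score, b.amount))

bids = [Bid("loyal_shopper", 200.00, 90), Bid("bargain_hunter", 205.00, 5)]
print(pick_winner(bids, reserve_price=180.00))  # loyal_shopper wins despite the lower bid
```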

And loyal shoppers with high scores will also be rewarded with offers that drive them back into the store: Sears Travel, Sears Photos, Sears Makeovers, etc.  (in other words, those things that you can’t necessarily buy online but have to visit the store for).

Retailers like Sears need to understand that with online shopping, people are going to price and compare whether they’re in the store or not.

In fact, technically your items are already in an auction – with the prices of your competitors.

So why not bring an auction-style system right to your customer, and reward them and their loyalty in the process?

Sound crazy?

You bet.

But so is the status quo, and that hasn’t been working too well for Sears.



LinkedIn Unveils Enterprise Edition of Sales Navigator, Extends Integration with CRM Systems

Constellation Insights

LinkedIn is betting large organizations will be willing to pay up to $1,600 per seat per year for a new Enterprise edition of Sales Navigator, which it says will generate higher productivity and results for social selling efforts. Here are the key details from LinkedIn's announcement:

Until now, if you were looking for a warm introduction to a lead, you could go through your personal LinkedIn connections, or use TeamLink, which pools the networks of all the Sales Navigator seat holders in your company. But we know your reps are probably not connected on LinkedIn to the vast majority of employees at your company, and not every employee in your company needs a seat of Sales Navigator (as much as we’d like that).

TeamLink Extend solves that by letting anyone in your organization opt-in their LinkedIn network to the TeamLink pool. That means, if you’re trying to reach a prospect, you can quickly see if anyone in your company has a connection with that person, and reach out to your colleague to ask for warm introduction.

LinkedIn is also integrating Enterprise Edition with its PointDrive tool, which gives salespeople the ability to give prospects more content through a desktop or mobile app instead of an email larded with attachments, giving reps visibility into how the materials are being consumed. 

Perhaps the most telling piece of news for the longer-term is LinkedIn Enterprise's enhanced CRM integration. Its CRM Sync function will log Sales Navigator activities into CRM systems with a single click. This capability will be available for Salesforce first, not Dynamics CRM, although support is coming for other platforms this year. 

LinkedIn Enterprise also includes CRM Widgets, which enable users to view Sales Navigator profile details within CRM systems. There are widgets for Salesforce and Dynamics now, with ones for Oracle, NetSuite, SugarCRM, Hubspot, SAP Hybris and Zoho coming soon.

Analysis: No Walled Garden Here, But Cautions Abound

Salesforce CEO Marc Benioff, who was outbid for LinkedIn by Microsoft, complained last year to regulators, alleging that Redmond would close off third-party access to LinkedIn's vast and valuable store of business data in favor of Dynamics CRM. The new integration points for LinkedIn Enterprise Edition suggest that on the contrary, Microsoft sees plenty of money in integrating LinkedIn with competing CRMs. Constellation believes this is a good approach not only for Microsoft but for all customers, as the potential value of alignment of CRM with LinkedIn still has plenty of runway. 

But the new TeamLink feature shows Microsoft clearly wants to see how much value it can squeeze out of LinkedIn's data pool by leveraging its social graph. There are some challenges here, says Constellation Research VP and principal analyst Cindy Zhou.

One concern is with how organizations will handle the opt-in to share contacts. The fewer employees who opt in, the less effective TeamLink becomes, she notes. There's also potential for spamming. "Organizations using TeamLink will need to be aware of the responsibility to properly train users so they don't abuse this additional access to connections," she says. "Ultimately, the connections didn't 'opt in' for their information to be used by a broader enterprise sales team."



Digital Business Distributed Business and Technology Models, Part Two: The Dynamic Infrastructure

The Digital Business model, with its dynamic adaptive capability to react to events with intelligently orchestrated responses formed from Services, requires a very different enabling infrastructure from that of current Enterprise IT systems. As the Enterprise itself decentralizes into fast-moving, agile operating entities under an OpEx (costs allocated to actual use) management model, the supporting infrastructure must adopt a similar functional structure.

The Technology that creates and supports Digital Business does not resemble that deployed in support of Enterprise client-server IT systems. Neither is it a rehash of the standard Internet Web architecture. Instead, a combination of Cloud Technology, both at the center and increasingly at the edge, running Apps in the form of Distributed Apps, linked by massive-scale IoT interactions and, increasingly, various forms of intelligent AI-driven reaction, represents a wholly different proposition.

In existing Enterprise IT, the arrangement and integration of the technology complexities is defined by Enterprise Architecture; that term has deliberately not been used above, to highlight the difference. In contrast with the enclosed, defined Enterprise IT environment, where it is necessary to determine the relationships between a finite number of technology elements, a true Digital Enterprise operates dynamically across an effectively infinite number of technology elements, internally and externally.

Enterprise IT, for the most part, supports client-server applications, as evidenced in ERP, and is focused on ensuring the outcomes of all transactions maintain the common State of all data. To do this, the dependencies of all technology elements have to be identified in advance and integrated in fixed, close-coupled relationships. It is important to remember that Enterprise Architecture was developed to deploy the Enterprise Business model defined by Business Process Re-engineering (BPR).

It is vital to recognize that the Enterprise Business model and the Technology model are, or should be, two sides of the same coin, coherently working together to enable the Enterprise to compete in its chosen market and manner. The introduction of a Digital Business model introduces a completely different set of technology requirements and, importantly, reverses accepted IT architecture by requiring Stateless, loosely coupled orchestrations to support Distributed Environments.

These simple statements cover some very complicated issues, and before going further three important terms should be identified and clarified in the context used here:

  1. Stateful means the computer or program keeps track of the state of interaction, usually by setting values in a storage field designated for that purpose. Stateless means there is no record of previous interactions and each interaction request has to be handled based entirely on the information that comes with it (see the sketch after this list). Reference http://whatis.techtarget.com/definition/stateless
  2. Tightly-Coupled…hardware and software are not only linked together, but are also dependent upon each other. In a tightly coupled system where multiple systems share a workload, the entire system usually would need to be powered down to fix a major hardware problem, not just the single system with the issue. Loosely-Coupled describes how multiple computer systems, even those using incompatible technologies, can be joined together for transactions, regardless of hardware, software and other functional components. References http://www.webopedia.com/TERM/T/tight_coupling.html http://www.webopedia.com/TERM/L/loose_coupling.html
  3. Digital Business is the creation of new business designs by blurring the digital and physical worlds. ... in an unprecedented convergence of people, business, and things that disrupts existing business models. Reference https://www.i-scoop.eu/digital-business .
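
As a concrete illustration of the first distinction (a toy sketch, unrelated to any of the vendor products discussed here), compare a handler that accumulates session state on the server with one that receives everything it needs in the request itself:

```python
# Toy illustration of stateful vs. stateless request handling.

class StatefulCounter:
    """Keeps interaction state between calls; the server must remember each session."""
    def __init__(self):
        self.totals = {}  # session_id -> running total

    def add(self, session_id: str, amount: int) -> int:
        self.totals[session_id] = self.totals.get(session_id, 0) + amount
        return self.totals[session_id]

def stateless_add(previous_total: int, amount: int) -> int:
    """No stored state: everything needed arrives with the request,
    so any node in a distributed environment can serve it."""
    return previous_total + amount

counter = StatefulCounter()
print(counter.add("session-42", 10))               # 10 (the server remembers)
print(counter.add("session-42", 5))                # 15
print(stateless_add(previous_total=15, amount=5))  # 20 (the caller supplies the state)
```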

Clearly there is a need for something to act as an equivalent of Enterprise Architecture, and indeed there is no shortage of activities to create ‘Architectural’ models for IoT. There is a fundamental challenge in the sheer breadth of what constitutes a Digital Market connected through IoT across different industry sectors. Though it might seem that the approach for a Smart Home is unlikely to have much in common with self-driving cars, other than both being part of a Smart City, at the level of the supporting infrastructure there are minimal differences.

The result is an overwhelming abundance of standards bodies, technology protocols and architectural models that will, in the short term, confuse rather than assist. A read through the listings covering each of those areas here will prove the point. Whilst there is no doubt the devil is in the detail and these things matter, IoT deployments should be driven from the Digital Business model outlined in the previous blog post.

A blog is not the format to examine this topic in detail; instead the aim is to provide an overall understanding of a workable approach, and to make use of the views and solution sets available from leading Technology vendors to provide greater detail. The manner of breaking the ‘architecture’ down into the four abstracted conceptual layers illustrated below matches almost exactly the Technology vendors’ own focus points.

Enterprise Architecture methodologies start with a conceptual stage, an approach designed to provide clarification of the overall solution and outcome. This is necessary to avoid the distraction of specific product details, which often introduce unwelcome dependencies, at the first stage of shaping the solution/outcome vision.

The four layers illustrated correspond to the major conceptual abstractions present in building, deploying, and operating the necessary Technology model for a Digital Business. This blog focuses on the Dynamic Infrastructure and each of the following blogs in the series will focus on one of the abstracted layers.

The following concentrates on the role of the Dynamic Infrastructure, and in particular on Enterprise-owned and operated infrastructure. The same basic functionality could be provided by a Cloud Services operator. There are significant issues around latency and risk in certain areas, such as ‘real time’ machinery operations, that will lead to the selection of on-premises Dynamic Infrastructure capability. It is most likely that a mix of external and internal Dynamic Infrastructure will be deployed in most Enterprises, with the Distributed Services Technology Management layer providing the necessary cohesive integration, a point made in Part 3b of this series.

The Dynamic Infrastructure shares many of the core traits of Internet and Cloud Technology in providing capacity, as and when required, in response to demand. The development of the detailed specification started in 2012 with the publication by Cisco of a white paper calling for a new model of distributed Cloud processing across a network. Entitled ‘Fog Computing’, this concept became increasingly important as the development of IoT redefined requirements.

In November 2015 a group of leading industry vendors (ARM, Cisco, Dell, Intel and Microsoft) founded the OpenFog Consortium. Today there are 56 members, including strong representation from the telecoms industry. Cisco has developed its products and strategy in tune with the vision statement of the OpenFog Consortium, which states the requirement to be:

“Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things. By extending the cloud to be closer to the things that produce and act on IoT data, fog enables latency sensitive computing to be performed in proximity to the sensors, resulting in more efficient network bandwidth and more functional and efficient IoT solutions. Fog computing also offers greater business agility through deeper and faster insights, increased security and lower operating expenses”

It is worth pointing out that there are subtle, but important, differences between Fog Computing and pure edge-based Cloud computing. Edge-based solutions more closely resemble a series of closed activity pools with relatively self-contained computational requirements, whereas Fog Computing processing is more interactive and distributed, using a greater degree of high-level service management from the network. Naturally the two definitions overlap, and this, together with other terms, can be confusing. In practice, it is important to note that “Fog” certainly includes “Edge”, but the term Edge is often used to indicate more standalone functionality.

Three Technology vendors have focused their products and solution capabilities around providing such an infrastructure, with its mix of connectivity and processing triggered by a sophisticated management capability. Each vendor uses different terminology and has published their definitions on what they identify as the challenges and requirements.

Constellation Research would like to thank Cisco, Dell and HPE for contributing the following overviews that describe their points of view in respect of building and operating a Dynamic Infrastructure. Each vendor also provided links to enable a more detailed evaluation to be made of their approach and products.

 

Cisco’s Digital Network Architecture

At Cisco, we are changing how networks operate into an extensible, software-driven model that makes networks simpler and deployments easy. Customer requirements for digital transformation go beyond technology such as IoT, and require that the network can handle changes, security, and performance in a policy-based manner designed around the application and business need.

Cisco’s Digital Network Architecture (DNA) is the framework for that network change, moving from a highly resource-intensive and time-consuming way of deploying network services and segments to a model built to speed these processes and reduce cost. With DNA, we are focusing on automating, analyzing, securing and virtualizing network functions. Networks need to be more than just a utility; they need to be business-driving and secure in both the proactive and the reactive sense. To do this, Cisco is building on our industry-leading security products combined with our industry-leading access products (including SD-WAN, wireless, and switching) to help customers change how they fundamentally work and embrace digital transformation.

Some examples of our continued innovation in this space include products like APIC-EM, the central engine of our Cisco DNA. APIC-EM delivers software-defined networking capabilities with policy and a simple user interface. It offers Cisco Intelligent WAN, Plug and Play for deploying Cisco enterprise routers, switches, and wireless controllers, Path Trace for easy troubleshooting, and Cisco Enterprise Service Automation.

Cisco is more than a networking vendor; we partner with our customers at all levels. We strive to understand not only what customers need at a technical and IT level, but what they need as a business. Cisco brings consistent and long-term investment into its products and services, adding value and features constantly. Nobody in the networking market invests in R&D and listens to customers like Cisco does. Cisco knows that the changing face of IT is to help bridge the gap to cloud and make sure that business needs are met with agile solutions that enhance the business. With Cisco DNA, CIOs, managers, and administrators all get what they need to move forward with digital transformation and IoT.

The details of the Cisco range of products, and solutions, can be found in three places: One, Two, Three

 

Dell Technologies Internet of Things Infrastructure

With the industry’s broadest IoT infrastructure portfolio, together with a rapidly growing ecosystem of curated technology and services partners, Dell Technologies cuts through the complexity and enables you to access everything you need to deploy an optimized IoT solution from edge to core to cloud. By working with Dell’s infrastructure and curated partners, customers also get proven use-case-specific solution blueprints to help achieve faster ROI. Dell has strong credibility in Industrial IoT from its origins in the supply of computing to the industrial sector, as an early leader in sensor-driven automation, and through the EMC acquisition, which adds additional expertise in storage, virtualization, cloud-native technologies, and security and system management. Further, Dell Technologies is leading multiple open source initiatives to facilitate interoperability and scale in the market, since getting access to the myriad data generated by sensors, devices, and equipment is currently slowing down IoT deployments.

The challenge with IoT is to securely and efficiently capture massive amounts of data for analytics and actionable insights to improve your business. Dell Technologies enables the flexibility to architect an IoT ecosystem appropriate for your specific business case with analytics, compute, and storage distributed where you need it from the network’s edge to the cloud.

Part of Dell’s net-new investment in IoT is a portfolio of purpose-built Edge gateways with specific I/O, form factor and environmental specifications to connect the unconnected, capturing data from a wide variety of sensors and equipment. The Dell Edge Gateway line offers processing capabilities to start the analytics process and cleanse the data, as well as comprehensive connectivity to ensure that critical data can be integrated into digital business systems where insights can be created and business value generated. These gateways also offer integrated tools for both Windows and Linux operating systems to ensure that the distributed architecture can be secured and managed. Reference here

Further, Dell EMC empowers organizations to transform business with IoT as part of their digitization initiatives. Dell EMC’s converged solutions, including Vblock Systems, VxRack Systems, VxRail Systems, PowerEdge and other Dell EMC products, are prevalent in core data centers for enterprise applications, big data and video management software (VMS), as well as for cloud-native applications. Dell simplifies how businesses can tap IoT as part of their digital assets: from the edge, with Dell’s Edge Gateways tied to sensors and operational technology, to the core data center and hybrid cloud from Dell EMC, which plays a crucial role in blending historical and real-time analytics, processing and archival. The Dell EMC Native Hybrid Cloud Platform, a turnkey digital platform, accelerates time to value by simplifying the use of IoT as part of cloud-native app deployment. Included in this portfolio is the Analytic Insights Module, a fully-engineered solution providing self-service data analytics with cloud-native application development in a single hybrid cloud platform, eliminating the months it takes to build your own.

The details of Dell’s range of products, and solutions, can be found here

 

HPE’s Hybrid IT

HPE believes that there are a number of dimensions to dynamic infrastructure. It is estimated that 40-45% of IoT data processing will occur “at the edge” - close to where the sensors and actuators are. This is why they have created their “EdgeLine” range of edge compute devices. HPE calls this the first dimension of Hybrid IT - getting the right mix of edge and core compute.

While “real-time” processing of IoT data will occur both at the edge and at the core, the “deep analytics” a digital world requires, like design simulations and deep learning, may need specialised computers because Moore’s law is running out of steam. HPE believes another dimension to Hybrid IT is the mix of conventional versus specialised compute. HPE’s specialised compute includes its SuperDome and SGI ranges.

Digitization is forcing a change in the architecture of applications. Gone are the three-tier, web client to app server to database applications. These are replaced by application and service meshes: meshes of services that applications can call. This is why micro-services and containers are becoming so popular (Docker has been downloaded over 4 billion times, for example). HPE built its Synergy servers with this new application architecture in mind:

  • CPU, storage and fabric can be treated as independently scalable resource pools. This scaling can be applied to both physical infrastructure (for containers running directly on top of the hardware) and virtual machines.
  • Infrastructure desired state can be specified in code. This allows the infrastructure on which an application runs to be put under source control with the source code (see the sketch after this list)
  • Because containers carry their required infrastructure specification with them, this specification can be given directly to the Synergy server for provisioning before containers are layered on top
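
To illustrate the "desired state in code" idea from the second bullet, here is a generic sketch (not HPE Synergy's actual template format or API): the specification lives alongside the application source and is reconciled against what is actually provisioned.

```python
# Generic illustration of declaring infrastructure desired state in code and
# reconciling it with observed state; not HPE's actual tooling or data model.
desired = {
    "compute_nodes": 6,
    "storage_tb": 40,
    "fabric_bandwidth_gbps": 25,
}

observed = {
    "compute_nodes": 4,
    "storage_tb": 40,
    "fabric_bandwidth_gbps": 10,
}

def plan_changes(desired: dict, observed: dict) -> dict:
    """Return the adjustments needed to bring observed state up to desired state."""
    return {key: desired[key] - observed.get(key, 0)
            for key in desired
            if desired[key] != observed.get(key)}

print(plan_changes(desired, observed))
# {'compute_nodes': 2, 'fabric_bandwidth_gbps': 15}
```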

Full details on HPE Infrastructure products can be found here.

 

Addendum

A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing. https://en.wikipedia.org/wiki/Distributed_computing

A Smart System is a distributed, collaborative group of connected Devices and Services that react to a continuous dynamic changing condition by invoking individual, or groups, of Smart Services to deliver optimized outcomes. The term originated in industrial automation and therefore the current Wikipedia definition seems somewhat limited in its scope when compared to the wider IoT use of the term.


Exposure of Australian Officials' Private Phone Numbers Highlights Security's 'Human Error' Factor

Constellation Insights

Earlier this month, it emerged that a major Amazon Web Services outage was caused by an engineer making a typo while debugging a system. While not the same thing, the accidental exposure of hundreds of Australian politicians and staffers' private mobile phone numbers serves as another reminder that when it comes to security, human error can trump any number of technological measures. The Sydney Morning Herald has the details:

The Department of Parliamentary Services failed to properly delete the numbers before it published the most recent round of politicians' phone bills on the Parliament House website, potentially compromising the privacy and security of MPs from cabinet ministers down.

While in previous years the numbers were taken out of the PDF documents altogether, this time it appears the font was merely turned white - meaning they could still be accessed using copy and paste.

The only numbers absent were those of the very top cabinet ministers including Prime Minister Malcolm Turnbull, Treasurer Scott Morrison, Attorney-General George Brandis and a handful of others.

The department has blamed a private contractor, TELCO Management, for the stuff-up. 

DPS officials have since deleted the private numbers after receiving word about them from the newspaper.

"I really wish we were all a bit more self-conscious about this style of error," says Constellation Research VP and principal analyst Steve Wilson. "We have a host of office tools which are incredibly rigid when you think about it. Our computers are wretchedly unforgiving. 

"In this latest case, someone has deleted some sensitive data in a file, or they thought they had deleted it, but no, the data was still there, hidden, and it cropped up again when the file was moved to a public location," Wilson adds. 
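
The white-font failure is easy to demonstrate: text extraction reads the characters stored in the file and ignores how they are rendered, so "invisible" text comes straight back out. A minimal sketch, assuming the pdfminer.six library and a hypothetical file name:

```python
# Why turning a font white is not redaction: extraction ignores rendering colour.
# Requires the pdfminer.six package; the file name is hypothetical.
from pdfminer.high_level import extract_text

text = extract_text("phone_bills.pdf")

# Any white-on-white numbers are still present in the extracted text.
for line in text.splitlines():
    if line.strip():
        print(line)
```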

As it happens, the Australian government is becoming a bit notorious for this kind of thing. Other recent episodes include the release of passport details of 20 or so visiting heads of state, Wilson notes. And worse, the inadvertent publication of names and addresses and other details of 10,000 refugee asylum seekers, many of whom were in personal danger in their countries of origin. "Are we just too laid back down under?" he says.

The truth is that these are the "sorts of mistakes anyone without a master's degree in computing might make," Wilson adds. "Computers are like nitroglycerine. They're kind of safe if you're unnaturally careful in the way you handle them."

Moreover, when correcting a security breach it's crucial to consider other ways compromised data may still be exposed. The website Junkee found that even after the DPS deleted the phone bills, copies of them remained available in Google's cache and the numbers were actually openly visible. They've since been removed from Google's servers.



IBM Delivers 'First Enterprise-Ready' Blockchain Service

Constellation Insights

IBM has claimed it has the lead in the increasingly competitive blockchain and distributed ledger technology market, saying the new release of its IBM Blockchain service is the first "enterprise-ready" one of its kind. 

Like other companies, IBM is basing its blockchain on Hyperledger, the open-source project hosted at the Linux Foundation. Recently, Hyperledger Fabric was promoted from the Foundation's incubator program to "active" status, and the v1.0 release is expected shortly. 

Big Blue is one of the biggest and most active contributors to Hyperledger, having contributed key code to the effort that produced the incubation project. Since then, the diversity of contributors to Hyperledger has reached 45 percent, after starting out with nearly none, according to the Linux Foundation.

IBM made its announcement in conjunction with the Interconnect conference in Las Vegas. IBM's revenue model for its blockchain centers on offering secure hosting and consulting services around the software. Hyperledger isn't the only major effort of its kind; others include R3 and Ethereum. 

It's clear that IBM will emphasize its security expertise as a differentiator, judging from this passage in its press release:

Many think blockchain is an inherently safe technology, but blockchain networks are only as safe as the infrastructures on which they reside. IBM's High Security Business Network offers the world’s most secure Linux infrastructure that integrates security from the hardware up through the software stack.

The network's security measures include defenses for insider attacks; the highest level of customer system isolation, which is crucial for highly regulated industries; the use of secure containers to protect blockchain code; specialized hardware modules for storing cryptographic keys; and extensive log data for auditing purposes. 

In addition, IBM announced a set of governance tools for setting up and overseeing blockchain networks, as well as Fabric Composer, open-source developer tools that the company says can automate tasks that previously could take weeks.

Although opinions vary about whether blockchain is overhyped, IBM's announcement is noteworthy. 

"There has been a wide range of comment as to whether or not blockchain is a viable technology that can be developed to support decentralized, distributed commercial settlements in the digital economy," says Constellation Research VP and principal analyst Andy Mulholland. "Cetainly the requirement is very real, and as a result a lot of startups are offering various forms of small-scale variations. With its approach to its blockchain service, IBM has taken its characteristic role in providing core technology to support the open source community, with the expectation that harnessing many skilled individuals will result in speeding up meaningful progress. It's a bold move, and one to keep a watching eye upon as a core component of the expansion of the digital economy and IoT."



ADP MOTM 2017 Event Report

We had the opportunity to attend ADP’s Meeting of the Minds (MOTM) conference in San Diego, held March 19th to 22nd, 2017, at the Manchester Grand Hyatt. The conference was well attended, with over 1,000 attendees, good partner representation and influencer selection.


So take a look at my musings on the event here: (if the video doesn’t show up, check here)
 
 

No time to watch – here is the 1-2 slide condensation (if the slide doesn’t show up, check here):
 
Want to read on? Always tough to pick the takeaways – but here are my Top 3:

ADP has Momentum – For a long time it seemed as if ADP was stuck and not moving beyond payroll. Four to five years ago it became clear this would be changing, and we reported on the changes from different MOTMs (2014, 2015, 2016). By now it is clear that the investments made in innovation centers, new platforms, a new user experience, big data etc. are paying off. Ten million users of ADP’s mobile app tell a good adoption story, and adding 300k per month is impressive growth. The number of Vantage clients is now north of 550. And the big data investments around the ADP DataCloud are paying off, as ADP can now launch products on top of the platform, like the ADP Pay Equity Explorer at this MOTM (see next).
 
(Image: Rodriguez shares ADP's response to today's workforce challenges)
New Apps powered by ADP DataCloud – Two years ago, ADP launched its big data offering, the ADP DataCloud. The first focus was on not-so-flashy but essential reporting needs for ADP customers. But by now ADP is using its in-depth knowledge of payroll and salary data to build new applications. The newly launched Pay Equity Explorer allows ad hoc analysis of pay disparity across, e.g., gender and ethnicity. Not only is the gap identified; ADP can also pull in performance data (when customers use ADP Talent Management products, my assumption here) and market data. So HR professionals don’t have to spend nights and weekends highlighting and analyzing the issue of pay disparity, and they can correct the issue in a competent and efficient manner. A good example of the benefits vendors can create for their customers once they have moved to a big data platform. 
 
(Image: Camby announces Pay Equity Explorer)
More Services Focus – ADP has always been about services, but compared to the last three MOTMs there was more services messaging this year, starting with CEO Rodriguez and President Flynn, all the way to the product presentation with Ghauri. I asked all of them if something had changed, and all reiterated that not really: customers care about the services. We learnt that ADP is opening three more service centers, so the vendor is certainly doubling down in this direction. 
 
(Image: Demo of ADP Pay Equity Explorer)
The TMBC Opportunity – ADP (surprisingly) acquired The Marcus Buckingham Company in January (see here for my commentary). Buckingham was on hand with a passionate keynote on addressing the performance management malaise in the overall client base by empowering team leaders (no surprise: TMBC StandOut was built for that). With TMBC StandOut, ADP has a unique talent management offering (from a product standpoint) that it needs to position right in the crowded engagement, survey, performance management etc. market. It’s only been two months, but this is a key area to watch; stay tuned. 
 
(Image: Buckingham on Performance)

MyPOV

A good MOTM for ADP, moving in the right direction on many fronts. Customers are adopting new and old solutions quickly, an encouraging sign. No surprise, the high ground for ADP remains around payroll and related offerings, like the mobile application and pay disparity analytics, which are or will be of high interest in the customer base. At the same time, the adoption of these products propels customers to the new ADP platforms, so behind the scenes there is some heavy lifting happening. The fact that there is no noise and no reports of any issues around these behind-the-scenes migrations is a major accomplishment by ADP as a vendor, and of course of substantial value for its customers. The TMBC acquisition is a new opportunity for ADP for differentiation, but we have to stay tuned on how ADP will position StandOut.

On the concern side, ADP may not be moving fast enough. Payroll is super sticky; the average attendance at MOTM was 14 years, and few vendors can show that. But ADP should move aggressively into domains like Machine Learning (it certainly can, given it has a big data cloud), innovate on user experience (e.g. speech as the new UI) and sell more Talent Management to customers. To be fair, I left MOTM on Monday, so stay tuned for more messages and announcements that may address these areas.

But for now, life is good for ADP customers; what a difference five years can make. Stay tuned for what’s ahead.

Want to learn more? Checkout the Storify collection below (if it doesn’t show up – check here).

Find more coverage on the Constellation Research website here and check out my magazine on Flipboard and my YouTube channel here.

Interaction: GroupM’s Take on Digital

 

I am always interested to see how different lenses on the same subject reveal insights. For example, B2B marketers have a particular skew when it comes to digital and social media – it is hard-edged, data-driven and technology-enabled. This is particularly true for large-scale tech companies – but is an approach that has been resonating across industries for some time. B2C marketing, on the other hand, operates in a high-velocity world that can turn on a tweet – responsiveness is no longer just a customer service issue but one that impacts the entire value chain.

We are, after all, closer to our customers than ever before.

Social and digital media, however, often feels like it operates in a bubble. An ever-increasing bubble it seems, but a bubble nonetheless. When I watch Gruen, for example, I struggle to recall even the most popular or widely discussed TV commercials shown – my habits have now been so deeply skewed by on-demand viewing and timeshifting that TV by timetable seems so last century.

But this is merely the bubble that we choose. The lens that we select.

And there are movements and trends that continue in their own parallel universe that operate at different speeds.

The GroupM Interaction 2017 report is interesting particularly because it applies a media lens across everything from ecommerce to fake news, television to bandwidth. I particularly like the section on privacy and the impacts that widespread security breaches are having on consumers’ sense of trust.

The report identifies four creative challenges facing both brands and agencies:

  1. Getting the attention of the consumer in a low attention world. As the buyer pushes the seller towards viewability, the consumer is pushing the brand to greater ‘watchability.’
  2. Meeting the costs and measurement implications of the constant iterations of formats and functionality.
  3. Finding the balance of enough variation to meet the needs of ever finer segments without undermining the overall brand proposition. (The Marriott Hotel Bogota has 57 images on Expedia.com. Marriott / Starwood operates over 7000 properties. That’s a lot of images.)
  4. The creation of new classes of content for e commerce environments.

While I can agree on the surface with these challenges, I wonder really whether our attention spans truly are shrinking – do we really have the attention span of a goldfish? And if this is not true, what does this mean for the remaining three challenges?

I have a sense that we are consuming ever-larger volumes of media each and every day – but it’s not necessarily in the format and channel that lends itself to the kind of tracking and measurement that business clients have come to expect.

A recent article from BBC Health questions the notion of the shrinking attention span by unearthing the starting point for this theory – a Microsoft report referencing the Statistic Brain website. Apparently there is no evidence pointing towards a shrinking attention span, nor support for the widely held view that goldfish have attention spans. In fact, Dr Gemma Briggs from Open University suggests that attention is entirely contextual – ”How we apply our attention to different tasks depends very much about what the individual brings to that situation”.

And that brings us back to the question of lenses and touches on the topic of Fake News – a subject also covered in the GroupM report. One of the suggestions in the report points towards the emergence of a “purpose driven media” and an incentive structure created to drive this:

The most shared and most monetized stories come from authentic news sources. A way of decreasing the incentives to the bad guys is to increase the incentives to the good guys. A simple adjustment in the revenue sharing model would go a long way.

And that’s where the future of media becomes extremely interesting. Given the emergence of organisations like Sleeping Giants, a purpose driven media may be a necessary development to help restore trust, authenticity and – dare I say it – respect in the media and advertising industry.

Download the Interaction 2017 report here.

New York SportsBiz Networking Event on May 4th
