
Domo Climbs Enterprise Ladder in Cloud Business Intelligence


Domo has graduated from analytics startup to enterprise contender, breaking new ground in cloud-scale deployments. Here’s a look inside the fast-growing company.

Domo is enterprise ready. That’s the key takeaway Domo wanted to project at Domopalooza, held March 22-23 in Salt Lake City. The event drew more than 3,000 attendees and saw keynote appearances from enterprise-scale customers including Target, GE Digital, UnitedHealth Group and Univision.

Domo has surpassed the 1,000-customer mark, and more than half of its revenue now comes from enterprise customers with more than $1 billion in revenue, according to company executives. At Domopalooza, CEO and founder Josh James announced that the company has reached a $120-million annual-revenue run rate. That’s a fraction of the $827 million in revenue rival Tableau reported in 2016 and a rounding error compared to Microsoft’s revenue (although Microsoft doesn’t break out revenue for Power BI, the product Domo actually competes with). Nonetheless, given Domo’s claimed 100% growth rate and the list of enthused customers at Domopalooza, it’s time for a closer look.

Domo CEO Josh James focused mostly on interviewing big customers during
his keynote time at Domopalooza 2017.

Previously co-founder and CEO of Omniture, James founded Domo in 2010 shortly after selling his old company to Adobe for $1.8 billion. The Domo executive team is loaded with Omniture veterans, and they tell the story that James came up with the idea for Domo because he was so frustrated with the incumbent tools available for business insight when he was the CEO at Omniture. The intent was to build an agile, cloud-based analytics platform in the mold of Omniture, but designed to handle diverse business data sources (beyond the Web, mobile and social data analyzed in Omniture).

Domo runs in Amazon’s cloud, and it includes components to capture, prepare and visualize data and then engage in collaboration and optimize business decisions. The platform’s back-end data warehouse, the Domo Business Cloud, scales up at cloud speed and handles diverse data sources, including semi-structured and sparse data. At Domopalooza the company announced that its data store has surpassed 26 petabytes, making it the largest cloud-based analytical data store of its kind, according to James.

Domo’s largest customer, Target, spoke to the platform’s scalability during a keynote interview. Target loads data on every item and every transaction from about 2,000 stores into the Domo Business Cloud in 15-minute increments, explained Ben Schein, Director of BI & Analytics. Where store-level reporting was previously updated once a week, Schein said Domo brings near-real-time insight into store operations, purchasing and stocking trends to 1,500 to 1,700 users per week. James acknowledged that Target helped Domo learn how to scale, harden and mature its platform.

To capture data, Domo has created more than 400 pre-built connectors to popular data sources. A data-transformation tool called Magic lets users join and blend data through a drag-and-drop interface. The company partners with data-integration vendors like Informatica and Talend to support more sophisticated ETL work.

Domo’s front-end data-analysis environment combines pages, cards and applications. Pages are analogous to dashboards, and cards are individual visual analyses. Pages and cards are mobile first, meaning you build them once and they dynamically render for phone, tablet or desktop viewing. The company offers more than 1,000 applications, which are pre-built but customizable visual analyses, such as a Social Index app for benchmarking brand popularity and net promoter scores or the Sales Forecast app, which measures predictions against actuals and quotas, with drill-down analysis of rep and manager performance. There’s also a SQL-like “Beast Mode” that enables power users to develop custom transformations and analyses.

Ease of deployment and administration are big selling points. The back end is entirely managed by Domo. When you add more data or more challenging analyses, Domo adds storage and compute nodes automatically. Pricing is based entirely on the number of users, not storage or compute capacity. The pricing model is designed to encourage customers to load more data and build more cards and pages.

Agile analysis is another selling point. At the event, a GE Digital executive showed off a company-wide performance dashboard she built “in one day,” complete with slick, graphical formatting created through an Adobe Illustrator plug-in to Domo. A merchandising executive from Target described how her team reviewed a prototype dashboard in the morning and got back a revised version with all requested changes by the end of that day. And an executive from Sephora Southeast Asia said her company got started with Domo late last fall and had a dashboard available within two weeks, just in time for Black Friday performance analysis.

Domopalooza saw four key product announcements:

Analyzer upgrades. The Analyzer is where users do their slicing, dicing and page and card building. Top announcements here included a Data Lineage Inspector that shows where the data used in an analysis comes from and how it was transformed or altered. Data “slicer” buttons can now be added to cards to support guided analysis to the most sought-after views of data. And a new period-over-period analysis feature supports time-based comparisons that previously required Beast Mode customization.

Business-in-a-Box. This collection of pre-built, role-based dashboards is designed to support rapid delivery of the most-asked-for insights across sales, marketing, finance, operations, IT and other business functions. It’s set for release this spring.

Domo Everywhere. Also due this spring, Domo Everywhere is the company’s entry into embedded analytics, white-label licensing and publishing. The offering provides ways for customers to make Domo analytics available within their own software, through Web services or on websites under their own brand.

Mr. Roboto. Attendees got a sneak peek at a few of the advanced analytics, machine learning and natural language understanding capabilities of this offering. I was told it will be a layer of capabilities  within the platform, not a bolt-on module. Release dates weren’t offered, so it’s not something I expect to see fleshed out until late 2017 or perhaps Domopalooza 2018.


Domo customers shared wished-for feature ideas during Domopalooza’s
open-mike closing session. Audience members raised hands (and Domo
execs guessed percentages) to express their interest in each feature.

MyPOV on Domo’s Course  

I came away from Domopalooza impressed by the scale of Domo’s largest deployments and the enthusiasm of its customers. A highlight of the event was the closing general session, during which Domo previewed coming new features and then turned the mike over to customers to share feature requests. Each request was briefly discussed in a back-and-forth with Domo executives. The request was then listed on a slide (see photo above) for all to see and the audience was then asked to show their interest by a show of hands (or clapping or hoots and hollers). I’ve seen these sorts of sessions at other events, but you don’t see companies with poor customer satisfaction doing it for fear of initiating a bitch fest.

The turning point that Domo is now navigating is the same one that the likes of Tableau and Qlik ran into a few years ago, namely enterprise-grade maturity. Domo execs acknowledged that they’re now facing demands from IT for governance features and administrative capabilities for managing many users.

The Data Lineage Inspector, for example, is just a start on the data governance capabilities customers want. During the closing general session a customer asked for a card-certification feature whereby analyses could have a visual check mark or seal of approval indicating certified status. The indicator would automatically change if data sources or analyses were altered. Domo execs said they are working on such a governance scheme, and by a show of hands there was keen interest (approximated at 91% by Domo's chief product officer and session leader, Catherine Wong).

Domo offers three hybrid deployment options for large or regulated customers that don’t want to put everything in the public cloud. A brand-new Federated Query feature will let customers query data in place, but performance depends on the bandwidth of the connections and the compute power of each source. A second option lets companies put the Domo Business Cloud data layer behind the corporate firewall while leaving the analysis layer in the cloud. A third option is running the entire Domo platform as a dedicated instance on AWS or Azure or as a private cloud instance behind a corporate firewall.

Domo is facing these enterprise challenges earlier in its lifecycle than did some of its competitors. That’s partly due to the maturation of the market and partly due to the experience of the Domo team rooted in Omniture. The upshot is that Domo is maturing quickly and punching above its actual weight.



Down Report – Power failure takes Azure services down - 3 Cloud Load Toads


 
We continue our series on IaaS downtime and other availability issues; see our Down Report on the recent AWS downtime here.

 
 
 

Kudos to Microsoft for sharing the issue, customer impact, workaround, root cause, mitigation and next steps on the Azure Status History page (see here).
 
 
So let’s dissect the information available in our customary style:
RCA - Storage Availability in East US
Summary of impact: Beginning at 22:19 UTC Mar 15 2017, due to a power event, a subset of customers using Storage in the East US region may have experienced errors and timeouts while accessing storage accounts or resources dependent upon the impacted Storage scale unit. As a part of standard monitoring, Azure engineering received alerts for availability drops for a single East US Storage scale unit. Additionally, data center facility teams received power supply failure alerts which were impacting a limited portion of the East US region. Facility teams engaged electrical engineers who were able to isolate the area of the incident and restored power to critical infrastructure and systems. Power was restored using safe power recovery procedures, one rack at time, to maintain data integrity. Infrastructure services started recovery around 0:42 UTC Mar 16 2017. 25% of impacted racks had been recovered at 02:53 UTC Mar 16 2017. Software Load Balancing (SLB) services were able to establish a quorum at 05:03 UTC Mar 16 2017. At that moment, approximately 90% of impacted racks were powered on successfully and recovered. Storage and all storage dependent services recovered successfully by 08:32 UTC Mar 16 2017. Azure team notified customers who had experienced residual impacts with Virtual Machines after mitigation to assist with recovery.

MyPOV – Good summary of what happened: a power failure / power event. Good to see that customers were notified. Power events can always be tricky to recover from, and it looks like Azure management erred on the side of caution, bringing up services rack by rack and then adding services like SLB later. But the downtime for affected customers was long: best case, for customers in the first 25% of recovered racks, about four and a half hours, and worst case more than 10 hours. It is remarkable that it took Azure technicians roughly 2 hours and 20 minutes to get the power back. Microsoft needs to (and says it will) review power restore capabilities and find ways to bring storage back quicker. Luckily for customers and Microsoft this happened overnight, with possibly less effect on customers, but that said we don’t know what kind of load was running on the infrastructure.

Rating: 3 Cloud Load Toads


 
Customer impact: A subset of customers using Storage in the East US region may have experienced errors and timeouts while accessing their storage account in a single Storage scale unit. Virtual Machines with VHDs hosted in this scale unit shutdown as expected during this incident and had to restart at recovery. Customers may have also experienced the following:
- Azure SQL Database: approx. 1.5% customers in East US region may have seen failures while accessing SQL Database.
- Azure Redis Cache: approx. 5% of the caches in this region experienced availability loss.
- Event Hub: approx. 1.1% of customers in East US region have experienced intermittent unavailability.
- Service Bus: this incident affected the Premium SKU of Service Bus messaging service. 0.8% of Service Bus premium messaging resources (queues, topics) in the East US region were intermittently unavailable.
- Azure Search: approx. 9 % of customers in East US region have experienced unavailability. We are working on making Azure Search services to be resilient to help continue serving without interruptions at this sort of incident in future.
- Azure Site Recovery: approx. 1% of customers in East US region have experienced that their Site Recovery jobs were stuck in restarting state and eventually failed. Azure Site Recovery engineering started these jobs manually after the incident mitigation.
- Azure Backup: Backup operation would have failed during the incident, after the mitigation the next cycle of backup for their Virtual Machine(s) will start automatically at the scheduled time.

MyPOV – Kudos to Microsoft for giving insight into the percentage of customers affected. It looks like Azure Storage scale units carry mixed loads across Azure services. That has pros and cons (e.g. co-location of customer data, averaged mixed load profiles), but it also means that a lot of services are affected when a storage unit goes down.

Rating: 2 Cloud Load Toads

 
 
Workaround: Virtual Machines using Managed Disks in an Availability Set would have maintained availability during this incident. For further information around Managed Disks, please visit the following sites. For Managed Disks Overview, please visit https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview. For information around how to migrate to Managed Disks, please visit: https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-migrate-to-managed-disks.
- Azure Redis Cache: although caches are region sensitive for latency and throughput, pointing applications to Redis Cache in another region could have provided business continuity.
- Azure SQL database: customers who had SQL Database configured with active geo-replication could have reduced downtime by performing failover to geo-secondary. This would have caused a loss of less than 5 seconds of transactions. Another workaround is to perform a geo-restore, with loss of less than 5 minutes of transactions. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/ for more information on these capabilities.

MyPOV – Good to see Microsoft explaining how customers could have avoided the downtime. But the Managed Disk option only applies to VMs affected by the storage outage. Good to see the Redis Cache option; the question, though, is how efficient (and costly) that would have been, since cache synching is chatty and therefore expensive. More importantly, it is good to see the Azure SQL option, which is key for any transactional database system that needs higher availability. Again, enterprises will have to balance costs and benefits.

Of more concern is that the other four services affected by the outage seem to have no Azure-provided workaround, should customers need one and decide to implement (and pay for) it. No workaround for Event Hub and Service Bus is not a good situation, especially since event and bus infrastructures are used to make systems more resilient. Azure Search seems to lack a workaround, too, affecting customers using that service. It’s not clear what the statistic means, though: Was Search itself unavailable, or could the information on the affected storage units not be searched? That’s an important distinction. The Azure Site Recovery impact isn’t good either, but kudos to Microsoft for starting those jobs manually. Still, manual starts can only be a stopgap, as they don’t scale, e.g. in a larger outage. The failure of Azure Backup is probably the least severe, but in a power failure that is not contained and cascades it could be just as serious, as customers lose the backup capability that protects them from potential further outages.

Rating: 2 Cloud Load Toads (with a workaround it would be 1, with no workaround 3; as we don’t have full clarity here, we use 2, the average).
 
 
Root cause and mitigation: Initial investigation revealed that one of the redundant upstream remote power panels for this storage scale unit experienced a main breaker trip. This was followed by a cascading power interruption as load transferred to remaining sources resulting in power loss to the scale unit including all server racks and the network rack. Data center electricians restored power to the affected infrastructure. A thorough health check was completed after the power was restored, and any suspect or failed components were replaced and isolated. Suspect and failed components are being sent for analysis.

MyPOV – It is always ironic how a cheap breaker can affect a lot of business. I am not a power specialist / electrician, but reading this, if one redundant power panel fails and load has to be transferred, the system should keep operating. Maybe something was overlooked in the redundant design versus the remaining throughput capacity; not a good place to be.

Rating: 5 Cloud Load Toads
 
 
Next steps: We are continuously taking steps to improve the Microsoft Azure Platform and our processes to help ensure such incidents do not occur in the future, and in this case it includes (but is not limited to):
- The failed rack power distribution units are being sent off for analysis. Root cause analysis continues with site operations, facility engineers, and equipment manufacturers.
- To further mitigate risk of reoccurrence, site operations teams are evacuating the servers to perform deep Root Cause Analysis to understand the issue
- Review Azure services that were impacted by this incident to help tolerate this sort of incidents to serve services with minimum disruptions by maintaining services resources across multiple scale units or implementing geo-strategy.

MyPOV – Kudos for the hands-on next steps. The key question (which I am sure Microsoft is asking) is: how many other storage power units, or Azure power units overall, may have the same issue, and when will they be fixed and given the right capacity and redundancy so this event cannot repeat? And then there is the question of standardization: is this an issue of local knowledge, are other data centers set up differently (or the same), and can the same incident be avoided with higher certainty elsewhere?

Out of curiosity: there was another event in Storage provisioning, a software defect, only 37 minutes before (you can find it on the Azure status page, right below the above incident), and these two events may have been connected. The potential connection is obvious: when there is a storage failure in one location, customers (and IaaS technicians) may scramble to open storage accounts at the same or other locations; if they cannot, needed ad hoc remediation and workarounds cannot happen. There may or may not be a connection. But when hardware goes down along with the software used to manage accounts for that hardware, that is an unfortunate, and hopefully highly unlikely, combination of events.

 

(Luckily) a mostly minor event

Unless you were an affected party, this was a minor cloud down event. And it was lucky that it stayed minor, as power failures can quickly propagate and create cascading effects. Unfortunately, for some of the services there is no easy workaround, or none at all: when they go down, they are down. Apart from Microsoft's lessons learned, this is the larger concern going forward. I count a total of 12 toads, averaging 3 Cloud Load Toads for this event.
 
 

Lessons for Cloud Customers

Here are the key aspects for customers to learn from the Azure outage:

Have you built for resilience? Sure, it costs, but all major IaaS providers offer strategies to avoid single-location / data-center failures. Way too many prominent internet properties chose not to do so; if ‘born on the web’ properties miss this, it’s key to check that regular enterprises do not miss it. Uptime has a price; make it a rational decision. Now is a good time to get budget / investment approved, where warranted and needed.

Ask your IaaS vendor a few questions: Enterprises should not be shy about asking IaaS providers the following:
  • How do you test your power system equipment?
  • How much redundancy is in the power system?
  • What are the single points of failure in the data center being used?
  • When have you last tested / taken offline components of the power system?
  • How do you make sure your power infrastructure remains adequate as you put more load through it (assuming the data center becomes more utilized)?
  • What is the expected time to be back up in case of a power failure?
  • How can we code for resilience – and what does it cost?
  • What kind of remuneration / payment / cost relief can be expected with a downtime?
  • What other single point of failure should we be aware of?
  • How do you communicate in a downtime situation with customers? 
  • How often and when do you refresh your older datacenters, power infrastructure / servers?
  • How often have you reviewed and improved your operational procedures in the last 12 months? Give us a few examples of how you have increased resilience.

And some key internal questions, customers of IaaS vendors have to ask themselves:
  • How, and how often, do you test your power infrastructure?
  • How do you ensure your power infrastructure keeps up with demand / utilization?
  • How do you communicate with customers in case of power failure?
  • How do you determine which systems to bring up and when?
  • How do you isolate power failures, and at what level, to minimize downtime?
  • Make sure to learn from the recent AWS and Microsoft mistakes: what is your exposure to the same kind of event?


Overall MyPOV

Power failures are always tricky. IT is full of anecdotes of backup power supplies not starting, even in formal tests. But IaaS vendors need to do better and learn from what went wrong with Azure. There may be a commonality with the recent AWS downtime: IaaS vendors can become victims of their own success. AWS saw more usage of S3 systems, and Microsoft may have seen more utilization of the servers attached to the failing power setup. Meanwhile, CAPEX demands flow into opening new data centers rather than refreshing and upgrading older ones.
 
There is learning all around for all participants: customers using IaaS services and IaaS providers alike. Redundancy always comes at a cost, and the tradeoff regarding how much redundancy an enterprise and an IaaS provider want and need will differ from use case to use case. The key point is that redundancy options exist, that tradeoffs are made with full awareness of the repercussions, and that they get revisited on a regular basis.
 
Ironically, over the next few years, more minor IaaS failures like this one can push cloud resiliency up to the level where it should be, for both IaaS vendors and IaaS-consuming enterprises, as long as all keep learning and then acting appropriately.
 
 
 

 

 

 


Stanford, MIT Researchers Develop System for Private Web Queries


Constellation Insights

There have long been options for users seeking more privacy as they browse the web, from the anti-tracking search engine DuckDuckGo to the Tor secure browser. Now teams of researchers from Stanford and MIT have developed a system they say can enable users to make website database queries—such as to look up flights or find Yelp reviews—in anonymity.

This is important because website queries can reveal a great deal of information about a visitor, as the paper's lead author noted to MIT's news service:

“The canonical example behind this line of work was public patent databases,” says Frank Wang, an MIT graduate student in electrical engineering and computer science and first author on the conference paper. “When people were searching for certain kinds of patents, they gave away the research they were working on. Stock prices is another example: A lot of the time, when you search for stock quotes, it gives away information about what stocks you’re going to buy. Another example is maps: When you’re searching for where you are and where you’re going to go, it reveals a wealth of information about you.”

Wang and his co-authors will present the system in a paper this week at the USENIX Symposium on Networked Systems Design and Implementation. 

The system is called Splinter, an aptly chosen name given how it is architected. Splinter presents the user with a client that splits queries into shares and sends them to different servers hosting the same database. Splinter then combines the results and returns them to the user. The scheme preserves the user's privacy as long as at least one server is trustworthy, according to the paper.

Splinter isn't the first idea of its kind, of course, but it promises much better performance and faster results through the use of a recently developed cryptographic primitive, Function Secret Sharing (FSS), as the paper notes:

For example, systems based on Private Information Retrieval ... require many round trips and high bandwidth for complex queries, while systems based on garbled circuits have a high computational cost. These approaches are especially costly for mobile clients on high-latency networks.

FSS is up to an order of magnitude quicker than previously developed systems and can often answer queries with only one network roundtrip, the paper adds.
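To make the query-splitting idea concrete, here is a toy Python sketch. It uses simple additive secret sharing over a replicated key-value table rather than the Function Secret Sharing primitive Splinter actually relies on, and the database contents, row ids and two-server setup are made up for illustration.

```python
# Toy illustration of splitting a query into shares (not Splinter's real FSS).
# Each server holds the same database and evaluates its share of a selector,
# so no single server learns which record the client asked about.
import secrets

PRIME = 2**61 - 1  # work modulo a prime so individual shares look random

def share(value, n_servers=2):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    parts = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

# Hypothetical replicated table: row id -> rating
DATABASE = {101: 4, 102: 5, 103: 3}

def client_build_query(target_id, n_servers=2):
    """Secret-share a 0/1 selector per row; it is 1 only for the target row."""
    per_server = [{} for _ in range(n_servers)]
    for row_id in DATABASE:
        for srv, s in enumerate(share(int(row_id == target_id), n_servers)):
            per_server[srv][row_id] = s
    return per_server

def server_answer(selector_shares):
    """Each server returns sum(share * value) without learning the target."""
    return sum(s * DATABASE[row_id] for row_id, s in selector_shares.items()) % PRIME

def client_combine(answers):
    return sum(answers) % PRIME

shares_for_servers = client_build_query(target_id=102)
answers = [server_answer(q) for q in shares_for_servers]
print(client_combine(answers))  # 5: row 102's rating, with neither server seeing "102"
```

Real FSS compresses those per-row selector shares into compact function shares, which is where Splinter's bandwidth and round-trip savings come from.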

The researchers tested Splinter using an academic dataset from Yelp, a public flight database and a public traffic database from New York City, and achieved response times of no more than 1.6 seconds across all three applications.

Overall, Constellation sees Splinter as a welcome tool for end users in an age when their personal data is increasingly mined for commercial gain without enough transparency or returned value. Still, the broad data ecosystem a service like Splinter will need in order to be relevant, along with its commercial viability, seems a bit far off. MIT's Wang offered this somewhat optimistic prediction:

“We see a shift toward people wanting private queries,” Wang says. “We can imagine a model in which other services scrape a travel site, and maybe they volunteer to host the information for you, or maybe you subscribe to them. Or maybe in the future, travel sites realize that these services are becoming more popular and they volunteer the data. But right now, we’re trusting that third-party sites have adequate protections, and with Splinter we try to make that more of a guarantee.”

MIT and Stanford's work appears to be very innovative, says Constellation Research VP and principal analyst Steve Wilson. "It's great to see new twists on Secret Sharing as a class of security techniques," he says. "Some of these things are provably secure in a mathematical sense, which is super valuable these days."

However, "I can't help but express some cautions," Wilson adds. "They call this a privacy solution, but really it's a secrecy solution. It stops people seeing what you're up to; it keeps your affairs hidden, but at some point you need to reveal yourself, and that's when true privacy kicks in. You need protection against misuse of your personally identifying information when someone has it.

"So in this case, there will be a splinter server—a point at which your database query gets splintered, farmed out, and the responses reassembled," he continues. "Users have to trust the splinter server to not abuse their personal information."

At this stage, "Splinter may end up becoming freeware, a gift from academia, but is it sustainable?" Wilson says. It could be very compute-intensive to run, although the researchers said their tests using Amazon Web Services found the costs to be fairly nominal.

Still, who pays? "The question of whether consumers will pay for privacy protection is vexed," Wilson says. "Consumers are usually shown to be unwilling to pay much of a premium for privacy preserving services."

The Bottom Line

"Privacy services which insert themselves into the information supply chain like this are a bit like bodyguards," Wilson says. "Perfectly understandable, but you cannot imagine a real-life situation where there is so much crime going on that everyone is encouraged to get a bodyguard. No, privacy is a public good, we all need it, it needs to be systemic, and not remedial in nature."



UK Terror Attacks Revive Encryption Backdoor Debate, But the Debate Is Changing


Constellation Insights

Last week's UK terror attack in London left more than 50 people injured and four dead. The attack shocked the world, not least because it was committed not with a sophisticated weapon but by a single man with a car and a knife. The attacker, Khalid Masood, was shot dead by police, but his methods won't soon be forgotten.

It has emerged that Masood connected to the popular messaging service WhatsApp just two minutes before the attack. Like other apps such as Signal, WhatsApp uses end-to-end encryption to secure messages. UK Home Secretary Amber Rudd has renewed calls for tech companies to create backdoors into their products in order to aid law enforcement agencies investigating crimes. In remarks to the BBC, Rudd said:

We need to make sure that organisations like WhatsApp, and there are plenty of others like that, don't provide a secret place for terrorists to communicate with each other.

It used to be that people would steam open envelopes or just listen in on phones when they wanted to find out what people were doing, legally, through warrantry.

But in this situation we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp.

Rudd said she planned to meet with technology companies to make her case. WhatsApp said it is cooperating with authorities.

At the same time authorities are seeking ways into encrypted services, a fresh privacy promise is spreading throughout Silicon Valley. It's best summed up by the statement "We can't see your data," says Constellation Research VP and principal analyst Steve Wilson. This idea, that messaging or storage providers could not access or decrypt a customer’s data even if they wanted to, was popularized by Apple in its dispute with the FBI.

The theme recurred throughout IBM's Interconnect event last week, Wilson notes. "For one thing, there is a strong move to pervasive encryption of data both in motion and at rest, with encryption keys controlled by the client," he says. "Under these arrangements, even if a warrant is served on a cloud provider like IBM, they might not be able to furnish copies of client data without the client’s permission."

IBM’s new Blockchain as a Service is premised on the same principles, he adds. "I haven’t seen such a focus on cryptography standards and certification for many years." IBM is advocating for FIPS 140 and Common Criteria as benchmarks for cloud security and blockchain operations, while its Bluemix High Security Business Network for the blockchain service has EAL 5+ security certification and FIPS 140 level 3 cryptographic key storage.

"These are the highest levels of security available outside defense departments, which indicates how seriousness IBM is taking encryption," Wilson says. "Clearly this is a doubled-edged sword. Governments should welcome IBM’s and other cloud provider’s security standards, even if the logical consequences are uncomfortable."

IBM also emphasized security containerization as the means for countering insider threats. As Wilson discusses in his research report, "Protecting Distributed Private Ledgers," private blockchains operate with much smaller consensus pools than their big public forebears. "This makes them intrinsically less tamper-resistant," Wilson says. "They also have particular exposure to rogue insiders at the host data centers. Recognizing this, IBM stressed that their private blockchains feature containerized key management, so that even the most trusted systems administrators can’t get at the keys nor the contents of a client’s ledger."

Speakers amplified the point by reminding attendees "that most notorious of all insiders, Edward Snowden, was quite a lowly admin, and look what he got away with," Wilson says. 

"Now IBM didn’t quite put it this way, but in my opinion they could say to their clients, 'Hey, you don’t need to trust us,' insofar as the most critical elements of a client’s hosted system are beyond reach of the operator. As my favorite proverb goes, 'It’s good to trust but it’s better not to.' I think I’m seeing the back of trust."

It remains to be seen what type of consensus law enforcement agencies around the world and the tech industry can come to over data access. What's clear, evidenced by the trends Wilson highlights, is that the privacy debate is getting more complicated all the time.



IBM Blockchain as a Service and Hyperledger Fabric forge a new path


It’s been a big month for blockchain.

  • The Hyperledger consortium released the Fabric platform, a state-of-the-art configurable distributed ledger environment including a policy editor known as Composer.
  • The Enterprise Ethereum Alliance was announced, being a network of businesses and Ethereum experts, aiming to define enterprise-grade software (and evidently adopt business speak).
  • And IBM launched its new Blockchain as a Service at the Interconnect 2017 conference in Las Vegas, where blockchain was almost the defining theme of the event.  A raft of advanced use cases were presented, many of which are now in live pilots around the world.  Examples include shipping, insurance, clinical trials, and the food supply chain.

I attended InterConnect and presented my research on Protecting Private Distributed Ledgers, alongside Paul DiMarzio of IBM and Leanne Kemp from Everledger. 

Disclosure: IBM paid for my travel and accommodation to attend Interconnect 2017.

Ever since the first generation blockchain was launched, applications far bigger and grander than cryptocurrencies have been proposed, but with scarce attention to whether or not these were good uses of the original infrastructure.  I have long been concerned with the gap between what the public blockchain was designed for, and the demands from enterprise applications for third generation blockchains or "Distributed Ledger Technologies" (DLTs).  My research into protecting DLTs  has concentrated on the qualities businesses really need as this new technology evolves.  Do enterprise applications really need “immutability” and massive decentralisation? Are businesses short on something called “trust” that blockchain can deliver?  Or are the requirements actually different from what we’ve been led to believe, and if so, what are the implications for security and service delivery? I have found the following:

In more complex private (or permissioned) DLT applications, the interactions between security layers and the underlying consensus algorithm are subtle, and great care is needed to manage side effects. Indeed, security needs to be rethought from the ground up, with key management for encryption and access control matched to often new consensus methods appropriate to the business application. 

At InterConnect, IBM announced their Blockchain as a Service, running on the “Bluemix High security business network”.  IBM have re-thought security from the ground up.  In fact, working in the Hyperledger consortium, they have re-engineered the whole ledger proposition. 

And now I see a distinct shift in the expectations of blockchain and the words we will use to describe it.

For starters, third generation DLTs are not necessarily highly distributed. Let's face it, decentralization was always more about politics than security; the blockchain's originators were expressly anti-authoritarian, and many of its proponents still are. But a private ledger does not have to run on thousands of computers to achieve the security objectives.  Further, new DLTs certainly won't be public (R3 has been very clear about this too – confidentiality is normal in business but was never a consideration in the Bitcoin world).  This leads to a cascade of implications, which IBM and others have followed. 

When business requires confidentiality and permissions, there must be centralised administration of user keys and user registration, and that leaves the pure blockchain philosophy in the shade. So now the defining characteristics shift from distributed to concentrated.  To maintain a promise of immutability when you don't have thousands of peer-to-peer nodes requires a different security model, with hardware-protected keys, high-grade hosting, high availability, and special attention to insider threats. So IBM's private blockchains run on the Hyperledger Fabric, hosted on z System mainframes.  They employ cryptographic modules certified to Common Criteria EAL 5-plus and others that are designed to FIPS-140 level 4 (with certification underway). These are the highest levels of security certification available outside the military. Note carefully that this isn't specmanship.  With the public blockchain, the security of nodes shouldn't matter because the swarm, in theory, takes care of rogue miners and compromised machines. But the game changes when a ledger is more concentrated than distributed.

Now, high-grade cryptography will become table stakes. In my mind, the really big thing happening here is that Hyperledger and IBM are evolving what blockchain is really for.

The famous properties of the original blockchain – immutability, decentralisation, transparency, freedom and trustlessness – came tightly bundled, expressly for the purpose of running peer-to-peer cryptocurrency.  It really was a one dimensional proposition; consensus in particular was all about the one thing that matters in e-cash: the uniqueness of each currency movement, to prevent Double Spend.

But most other business is much more complex than that.  If a group of companies comes together around a trade manifest for example, or a clinical trial, where there are multiple time-sensitive inputs coming from different types of participant, then what are they trying to reach consensus about?

The answer acknowledged by Hyperledger is "it depends". So they have broken down the idealistic public blockchain and seen the need for "pluggable policy".  Different private blockchains are going to have different rules and will concern themselves with different properties of the shared data.  And they will have different sub-sets of users participating in transactions, rather than everyone in the community voting on every single ledger entry (as is the case with Ethereum and Bitcoin).

These are exciting and timely developments.  While the first blockchain was inspirational, it’s being superseded now by far more flexible infrastructure to meet more sophisticated objectives.  I see us moving away from “ledgers” towards multi-dimensional constructs for planning and tracing complex deals between dynamic consortia, where everyone can be sure they have exactly the same picture of what’s going on. 

In another blog to come, I’ll look at the new language and concepts being used in Hyperledger Fabric, for finer grained control over the state of shared critical data, and the new wave of applications. 

 


Cloudera Focuses Message, Takes Fifth On Pending Moves


Cloudera executives can’t talk about IPO or cloud-services rumors. Here’s what’s on the record from the Cloudera Analyst Conference.

There were a few elephants in the room at the March 21-22 Cloudera Analyst Conference in San Francisco. But between a blanket “no comment” about IPO rumors and non-disclosure demands around cloud plans -- even whether such plans exist, or not -- Cloudera execs managed to dance around two of those elephants.

The third elephant was, of course, Hadoop, which seems to be going through the proverbial trough of disillusionment. Some are stoking fear, uncertainty and doubt about the future of Hadoop. Signs of the herd shifting the focus off Hadoop include Cloudera and O’Reilly changing the name of Strata + Hadoop World to Strata Data. Even open-source zealot Hortonworks has rebranded its Hadoop Summit as DataWorks Summit, reflecting that company’s diversification into streaming data with its Apache NiFi-based Hortonworks DataFlow platform.

Mike Olson, Cloudera's chief strategy officer, positions the company as a major vendor
of enterprise data platforms based on open-source innovation.

At the Cloudera Analyst Conference, Chief Strategy Officer Mike Olson said that he couldn’t wait for the day when people would stop describing his company as “a Hadoop software distributor” mentioned in the same breath with Hortonworks and MapR. Instead, Olson positioned the company as a major vendor of enterprise data platforms based on open-source innovation.

MapReduce (which is fading away), HDFS and other Hadoop components are outnumbered by other next-generation, open-source data management technologies, Olson said, and he noted that there are some customers who are just using Cloudera’s distributed and supported Apache Spark on top of Amazon S3, without using any components of Hadoop.

Cloudera has recast its messaging accordingly. Where years ago the company’s platform diagrams detailed the many open source components inside (currently about 26), Cloudera now presents a simplified diagram of three use-case-focused deployment options (shown below), all of which are built on the same “unified” platform.

Cloudera-developed Apache Impala is a centerpiece of the Analytic DB offering, and it competes with everything from Netezza and Greenplum to cloud-only high-scale analytic databases like Amazon Redshift and Snowflake. HBase is the centerpiece of the Operational DB offering, a high-scale alternative to DB2 and Oracle Database on the one hand and Cassandra, MapR and MemSQL on the other. The Data Science & Engineering option handles data transformation at scale as well as advanced, predictive analysis and machine learning.

Many companies start out with these lower-cost, focused deployment options, which were introduced last year. But 70% to 75% of customers opt for Cloudera’s all-inclusive Enterprise Data Hub license, according to CEO Tom Reilly. You can expect that if and when Cloudera introduces its own cloud services, it will offer focused deployment options that can be launched, quickly scaled and just as quickly turned off, taking advantage of cloud economies and elasticity.

Navigating around the non-disclosure requests, here are a few illuminating factoids and updates from the analyst conference:

Cloudera Data Science Workbench: Announced March 14, this offering for data scientists brings Cloudera into the analytic tools market, expanding its addressable market but also setting up competition with the likes of IBM, Databricks, Domino Data, Alpine Data Labs, Dataiku and a bit of coopetition with partners like SAS. Based on last year’s Sense acquisition, Data Science Workbench will enable data scientists to use R, Python and Scala with open source frameworks and libraries while directly and securely accessing data on Hadoop clusters with Spark and Impala (see the short sketch after these updates for a flavor of that workflow). IT provides access to the data within the confines of Hadoop security, including Kerberos.

Apache Kudu: Made generally available in January, this Cloudera-developed columnar, relational data store provides real-time update capabilities not supported by the Hadoop Distributed File System. Kudu went through extensive beta use with customers, and Cloudera says it’s seeing a split of deployment in conjunction with Spark, for streaming data applications, and with Impala, for SQL-centric analysis and real-time dashboard monitoring scenarios.
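For a flavor of the workflow the Data Science Workbench targets, here is a minimal PySpark sketch. The HDFS path, dataset and column names are hypothetical, and the cluster and Kerberos configuration are assumed to come from the surrounding environment rather than being shown here.

```python
# Minimal PySpark sketch of exploring cluster data (illustrative only;
# the path and column names below are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a managed, Kerberos-secured environment the session typically inherits
# cluster and security settings from the environment; none are hard-coded here.
spark = SparkSession.builder.appName("customer-events-exploration").getOrCreate()

# Load a hypothetical Parquet dataset stored on the cluster.
events = spark.read.parquet("hdfs:///data/warehouse/customer_events")

# Simple aggregation: event counts and most recent activity per customer.
summary = (
    events.groupBy("customer_id")
          .agg(F.count("*").alias("event_count"),
               F.max("event_date").alias("last_event"))
)

summary.show(10)
spark.stop()
```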

MyTake On Cloudera Positioning and Moves

Yes, there’s much more to Cloudera’s platform than Hadoop, but given that the vast majority of customers store their data in what can only be described as Hadoop clusters, I expect the association to stick. Nonetheless, I don’t see any reason to demur about selling Hadoop. Cloudera isn’t saying a word about business results these days -- likely because of the rumored IPO. But consider the erstwhile competitors. In February Hortonworks, which has been public for two years, reported a 39% increase in fourth-quarter revenue and a 51% increase in full-year revenue (setting aside the topic of profitability). MapR, which is private, last year claimed (at a December analyst event) an even higher growth rate than Hortonworks.

Assuming Cloudera is seeing similar results, it’s experiencing far healthier growth than any of the traditional data-management vendors. Whether you call it Hadoop and Spark or use a markety euphemism like next-generation data platform, the upside customers want is open source innovation, distributed scalability and lower cost than traditional commercial software.

As for the complexity of deploying and running such a platform on premises, there’s no getting around the fact that it’s challenging – despite all the things that Cloudera does to knit together all those open-source components. I see the latest additions to the distribution, Kudu and the Data Science Workbench, as very positive developments that add yet more utility and value to the platform. But they also contribute to total system complexity and sprawl. We don’t seem to be seeing any components being deprecated to simplify the total platform.

Deploying Cloudera’s software in the cloud at least gives you agility and infrastructure flexibility. That’s the big reason why cloud deployment is the fastest-growing part of Cloudera’s business. If and when Cloudera starts offering its own cloud services, it would be able to offer hybrid deployment options that cloud-only providers, like Amazon (EMR) and Google (DataProc) can’t offer. And almost every software vendor embracing the cloud path also talks up cross-cloud support and avoidance of lock-in as differentiators compared to cloud-only options.

I have no doubt that Cloudera can live up to its name and succeed in the cloud. But as we’ve also seen many times, the shift to the cloud can be disruptive to a company’s on-premises offerings. I suspect that’s why we’re currently seeing introductions like the Data Science Workbench. It’s a safe bet. If and when Cloudera truly goes cloud, and if and when it becomes a public company, things will change and change quickly.

Related Reading:
Google Cloud Invests In Data Services, Scales Business
Spark Gets Faster for Streaming Analytics
MapR Ambition: Next-Generation Application Platform

 


One Word to Help Save Sears: Auctions



Sears continues to fall, and may soon be out of business.

If I were their CEO and someone asked me how to turn the store around with one last Hail Mary pass, my answer would be one word…”auctions”.

Yes, make every item in every Sears store available for auction.

Bear with me.

The challenge for retail chains like Sears is that you have the in-store experience, and you have the online experience, and the two rarely converge or complement each other.

When you’re shopping in-store and find an item, you may wonder: should I buy it, should I wait for it to go on sale, should I try to find it at another store, or should I shop online?

And when you’re shopping online, you simply search for the product and buy it at whatever store has the cheapest price (which is often Amazon).

So how do retailers overcome this?

By using auctions that:

  • encourage shoppers to buy from you (and not competitors)
  • encourage in-store visits
  • converge in-store and online shopping experiences
  • encourage loyalty using gamification

Here’s an example.

I walk into Sears looking for a coffee machine.  I see one that I like and it’s $250.

Just like an eBay auction, the $250 is the “Buy Now” price, what I have to pay to buy it right now.

If I pay the “Buy Now” price I’ll automatically get 10 points (I’ll get to the points in a minute).

My other option is to not buy now, but bid on the coffee machine.

So I open the Sears app on my phone, scan the coffee machine’s UPC code and enter an auction for the coffee machine.

The coffee machine is automatically added to the Sears website and made available for bidding by online users.

And just like eBay, there’s a minimum price Sears has set (that no one knows) so that it won’t lose money on the sale, and the auction is open for seven days.  And anyone online can bid on it.

Therefore, if I want to try to get the coffee machine at a lower price (say I bid a maximum of $200), I have to not only enter into an auction I may not win, but I have to wait seven days to purchase it if I do win.

To bid, or not to bid (and just buy)?

Here’s where the points system comes in.

You want to encourage certain behaviour:

  • Shopping in-store
  • Purchasing at “Buy Now” prices instead of always seeking lowest price through auctions
  • Repeat purchases

So a point system would be used as a sort of gamification to reward certain behavior.

For example, someone who has shopped a lot in-store and has purchased a lot at “Buy Now” prices would have a certain number of points, or a certain score.  So, if they did enter an auction, they would have more “buying power” than people with smaller scores.  Thus, a Sears shopper with a 90 score who bids $200 on my coffee machine would win the auction over a shopper with a score of 5, even if the shopper with the lower score bid $205 on the coffee machine.
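Here is a minimal Python sketch of that bid-ranking idea. The weighting formula (multiplying the bid by a loyalty factor) and the reserve-price handling are my own illustrative assumptions, not anything Sears has actually proposed.

```python
# Toy sketch of a loyalty-weighted auction; the weighting formula is hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    shopper: str
    amount: float         # dollar bid
    loyalty_score: float  # earned through in-store visits, "Buy Now" purchases, etc.

def effective_bid(bid: Bid) -> float:
    """Higher loyalty gives a bid more 'buying power' (illustrative formula)."""
    return bid.amount * (1 + bid.loyalty_score / 100)

def pick_winner(bids: list, reserve_price: float) -> Optional[Bid]:
    """Return the winning bid, or None if no bid meets the hidden reserve."""
    qualifying = [b for b in bids if b.amount >= reserve_price]
    if not qualifying:
        return None
    return max(qualifying, key=effective_bid)

bids = [
    Bid("loyal shopper", amount=200, loyalty_score=90),  # effective bid 380.00
    Bid("new shopper",   amount=205, loyalty_score=5),   # effective bid 215.25
]
winner = pick_winner(bids, reserve_price=180)
print(winner.shopper if winner else "reserve not met")   # -> loyal shopper
```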

And loyal shoppers with high scores will also be rewarded with offers that drive them back into the store: Sears Travel, Sears Photos, Sears Makeovers, etc.  (in other words, those things that you can’t necessarily buy online but have to visit the store for).

Retailers like Sears need to understand that with online shopping, people are going to compare prices whether they’re in the store or not.

In fact, technically your items are already in an auction – with the prices of your competitors.

So why not bring an auction-style system right to your customer, and reward them and their loyalty in the process?

Sound crazy?

You bet.

But so is the status quo, and that hasn’t been working too well for Sears.



LinkedIn Unveils Enterprise Edition of Sales Navigator, Extends Integration with CRM Systems


Constellation Insights

LinkedIn is betting large organizations will be willing to pay up to $1,600 per seat per year for a new Enterprise edition of Sales Navigator, which it says will generate higher productivity and results for social selling efforts. Here are the key details from LinkedIn's announcement:

Until now, if you were looking for a warm introduction to a lead, you could go through your personal LinkedIn connections, or use TeamLink, which pools the networks of all the Sales Navigator seat holders in your company. But we know your reps are probably not connected on LinkedIn to the vast majority of employees at your company, and not every employee in your company needs a seat of Sales Navigator (as much as we’d like that).

TeamLink Extend solves that by letting anyone in your organization opt-in their LinkedIn network to the TeamLink pool. That means, if you’re trying to reach a prospect, you can quickly see if anyone in your company has a connection with that person, and reach out to your colleague to ask for warm introduction.

LinkedIn is also integrating Enterprise Edition with its PointDrive tool, which lets salespeople deliver more content to prospects through a desktop or mobile app instead of an email larded with attachments, and gives reps visibility into how the materials are being consumed.

Perhaps the most telling piece of news for the longer term is LinkedIn Enterprise's enhanced CRM integration. Its CRM Sync function will log Sales Navigator activities into CRM systems with a single click. Notably, this capability will be available first for Salesforce rather than Microsoft's own Dynamics CRM, although support for other platforms is coming this year.

LinkedIn Enterprise also includes CRM Widgets, which enable users to view Sales Navigator profile details within CRM systems. There are widgets for Salesforce and Dynamics now, with ones for Oracle, NetSuite, SugarCRM, Hubspot, SAP Hybris and Zoho coming soon.

Analysis: No Walled Garden Here, But One Caution Remains

Salesforce CEO Marc Benioff, who was outbid for LinkedIn by Microsoft, complained last year to regulators, alleging that Redmond would close off third-party access to LinkedIn's vast and valuable store of business data in favor of Dynamics CRM. The new integration points for LinkedIn Enterprise Edition suggest that on the contrary, Microsoft sees plenty of money in integrating LinkedIn with competing CRMs. Constellation believes this is a good approach not only for Microsoft but for all customers, as the potential value of alignment of CRM with LinkedIn still has plenty of runway. 

But the new TeamLink feature shows Microsoft clearly wants to see how much value it can squeeze out of LinkedIn's data pool by leveraging its social graph. There are some challenges here, says Constellation Research VP and principal analyst Cindy Zhou.

One concern is how organizations will handle the opt-in to share contacts. The fewer employees who opt in, the less effective TeamLink becomes, she notes. There's also potential for spamming. "Organizations using TeamLink will need to be aware of the responsibility to properly train users so they don't abuse this additional access to connections," she says. "Ultimately, the connections didn't 'opt in' for their information to be used by a broader enterprise sales team."



Digital Business Distributed Business and Technology Models, Part Two: The Dynamic Infrastructure


The Digital Business model, with its dynamic, adaptive capability to react to events through intelligently orchestrated responses formed from Services, requires a very different enabling infrastructure from that of current Enterprise IT systems. As the Enterprise itself decentralizes into fast-moving, agile operating entities working under an OpEx management model (costs allocated to actual use), the supporting infrastructure must take on a similarly flexible functional structure.

The Technology that creates and supports Digital Business does not resemble that deployed in support of Enterprise Client-Server IT systems, nor is it a rehash of the standard Internet Web architecture. Instead, a combination of Cloud Technology, both at the center and increasingly at the edge, running Distributed Apps linked by massive-scale IoT interactions and, increasingly, various forms of intelligent AI-driven reaction, represents a wholly different proposition.

In existing Enterprise IT, the arrangement and integration of these technology complexities is defined by Enterprise Architecture; the term has deliberately not been used above, to highlight the difference. In contrast with the enclosed, well-defined Enterprise IT environment, where it is necessary to determine the relationships between a finite number of technology elements, a true Digital Enterprise operates dynamically across an effectively infinite number of technology elements, both internal and external.

Enterprise IT, for the most part, supports Client-Server applications, as evidenced in ERP, and is focused on ensuring that the outcome of every transaction maintains the common State of all data. To do this, the dependencies of all technology elements have to be identified in advance and integrated in fixed, tightly coupled relationships. It is important to remember that Enterprise Architecture was developed to deploy the Enterprise Business model defined by Business Process Re-engineering (BPR).

It is vital to recognize that the Enterprise Business model and the Technology model are, or should be, two sides of the same coin, working together coherently to enable the Enterprise to compete in its chosen market and manner. The introduction of a Digital Business model brings a completely different set of technology requirements and, importantly, reverses accepted IT Architecture by requiring Stateless, Loosely Coupled orchestrations to support Distributed Environments.

These simple statements cover some very complicated issues, and before going further three important terms should be identified and clarified within the context used here:

  1. Stateful means the computer or program keeps track of the state of an interaction, usually by setting values in a storage field designated for that purpose. Stateless means there is no record of previous interactions, and each interaction request has to be handled entirely on the basis of the information that comes with it (a minimal code sketch contrasting the two follows this list). Reference http://whatis.techtarget.com/definition/stateless
  2. Tightly coupled means hardware and software are not only linked together but are also dependent upon each other. In a tightly coupled system where multiple systems share a workload, the entire system usually needs to be powered down to fix a major hardware problem, not just the single system with the issue. Loosely coupled describes how multiple computer systems, even those using incompatible technologies, can be joined together for transactions regardless of hardware, software and other functional components. References http://www.webopedia.com/TERM/T/tight_coupling.html and http://www.webopedia.com/TERM/L/loose_coupling.html
  3. Digital Business is the creation of new business designs by blurring the digital and physical worlds. ... in an unprecedented convergence of people, business, and things that disrupts existing business models. Reference https://www.i-scoop.eu/digital-business
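
To make the first distinction concrete, here is a minimal, purely illustrative sketch in Python (not drawn from any vendor product or the sources above) of the same request handled statefully and statelessly. In the stateless case the request carries everything needed to process it, so any node in a distributed environment can serve it.

```python
# Minimal illustrative sketch (hypothetical, not from any vendor product):
# the same "add item" request handled statefully versus statelessly.

# Stateful: the server keeps a record of prior interactions per session.
session_store = {}

def stateful_handler(session_id, item):
    cart = session_store.setdefault(session_id, [])   # server-side state survives between calls
    cart.append(item)
    return {"items_in_cart": len(cart)}

# Stateless: every request carries everything needed to handle it;
# nothing is recorded between calls, so any node can serve any request.
def stateless_handler(request):
    cart = request["cart"] + [request["item"]]        # state travels with the request
    return {"cart": cart, "items_in_cart": len(cart)}

if __name__ == "__main__":
    print(stateful_handler("session-42", "sensor-A"))
    print(stateless_handler({"cart": [], "item": "sensor-A"}))
```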

Clearly there is a need for something to act as an equivalent of Enterprise Architecture, and indeed there is no shortage of activity to create ‘architectural’ models for IoT. The fundamental challenge lies in the sheer breadth of what constitutes a Digital Market connected through IoT across different industry sectors. Though it might seem that the approach for a Smart Home has little in common with self-driving cars, other than both being part of a Smart City, at the level of the supporting infrastructure there are minimal differences.

The result is an overwhelming abundance of standards bodies, technology protocols and architectural models that will, in the short term, confuse rather than assist. A read through the listings covering each of those areas here will prove the point. While there is no doubt the devil is in the detail and these things matter, IoT deployments should be driven from the Digital Business model outlined in the previous blog post.

A blog is not the format to examine this topic in detail; instead the aim is to provide an overall understanding of a workable approach, and to use the views and solution sets available from leading Technology vendors to provide greater detail. The manner of breaking the ‘architecture’ down into the four abstracted conceptual layers illustrated below matches almost exactly the Technology vendors’ own focus points.

Enterprise Architecture methodologies start with a conceptual stage, an approach designed to clarify the overall solution and outcome. This is necessary to avoid the distraction of specific product details, which often introduce unwelcome dependencies, at the first stage of shaping the solution/outcome vision.

The four layers illustrated correspond to the major conceptual abstractions present in building, deploying, and operating the necessary Technology model for a Digital Business. This blog focuses on the Dynamic Infrastructure and each of the following blogs in the series will focus on one of the abstracted layers.

The following concentrates on the role of the Dynamic Infrastructure, and in particular on Enterprise-owned and -operated infrastructure. The same basic functionality could be provided by a Cloud Services operator, but significant issues around latency and risk in certain areas, ‘real-time’ machinery operations for example, will lead to the selection of on-premises Dynamic Infrastructure capability. Most Enterprises are likely to deploy a mix of external and internal Dynamic Infrastructure, with the Distributed Services Technology Management layer providing the necessary cohesive integration, a point made in Part 3b of this series.

The Dynamic Infrastructure shares many of the core traits of Internet and Cloud Technology in providing capacity as and when required, in response to demand. The development of the detailed specification started in 2012 with Cisco’s publication of a white paper calling for a new model of distributed Cloud processing across a network. Entitled ‘Fog Computing’, this concept became increasingly important as the development of IoT redefined requirements.

In November 2015 a group of leading industry vendors (ARM, Cisco, Dell, Intel and Microsoft) founded the OpenFog Consortium. Today there are 56 members, including strong representation from the Telecoms industry. Cisco has developed its products and strategy in tune with the vision statement of the OpenFog Consortium, which states the requirement to be:

“Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things. By extending the cloud to be closer to the things that produce and act on IoT data, fog enables latency sensitive computing to be performed in proximity to the sensors, resulting in more efficient network bandwidth and more functional and efficient IoT solutions. Fog computing also offers greater business agility through deeper and faster insights, increased security and lower operating expenses”

It is worth pointing out there are subtle but important differences between Fog Computing and pure Edge-based Cloud Computing. Edge-based solutions more closely resemble a series of closed activity pools with relatively self-contained computational requirements, whereas Fog Computing processing is more interactive and distributed, using a greater degree of high-level service management from the Network. Naturally the two definitions overlap, and this, together with other terms, can be confusing. In practice, it is important to note that “Fog” certainly includes “Edge”, but the term Edge is often used to indicate more standalone functionality.

Three Technology vendors have focused their products and solution capabilities on providing such an infrastructure, with its mix of connectivity and processing triggered by a sophisticated management capability. Each vendor uses different terminology and has published its own definition of the challenges and requirements.

Constellation Research would like to thank Cisco, Dell and HPE for contributing the following overviews, which describe their points of view on building and operating a Dynamic Infrastructure. Each vendor also provided links to enable a more detailed evaluation of its approach and products.

 

Cisco’s Digital Network Architecture

At Cisco, we are changing how networks operate, moving to an extensible, software-driven model that makes networks simpler and deployments easy. Customer requirements for digital transformation go beyond technology such as IoT and require that the network handle change, security and performance in a policy-based manner designed around the application and the business need.

Cisco’s Digital Network Architecture (DNA) is the framework for that network change, moving from a highly resource-intensive and time-consuming way of deploying network services and segments to a model built to speed those processes and reduce cost. With DNA, we are focusing on automating, analyzing, securing and virtualizing network functions. Networks need to be more than just a utility; they need to drive the business and be secure in both the proactive and the reactive sense. To do this, Cisco is building on our industry-leading security products combined with our industry-leading access products (including SD-WAN, wireless and switching) to help customers change how they fundamentally work and embrace digital transformation.

Some examples of our continued innovation in this space include products like APIC-EM, the central engine of Cisco DNA. APIC-EM delivers software-defined networking capabilities with policy and a simple user interface. It offers Cisco Intelligent WAN, Plug and Play for deploying Cisco enterprise routers, switches and wireless controllers, Path Trace for easy troubleshooting, and Cisco Enterprise Service Automation.

Cisco is more than a networking vendor; we partner with our customers at all levels. We strive to understand not only what customers need at a technical and IT level, but what they need as a business. Cisco brings consistent, long-term investment to its products and services, constantly adding value and features. Nobody in the networking market invests in R&D and listens to customers like Cisco does. Cisco knows that the changing face of IT is about helping bridge the gap to cloud and making sure business needs are met with agile solutions that enhance the business. With Cisco DNA, CIOs, managers and administrators all get what they need to move forward with digital transformation and IoT.

The details of the Cisco range of products and solutions can be found in three places: One, Two, Three

 

Dell Technologies Internet of Things Infrastructure

With the industry’s broadest IoT infrastructure portfolio, together with a rapidly growing ecosystem of curated technology and services partners, Dell Technologies cuts through the complexity and enables you to access everything you need to deploy an optimized IoT solution from edge to core to cloud. Working with Dell’s infrastructure and curated partners also provides proven, use-case-specific solution blueprints to help you achieve faster ROI. Dell has strong credibility in Industrial IoT from its origins supplying computing to the industrial sector, as an early leader in sensor-driven automation, and through the EMC acquisition, which adds expertise in storage, virtualization, cloud-native technologies, and security and systems management. Further, Dell Technologies is leading multiple open-source initiatives to facilitate interoperability and scale in the market, since getting access to the myriad data generated by sensors, devices and equipment is currently slowing down IoT deployments.

The challenge with IoT is to securely and efficiently capture massive amounts of data for analytics and actionable insights to improve your business. Dell Technologies enables the flexibility to architect an IoT ecosystem appropriate for your specific business case with analytics, compute, and storage distributed where you need it from the network’s edge to the cloud.

Part of Dell’s net-new investment in IoT is a portfolio of purpose-built Edge Gateways with specific I/O, form-factor and environmental specifications to connect the unconnected, capturing data from a wide variety of sensors and equipment. The Dell Edge Gateway line offers processing capabilities to start the analytics process and cleanse the data, as well as comprehensive connectivity to ensure that critical data can be integrated into digital business systems where insights can be created and business value generated. These gateways also offer integrated tools for both Windows and Linux operating systems to ensure that the distributed architecture can be secured and managed. Reference here

Further, Dell EMC empowers organizations to transform business with IoT as part of their digitization initiatives. Dell EMC’s converged solutions, including Vblock Systems, VxRack Systems, VxRail Systems, PowerEdge and other Dell EMC products, are prevalent in core data centers for enterprise applications, big data and video management software (VMS) as well as for cloud-native applications. Dell simplifies how businesses can tap IoT as part of their digital assets: from the edge, with Dell’s Edge Gateways tied to sensors and operational technology, to the core data center and hybrid cloud, where Dell EMC plays a crucial role in blending historical and real-time analytics, processing and archival. The Dell EMC Native Hybrid Cloud Platform, a turnkey digital platform, accelerates time to value by simplifying the use of IoT as part of cloud-native app deployment. Included in this portfolio is the Analytic Insights Module, a fully engineered solution that brings self-service data analytics and cloud-native application development into a single hybrid cloud platform, eliminating the months it takes to build your own.

The details of the Dell range of products and solutions can be found here

 

HPE’s Hybrid IT

HPE believes there are a number of dimensions to dynamic infrastructure. It is estimated that 40-45% of IoT data processing will occur “at the edge”, close to where the sensors and actuators are, which is why HPE has created its “EdgeLine” range of edge compute devices. HPE calls this the first dimension of Hybrid IT: getting the right mix of edge and core compute.

While “real-time” processing of IoT data will occur both at the edge and at the core, the “deep analytics” a digital world requires, such as design simulations and deep learning, may need specialised computers because Moore’s law is running out of steam. HPE believes another dimension of Hybrid IT is the mix of conventional versus specialised compute. HPE’s specialised compute includes its SuperDome and SGI ranges.

Digitization is forcing a change in the architecture of applications. Gone are the three-tier, web-client-to-app-server-to-database applications. These are being replaced by application and service meshes: meshes of services that applications can call. This is why microservices and containers are becoming so popular (Docker has been downloaded over 4 billion times, for example). HPE built its Synergy servers with this new application architecture in mind:

  • CPU, storage and fabric can be treated as independently scalable resource pools. This scaling can be applied to both physical infrastructure (for containers running directly on top of the hardware) and virtual machines.
  • Infrastructure desired state can be specified in code. This allows the infrastructure on which an application runs to be put under source control alongside the application's source code (see the sketch after this list).
  • Because containers carry their required infrastructure specification with them, that specification can be given directly to the Synergy server for provisioning before the containers are layered on top.
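
As an illustration of the second bullet, here is a minimal, hypothetical sketch of "desired state as code". It is not HPE Synergy's actual API or template format, just the general pattern: a declarative spec kept in source control next to the application, plus a naive reconcile step that reports what must change.

```python
# Hypothetical sketch of "desired state as code" (not HPE Synergy's actual API
# or template format): a declarative spec that lives in source control, and a
# naive reconcile function that reports drift between desired and actual state.

desired_state = {
    "compute_nodes": 4,
    "cpu_per_node": 16,
    "storage_pool_tb": 20,
    "fabric_bandwidth_gbps": 25,
    "runtime": "containers-on-bare-metal",
}

def reconcile(desired, actual):
    """Return the settings that differ between desired and actual state."""
    return {key: {"desired": value, "actual": actual.get(key)}
            for key, value in desired.items() if actual.get(key) != value}

if __name__ == "__main__":
    actual_state = {"compute_nodes": 3, "cpu_per_node": 16,
                    "storage_pool_tb": 20, "fabric_bandwidth_gbps": 10,
                    "runtime": "containers-on-bare-metal"}
    for setting, delta in reconcile(desired_state, actual_state).items():
        print(f"drift in {setting}: {delta['actual']} -> {delta['desired']}")
```

Because the spec is plain text under version control, changes to infrastructure can be reviewed, diffed and rolled back exactly like changes to application code.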

Full details on HPE Infrastructure products can be found here.

 

Addendum

A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing. Reference https://en.wikipedia.org/wiki/Distributed_computing
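
As a deliberately tiny illustration of that last point, the sketch below divides a problem into tasks and has worker processes coordinate purely by message passing over queues; the queues stand in for the network links a real distributed system would use. It is illustrative only and not taken from any of the vendor material above.

```python
# Minimal sketch of the distributed-computing pattern described above: a problem
# split into tasks, each solved by a separate worker, coordinated purely by
# message passing (in-process queues standing in for network links).
from multiprocessing import Process, Queue

def worker(tasks: Queue, results: Queue):
    while True:
        chunk = tasks.get()
        if chunk is None:                              # sentinel: no more work
            break
        results.put(sum(x * x for x in chunk))         # solve one task independently

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    chunks = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]

    workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for chunk in chunks:
        tasks.put(chunk)                               # distribute tasks as messages
    for _ in workers:
        tasks.put(None)                                # tell every worker to stop

    total = sum(results.get() for _ in chunks)         # combine partial results
    for w in workers:
        w.join()
    print(total)
```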

A Smart System is a distributed, collaborative group of connected Devices and Services that reacts to continuously changing conditions by invoking individual Smart Services, or groups of them, to deliver optimized outcomes. The term originated in industrial automation, and the current Wikipedia definition therefore seems somewhat limited in scope compared with the wider IoT use of the term.


Exposure of Australian Officials' Private Phone Numbers Highlights Security's 'Human Error' Factor


Constellation Insights

Earlier this month, it emerged that a major Amazon Web Services outage was caused by an engineer making a typo while debugging a system. While not the same thing, the accidental exposure of hundreds of Australian politicians and staffers' private mobile phone numbers serves as another reminder that when it comes to security, human error can trump any number of technological measures. The Sydney Morning Herald has the details:

The Department of Parliamentary Services failed to properly delete the numbers before it published the most recent round of politicians' phone bills on the Parliament House website, potentially compromising the privacy and security of MPs from cabinet ministers down.

While in previous years the numbers were taken out of the PDF documents altogether, this time it appears the font was merely turned white - meaning they could still be accessed using copy and paste.

The only numbers absent were those of the very top cabinet ministers including Prime Minister Malcolm Turnbull, Treasurer Scott Morrison, Attorney-General George Brandis and a handful of others.
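
The mechanism described above, text "redacted" by turning its font white, is worth pausing on: colour is purely a rendering attribute, so any ordinary text-extraction step recovers the characters untouched. The following is a minimal, hypothetical sketch using the open-source pdfminer.six library (the file name is invented) showing how trivially such text can be pulled out.

```python
# Minimal, hypothetical sketch: white-on-white text is not redaction.
# Standard PDF text extraction ignores rendering colour entirely.
# Assumes the open-source pdfminer.six package; the file name is invented.
from pdfminer.high_level import extract_text

text = extract_text("parliamentary_phone_bill.pdf")  # returns every text run, visible or "hidden"
print(text)                                          # the white-font numbers appear as plain text
```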

The department has blamed a private contractor, TELCO Management, for the stuff-up. 

DPS officials have since deleted the private numbers after receiving word about them from the newspaper.

"I really wish we were all a bit more self-conscious about this style of error," says Constellation Research VP and principal analyst Steve Wilson. "We have a host of office tools which are incredibly rigid when you think about it. Our computers are wretchedly unforgiving. 

"In this latest case, someone has deleted some sensitive data in a file, or they thought they had deleted it, but no, the data was still there, hidden, and it cropped up again when the file was moved to a public location," Wilson adds. 

As it happens, the Australian government is becoming a bit notorious for this kind of thing. Other recent episodes include the release of the passport details of 20 or so visiting heads of state, Wilson notes, and, worse, the inadvertent publication of the names, addresses and other details of 10,000 asylum seekers, many of whom were in personal danger in their countries of origin. "Are we just too laid back down under?" he says.

The truth is that these are the "sorts of mistakes anyone without a master's degree in computing might make," Wilson adds. "Computers are like nitroglycerine. They're kind of safe if you're unnaturally careful in the way you handle them."

Moreover, when correcting a security breach it's crucial to consider other ways compromised data may still be exposed. The website Junkee found that even after the DPS deleted the phone bills, copies of them remained available in Google's cache and the numbers were actually openly visible. They've since been removed from Google's servers.

