
IBM Next Steps With Machine Learning: Mainframe and Power

IBM Machine Learning for z/OS could be a boon to big banks and insurance companies that want advanced analytics on the mainframe. Next up is the IBM Power platform.

Public cloud providers have popularized machine learning with low-cost, easily accessible services, but that’s a separate world from the tightly regulated, on-premises computing environments maintained by many big banks and insurance companies. Now IBM is bringing cutting-edge analytics to these mainframe customers with IBM Machine Learning (IBM ML) for z/OS.

Announced February 15 in New York, IBM ML is a private-cloud-only offshoot of IBM Watson Machine Learning, the public-cloud service on IBM Bluemix. More than 90 percent of data still resides in private data centers, according to IBM, and the company is in a unique position to bring the latest in analytics to these environments, starting with the IBM mainframe.

Thousands of companies still rely on IBM System z mainframes, including 44 of the top 50 global banks, 10 out of 10 of the largest insurers and 90 percent of the world’s airlines. These organizations have been among the most conservative about moving their core transactional applications to new platforms. That does not mean, however, that they are not interested in taking advantage of advanced analytics.

IBM Machine Learning for z/OS will bring transaction-time analytics to the mainframe
environments still heavily used by big banks and insurance companies.

Heretofore, the likes of big banks and insurance companies have used sampling methods or batch-oriented bulk data movement to support predictive analytics. Hadoop-based data lakes, for example, are often used for customer-360 and risk analyses, and machine learning is increasingly popular in that role. But these approaches introduce data-movement costs, labor-intensive manual steps and latency. The ideal in analytics, and the goal with IBM ML for z/OS, is to bring the analytics to the data rather than moving the data to a separate analytics environment. IBM ML for z/OS relies on an external x86 server and z Integrated Information Processors (zIIP coprocessors), so it doesn’t impact production performance or consume (expensive) mainframe processing cycles.

IBM ML for z/OS has been in beta since October, says IBM, and 20 organizations have been part of the beta program. Most of those organizations are banks and insurance companies, and many are seeking an alternative to rules-based and table-based systems that provide more primitive and brittle predictive capabilities. With machine learning applied directly to data in the mainframe environment, IBM ML promises more accurate, customer-specific prediction and, therefore, more extensive automation at the time of the transaction.

American Federal Credit Union, one IBM ML beta client, currently automates 25 percent of lending decisions while the remaining 75 percent go to underwriters. IBM says early testing for American Federal showed that IBM ML could automate 90 percent of the workload that would otherwise go to underwriters. Another beta customer, Argus Health, is using IBM ML for z/OS to apply and continuously update models and scores against payer, provider and pharma-benefits data in order to predict outcomes and improve the effectiveness of treatments. Banks and insurance companies have been the first in line for IBM ML for z/OS, but IBM expects airlines to use the system for applications including predictive maintenance.

IBM says it intends to support analytics with a choice of languages, frameworks and platforms. At launch, IBM ML for z/OS is based on Scala and uses the Spark ML library, but there are plans to support R, Python, TensorFlow and other languages and libraries. To make life easier for developers, IBM ML for z/OS includes an optimized data layer, built by Rocket Software, that connects to mainframe sources such as DB2, VSAM and IMS as well as to non-mainframe data sources. In a demo at the announcement event, an IBMer correlated data from the cloud-based Twitter Insights service on IBM Bluemix with transactional records on the mainframe to support customer churn analysis.
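Since the launch environment is Scala on Spark, a minimal Spark ML pipeline gives a feel for the programming model. The sketch below is generic Spark rather than IBM’s packaged tooling; the JDBC URL, table name and column names are hypothetical stand-ins for what the optimized data layer would actually surface.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ChurnModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ChurnModelSketch").getOrCreate()

    // Hypothetical source: stands in for the optimized data layer over DB2/VSAM/IMS.
    val customers = spark.read.format("jdbc")
      .option("url", "jdbc:db2://host.example.com:446/CUSTDB") // placeholder URL
      .option("dbtable", "CUST.PROFILE")
      .load()

    // Assemble illustrative feature columns into the vector Spark ML expects.
    val assembler = new VectorAssembler()
      .setInputCols(Array("BALANCE", "TX_COUNT_30D", "COMPLAINTS_90D")) // hypothetical columns
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("CHURNED") // hypothetical 0/1 churn label
      .setFeaturesCol("features")

    // Fit the two-stage pipeline and score the same table for demonstration.
    val model = new Pipeline().setStages(Array(assembler, lr)).fit(customers)
    model.transform(customers).select("CUSTOMER_ID", "prediction").show(5)

    spark.stop()
  }
}
```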

IBM ML includes the company’s Cognitive Assistant for Data Science (CADS), which automates the selection of best-fit algorithms for the modeling scenario at hand. IBM’s software also includes model-management and governance capabilities that are essential in regulated environments.
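IBM hasn’t published CADS internals, but the underlying idea can be sketched in plain Spark ML terms: fit several candidate algorithms under cross-validation and keep whichever scores best. The following is a minimal sketch, assuming a DataFrame with Spark’s standard label and features columns; it illustrates the concept, not the CADS API.

```scala
import org.apache.spark.ml.classification.{LogisticRegression, RandomForestClassifier}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, CrossValidatorModel, ParamGridBuilder}
import org.apache.spark.sql.DataFrame

// Cross-validate each candidate algorithm and return the best-scoring model.
def pickBestModel(train: DataFrame): CrossValidatorModel = {
  val evaluator = new BinaryClassificationEvaluator() // area under ROC by default
  val candidates = Seq(new LogisticRegression(), new RandomForestClassifier())

  val fitted = candidates.map { algo =>
    new CrossValidator()
      .setEstimator(algo)
      .setEvaluator(evaluator)
      .setEstimatorParamMaps(new ParamGridBuilder().build()) // defaults only; a real search would sweep a grid
      .setNumFolds(3)
      .fit(train)
  }
  fitted.maxBy(_.avgMetrics.max) // highest cross-validated AUC wins
}
```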

Beyond adding support for more languages and machine learning libraries, the next big step for IBM ML will be support for the IBM Power platform, which also runs workloads that tend to remain in private-cloud environments. Last November, IBM announced PowerAI Suite software for a high-performance-computing-specific IBM server that pairs Power8 chips with NVIDIA graphics processing units (GPUs). The combination supports machine learning libraries such as Caffe, Torch and Theano, and last month IBM added Google’s hot TensorFlow deep learning framework to the mix. IBM ML support would add CADS for automated algorithm selection as well as IBM’s model-management and governance capabilities.

The roadmap for IBM ML calls for more choices of languages, machine learning and
deep learning libraries and support for IBM’s Power Systems platform.

MyPOV on IBM ML

It only makes sense for IBM to bring its latest analytical capabilities to System z and Power customers. Whether those customers have already turned elsewhere for predictive capabilities, and whether IBM ML for z/OS (or, when available, IBM ML for Power) is the better alternative, are separate questions. Analytic latency and data movement at high scale are both undesirable. But to what extent have companies already offloaded historical data from the mainframe onto lower-cost platforms? That would have a big impact on the accuracy and appeal of IBM ML. And to what extent are prospects relying on hard-to-maintain rules- or table-based systems if they’re not using more advanced forms of prediction?

Applying prediction at the time of the transaction is clearly desirable, but companies including IBM have offered answers to this challenge before. To beat out the options already in place, IBM ML must offer lower latency, more accurate predictions, a higher level of automation, lower total cost of ownership or all of the above. IBM had a lot to say about lower latency and, through automated best-fit algorithm selection, better accuracy. We’re looking forward to conversations with early adopters to hear their take on the advantages of IBM ML over alternative routes to predictive insight.

Related Reading:
Spark Gets Faster For Streaming Analytics
Virginia Tech Fights Zika With High-Performance Prediction

NRF Big Show 2017 Spotlights Data-Driven Imperatives


Nokia Creates Global Network Grid for IoT

Constellation Insights

Nokia is betting it can be a player in IoT by offering enterprises a single place to acquire a global IoT networking footprint. Here are the key details from its announcement at Mobile World Congress:

Nokia WING will manage the IoT connectivity and services needs of a client's assets, such as connected cars or connected freight containers, as they move around the globe, reducing the complexity for enterprises that would otherwise be required to work with multiple technology providers.

Connectivity is enabled by intelligent switching between cellular and non-cellular networks. For example, a shipping container linked by satellite in the ocean could switch to being connected by a cellular network near a port.

Nokia will offer a full service model including provisioning, operations, security, billing and dedicated enterprise customer services from key operations command centers. The company will use its own IMPACT IoT platform for device management, subscription management and analytics. Nokia IMPACT subscription management for eSIM will automatically configure connectivity to a communication service provider's network as the asset crosses geographical borders.

Communication service providers can quickly take advantage of new business opportunities that will be made available by joining a global federation of IoT connectivity services. By leveraging their excess network capacity they will be able to serve enterprises that require near global IoT connectivity, rapidly and with little effort, to realize new revenue streams. 

Nokia also plans to offer WING as a white-label product telcos and ISPs can use to create their own branded services. 

WING arrives at an interesting time for the IoT market, says Constellation Research VP and principal analyst Andy Mulholland.

"There are starting to be questions as to why some analysts' predictions of millions of interconnected IoT devices within a couple of years hasn't happened," Mulholland says. "My simple reply is that the supporting telecommunications infrastructure offering the right services at the right price is still largely lacking. This announcement from Nokia puts another building block in place technically, but together with other technology elements such as LoRa it still has to be rolled out by telecoms." 

"We seem to have the chicken and egg problem as to which comes first," he adds. "Is the lack of suitable infrastructure holding back demand, or is the demand not there for these new services? Meanwhile, the Intranet of Things continues to be rolled out within Enterprises in support of operational improvement."

Have a few minutes to spare? Take Constellation's CIO Priorities Survey. Constellation will send you a summary of the results.

 

Google's Mega-Scale Database, Cloud Spanner, Is Now in Beta

Constellation Insights

Google has made a long-anticipated move with the beta launch of Cloud Spanner, its globally distributed relational database that has powered many of its mega-scale consumer services for years. Here are the key details from Google's announcement:

When building cloud applications, database administrators and developers have been forced to choose between traditional databases that guarantee transactional consistency and NoSQL databases that offer simple, horizontal scaling and data distribution. Cloud Spanner breaks that dichotomy, offering both of these critical capabilities in a single, fully managed service.

Cloud Spanner keeps application development simple by supporting standard tools and languages in a familiar relational database environment. It’s ideal for operational workloads, including inventory management, financial transactions and control systems, that are outgrowing traditional relational databases.
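As a concrete illustration of "standard tools and languages," a basic strongly consistent read through Google's Java client (called here from Scala) might look like the sketch below. The instance, database and table names are placeholders, and production code would add error handling.

```scala
import com.google.cloud.spanner.{DatabaseId, SpannerOptions, Statement}

object SpannerReadSketch {
  def main(args: Array[String]): Unit = {
    val options = SpannerOptions.newBuilder().build()
    val spanner = options.getService // uses application-default credentials
    try {
      // Placeholder instance and database identifiers.
      val db = DatabaseId.of(options.getProjectId, "demo-instance", "demo-db")
      val client = spanner.getDatabaseClient(db)

      // A single-use, strongly consistent read using ordinary SQL.
      val rows = client.singleUse().executeQuery(
        Statement.of("SELECT OrderId, Total FROM Orders ORDER BY OrderId LIMIT 10"))
      while (rows.next()) {
        println(s"${rows.getLong("OrderId")}: ${rows.getDouble("Total")}")
      }
    } finally {
      spanner.close() // release sessions and network channels
    }
  }
}
```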

With Cloud Spanner, your database scales up and down as needed, and you'll only pay for what you use. It features a simple pricing model that charges for compute node-hours, actual storage consumption (no pre-provisioning) and external network access. 

For regional deployments, Spanner costs $0.90 per node per hour, with $0.30 per GB of storage per month. There are also charges for network egress. Multi-region pricing will be released soon. 
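Those published rates make rough budgeting straightforward. Here is a back-of-envelope sketch using the regional rates only (egress excluded, 730 hours assumed per average month); the node count and storage figures are purely illustrative.

```scala
// Illustrative monthly estimate for a small regional Cloud Spanner deployment.
val nodes         = 3      // illustrative node count
val storageGb     = 500.0  // illustrative storage footprint
val hoursPerMonth = 730.0  // average hours in a month

val computeCost = nodes * hoursPerMonth * 0.90 // $0.90 per node-hour -> $1,971.00
val storageCost = storageGb * 0.30             // $0.30 per GB-month  -> $150.00

println(f"Estimated monthly cost: $$${computeCost + storageCost}%.2f") // $2,121.00
```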

One early customer kicking Spanner's tires is supply-chain software vendor JDA, which sees Spanner as ideal for handling massive amounts of IoT data while providing high availability. 

Though newly released, it seems safe to say Spanner has already been battle-tested at the highest levels. Internally at Google, it handles tens of millions of queries each second and powers the likes of AdWords.

Google has come up with simple and elastic pricing for Spanner and there are clear use cases for it, says Constellation Research VP and principal analyst Doug Henschen. "Spanner uniquely delivers global scalability with consistency for demanding financial services, advertising, retail and supply chain applications requiring synchronous replication," he says. "If there’s one weakness, it’s that Cloud Spanner does not support complicated or multiple simultaneous reads and writes within single transactions. Still, it uniquely offers the always-available traits of scalable NoSQL options such as Cassandra but with the strong consistency of traditional relational databases."

It's worth noting that Spanner is the inspiration for CockroachDB, an open-source database being developed by a number of former Google employees. CockroachDB is still in beta, and the startup has been working toward version one for a few years now. It's not clear how CockroachDB will fare against Cloud Spanner, given the engineering and marketing resources Google can bring to bear, but the presence of an open-source alternative is welcome and could make parent company Cockroach Labs a tantalizing acquisition target for Google's competitors in cloud infrastructure.

Have a few minutes to spare? Take Constellation's 2017 Digital Transformation Survey. Constellation will send you a summary of the results.


Spark Gets Faster for Streaming Analytics

Spark Summit East highlights progress on machine learning, deep learning and continuous applications combining batch and streaming workloads.

Despite challenges including a new location and a nasty Nor’easter that put a crimp on travel, Spark Summit East managed to draw more than 1,500 attendees to its February 7-9 run at the John B. Hynes Convention Center in Boston. It was the latest testament to growing adoption of Apache Spark, and the event underscored promising developments in areas including machine learning, deep learning and streaming applications.

The Summit had outgrown last year’s East Coast home at the New York Hilton, but the contrast between those cramped quarters and the cavernous Hynes made comparison difficult. As I wrote of last year’s event, the audience was technical, and if anything, this year’s agenda seemed more how-to than visionary. There were fewer keynotes from big enterprise adopters and more from vendors.


Matei Zaharia of Databricks recapped Spark’s progress over the past year, highlighting growing adoption
and performance improvements in areas including streaming data analysis.

The Summit saw plenty of mainstream talks on SQL and machine learning best practices as well as more niche topics, such as “Spark for Scalable Metagenomics Analysis” and “Analysis of Andromeda Galaxy Data Using Spark.” Standout big-picture keynotes included the following:

Matei Zaharia, the creator of Spark and chief technology officer at Databricks, gave an overview of recent progress and coming developments in the open source project. The centerpiece of Zaharia’s talk concerned maturing support for continuous applications requiring simultaneous analysis of both historical and streaming, real-time information. One of the many use cases is fraud analysis, where you need to continuously compare the latest, streaming information with historical patterns in order to detect abnormal activity and reject possibly fraudulent transactions in real time.

Spark had already addressed fast batch analytics, but support for streaming was limited to micro-batch processing (meaning up to seconds of latency) until last year’s Spark 2.0 release. Zaharia said even more progress was made with December’s Spark 2.1 release, with advances on Structured Streaming, a new, high-level API that addresses both batch and stream querying. Viacom, an early beta customer, is using Structured Streaming to analyze viewership of cable channels including MTV and Comedy Central in real time, while iPass is using it to continuously monitor WiFi network performance and security.
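To make the one-API-for-batch-and-streaming point concrete, here is a rough Scala sketch in the spirit of the fraud example above: a static DataFrame of historical customer profiles joined against a live stream, with outliers flagged continuously. The paths, the socket source (standing in for a real message bus such as Kafka) and the avgAmount column are illustrative assumptions, not code from any of the keynotes.

```scala
import org.apache.spark.sql.SparkSession

object ContinuousAppSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ContinuousAppSketch").getOrCreate()
    import spark.implicits._
    import org.apache.spark.sql.functions._

    // Batch side: historical per-customer profiles (placeholder path;
    // assumed columns: customerId, avgAmount).
    val profiles = spark.read.parquet("/data/customer_profiles")

    // Streaming side: live "customerId,amount" lines from a socket source.
    val txns = spark.readStream
      .format("socket").option("host", "localhost").option("port", 9999).load()
      .select(split($"value", ",").as("f"))
      .select($"f".getItem(0).as("customerId"), $"f".getItem(1).cast("double").as("amount"))

    // The same DataFrame operations work on static and streaming data alike:
    // join the stream to history and flag transactions far above a customer's average.
    val flagged = txns.join(profiles, "customerId")
      .where($"amount" > $"avgAmount" * 10)

    flagged.writeStream.format("console").start().awaitTermination()
  }
}
```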

Alexis Roos, a senior engineering manager at Salesforce, detailed the role of Spark in powering the machine learning, natural language processing and deep learning behind emerging Salesforce Einstein capabilities. Addressing the future of artificial intelligence on Spark, Ziya Ma, a VP of Big Data Technologies at Intel, offered a keynote on “Accelerating Machine Learning and Deep Learning at Scale with Apache Spark.” James Kobielus of IBM does a good job of recapping Deep Learning progress on Spark in this blog.

Ion Stoica, executive chairman of Databricks, picked up where Zaharia left off on streaming, detailing the efforts of UC Berkeley’s RISELab, the successor of AMPLab, to advance real-time analytics. Stoica shared benchmark performance data showing advances promised by Apache Drizzle, a new streaming execution engine for Spark, in comparison with Spark without Drizzle and streaming-oriented rival Apache Flink.

Stoica stressed the time- and cost-saving advantages of using a single API, the same execution engine and the same query optimizations to address both streaming and batch workloads. In a conversation after his keynote, Stoica told me Drizzle will likely debut in Databricks’ cloud-based Spark environment within a matter of weeks and he predicted that it will show up in Apache Spark software as soon as the third quarter of this year.

The Apache Drizzle execution engine being developed by RISELab promises better
streaming query performance compared with today’s Spark or Apache Flink.

MyPOV of Spark Progress

Databricks is still measuring Spark success in terms of the number of contributors and the number of Spark Meetup participants (the latter count is 300,000-plus, according to Zaharia), but to my mind, it’s time to start measuring success by mainstream enterprise adoption. That’s why I was a bit disappointed that the Summit’s list of presenters in the Capital One, Comcast, Verizon and Walmart Labs mold was far shorter than the list of vendors and Internet giants like Facebook and Netflix presenting.

Databricks says it now has somewhere north of 500 organizations using its hosted Spark Service, but I suspect the bulk of mainstream Spark adoption is now being driven by the likes of Amazon (first and foremost) as well as IBM, Google, Microsoft and others now offering cloud-based Spark services. A key appeal of these sources of Spark is the availability of infrastructure and developer services as well as broader analytical capabilities beyond Spark. Meanwhile, as recently as last summer I heard Cloudera executives assert that the company’s software distribution was behind more Spark adoption than that of any other vendor.

In a thought-provoking keynote on “Virtualizing Analytics,” Arsalan Tavakoli, Databricks’ VP of customer engagement, dismissed Hadoop-based data lakes as a “second-generation” solution challenged by disparate, complex tools and access limited to big data developer types. But Tavakoli also acknowledged that Spark is only “part of the answer” to delivering a “new paradigm” that decouples compute and storage, provides uniform data management and security, unifies analytics and supports broad collaboration among many users.

Indeed, it was telling when Zaharia noted that 95% of Spark users employ SQL in addition to whatever else they’re doing with the project. That tells me that Spark SQL is important, but it also tells me that as appealing as Spark’s broad analytical capabilities and in-memory performance may be, it’s still just part of the total analytics picture. Developers, data scientists and data engineers who use Spark are also using non-Spark options ranging from the prosaic, like databases, database services and Hive, to the cutting edge, such as emerging GPU- and high-performance-computing-based options.

As influential, widely adopted, widely supported and widely available as Spark may now be, organizations have a wide range of cost, latency, ease-of-development, ease-of-use and technology maturity considerations that don’t always point to Spark. At least one presentation at Spark Summit cautioned attendees not to think of Spark Streaming, for example, as a panacea for next-generation continuous applications.

Spark is today where Hadoop was in 2010, as measured by age, but I would argue that it’s progressing more quickly and promises wider hands-on use by developers and data scientists than that earlier disruptive platform.

Related Reading:
Spark Summit East Report: Enterprise Appeal Grows
Spark On Fire: Why All The Hype?

Have a few minutes to spare? Take Constellation's 2017 Digital Transformation Survey. Constellation will send you a summary of the results.

 

Getting ready for Slack news

Video: Getting ready for Slack news (https://player.vimeo.com/video/201880237)

CEN Member Chat: The Intersection of Digital Marketing & Sales Effectiveness

Cindy Zhou, Constellation Research VP & Principal Analyst, covers digital marketing and sales effectiveness and the key market trends. She discusses where the lines between sales and marketing are blurring and the increasing power of the customer. 

If you are not a Constellation Executive Network member yet, join our analysts in this private community to talk shop and solve business problems in real time. 

Video: CEN Member Chat: The Intersection of Digital Marketing & Sales Effectiveness (https://player.vimeo.com/video/201753472)

NRF Retail's Big Show 2017 Event Report - How Not to Get "Amazon'd"

It’s been four years since I attended the National Retail Federation’s (NRF) annual conference, Retail’s Big Show. With over 35,000 attendees this year, the sector is going strong, though the overall mood seemed more serious than the last time I attended. With a disappointing holiday season for Macy’s, Sears, and The Limited (all announcing store closings and shifting investments online) and the rising success of Amazon, what is the retailer of the future supposed to do?
 
Throughout the show, the infusion of tech into retail dominated the sessions, exhibitors, and displays. Artificial Intelligence (AI), Data-Driven Insights, Robotics, and Cross-Channel Commerce were all hot topics at the event. A few standout examples: 1-800-FLOWERS presented at SAS’s booth on how it uses SAS advanced analytics to cross-market its portfolio of diverse brands, offering customers a unified gifting experience with an integrated loyalty program. Wipro’s booth showcased a retail store experience demo that uses beacons and sensors to provide detailed product information and comparisons when items are picked up. I also met with NTT Data to discuss its Customer Friction Factor (CFF) formula, which helps retailers understand the level of friction in the customer journey and creates a score with which to benchmark and measure improvement.
 
*Image: Wipro Store Experience Demo
 
One consistent point of discussion, and on the minds of retailers this year, was how to avoid getting “Amazon’d.” Yes, “Amazon’d" is now a term in our lexicon representing the commerce disruption exemplified by the tech giant’s dominance online. According to Slice Intelligence, Amazon pulled in 37% of US online sales for the 2016 holiday season, and it was a consistent topic in the sessions I attended and in conversations at the event. Big data pricing company 360pi hosted a session focused on its popular holiday report and shared thoughts on Amazon’s pricing and product assortment strategies.
 
A few lessons from Amazon’s success this holiday season:
 
  • Fast-follow on pricing - According to 360pi, Amazon closely monitored the online pricing of competitors such as Walmart and Target, applying pricing changes quickly and multiple times a day.
     
  • Offer Broad Assortments - Amazon offered a wide assortment of items from its marketplace sellers and leveraged algorithms to surface related products. 360pi reported Amazon proper increased its product offering by 30% while its marketplace sellers increased product assortment by 17.5%.
     
  • Utilize data for an edge - Amazon knows the customer, integrating browsing behavior and purchase history to personalize the shopping experience and react quickly to pricing changes.
     
  • Reduce Friction in the Checkout Process - Amazon simplified the ordering process with its Dash buttons, quick checkout options, and Alexa-exclusive deals (sometimes too easy, given the multiple voice-ordering issues).
     
  • Provide a consistent cross-channel experience - My mobile marketing research report showed that 60% of US mobile users own two or more mobile devices. Customers move across devices to research and shop for products and expect retailers to provide a consistent experience across channels. Amazon delivers a great cross-channel experience whether the customer shops on a laptop, mobile device, tablet, or an Amazon Echo.

I enjoyed my time at this year’s event and believe that the retailers that will win in this age of AI, data, and mobile will be the ones that successfully integrate online and offline customer behavior, stay focused on the customer experience, and reduce the friction that still exists in commerce.
 
*Cover image credit: National Retail Federation
 

She Started It! Documentary of Women Tech Entrepreneurs

Last Tuesday, I had the pleasure of offering the welcome at the Santa Clara University screening of the award-winning documentary, She Started It. The film follows five women over two years as they launch, build, shut down, sell, and start again. We see them as they pitch VCs on the phone, on stage (e.g., 500 Startups), and in offices. It's a global perspective, taking us from San Francisco to Mississippi, France, and Vietnam. Cameos from luminaries like White House CTO Megan Smith; GoldieBlox CEO Debbie Sterling; and Ruchi Sanghvi, the first female engineer at Facebook, provide a broad context for the events.

More than a Film

From the Grace Hopper Conference to Santa Clara University, the film is often the start of a great conversation with leading women founders, investors, and mentors. At our Santa Clara Screening we were honored to have:

Some of My Key Moments from the Panel

A comment from a parent saying she will never tell her kids to "be realistic" again. This was in response to the clear tension parents can feel between keeping their children safe and letting them challenge the status quo as they strive to build a business.

Hearing about the responsibility we all have to share the culture and processes that make Silicon Valley support innovation and venture creation; the "share it forward" approach. I like to say that "SCU brings Silicon Valley to the world," as we host students from around the world for their Silicon Valley immersions and degrees. The idea that we all have a responsibility is one that I look forward to supporting.

The importance of asking for help. Again, this is one that I think was an eye-opener for many in the audience. I tend to see this in a slightly different form when running negotiation workshops -- we have to share that it is critical to ask for what you want, and to negotiate for what you want. I'm not sure we're seeing the shift we need in women negotiating their job offers (versus men doing it as a matter of course).

The energy from one of our CAPE (California Program for Entrepreneurship) participants when she learned of the possibility of deferred legal payments. Debra Vernon later shared:

One key takeaway is that founders should know about [the fee deferral] option and ask for it and see what the lawyer can do to help. In my practice, I have also developed some fixed priced packages to accelerate formation and fundraising that help startups stay on budget.

The diversity of the audience. Entrepreneurs, and those with an entrepreneurial mindset, from ages 7 to 70; men; women; investors; educators; and other mentors.

The beautiful flow of the conversation and the willingness to share ups as well as downs.

Acknowledgements

My biggest thanks go to Leavey School of Business Dean’s Executive Professor Tanya Monsef Bunger for bringing this film to campus. Tanya is the program director of Santa Clara’s Global Fellows program and outgoing chair of the Global Women’s Leadership Network, as well as an active leader in many other national and international organizations supporting women and entrepreneurship. Our thanks also to Santa Clara University's Leavey School of Business and the School of Engineering's Frugal Innovation Hub for sponsoring this event.

Sites and Organizations Shared During the Panel


Blockchain plain and simple

Blockchain is an algorithm and distributed data structure designed to manage electronic cash without any central administrator. The original blockchain was invented in 2008 by the pseudonymous Satoshi Nakamoto to support Bitcoin, the first large-scale peer-to-peer crypto-currency, completely free of government and institutions. 

Blockchain is a Distributed Ledger Technology (DLT).  Most DLTs have emerged in Bitcoin's wake. Some seek to improve blockchain's efficiency, speed or throughput; others address different use cases, such as more complex financial services, identity management, and "Smart Contracts".  

The central problem in electronic cash is the Double Spend. If electronic money is just data, nothing physically stops a currency holder from trying to spend it twice. It was long thought that a digital reserve was needed to oversee and catch double spends, but Nakamoto rejected all financial regulation and designed an electronic cash without any umpire.

The Bitcoin (BTC) blockchain crowd-sources the oversight. Each and every attempted spend is broadcast to a community, which in effect votes on the order in which transactions occur. Once a majority agrees all transactions seen in the recent past are unique, they are cryptographically sealed into a block. A chain thereby grows, each new block linked to the previously accepted history, preserving every spend ever made. 
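The chaining mechanism itself is easy to illustrate. The toy Scala sketch below looks nothing like Bitcoin's real block format, but it shows why the recorded history is tamper-evident: each block commits to its predecessor's hash, so editing any past block breaks every link that follows.

```scala
import java.security.MessageDigest

// Toy block: commits to the previous block's hash plus its own transactions.
case class Block(prevHash: String, transactions: Seq[String]) {
  def hash: String = {
    val md = MessageDigest.getInstance("SHA-256")
    md.digest((prevHash + transactions.mkString("|")).getBytes("UTF-8"))
      .map("%02x".format(_)).mkString
  }
}

val genesis = Block("0" * 64, Seq("coinbase -> alice 50"))
val block1  = Block(genesis.hash, Seq("alice -> bob 10"))
val block2  = Block(block1.hash,  Seq("bob -> carol 5"))

// An intact chain: every stored link matches the predecessor's actual hash.
val chain = Seq(genesis, block1, block2)
assert(chain.sliding(2).forall { case Seq(a, b) => b.prevHash == a.hash })

// Rewriting history changes the genesis hash, so block1's stored link no longer matches.
val tampered = genesis.copy(transactions = Seq("coinbase -> mallory 50"))
assert(tampered.hash != block1.prevHash)
```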

A Bitcoin balance is managed with an electronic wallet which protects the account holder's private key. Blockchain uses conventional public key cryptography to digitally sign each transaction with the sender's private key and direct it to a recipient's public key. The only way to move Bitcoin is via the private key: lose or destroy your wallet, and your balance will remain frozen in the ledger, never to be spent again. 
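That signing flow can be sketched with the stock JDK crypto APIs. Bitcoin specifically uses ECDSA over the secp256k1 curve with its own transaction serialization; the generic EC key pair below is merely a stand-in to show the private-key-signs, public-key-verifies mechanics.

```scala
import java.security.{KeyPairGenerator, Signature}

// A wallet's key pair: the private key signs, the public key lets anyone verify.
val keys = KeyPairGenerator.getInstance("EC").generateKeyPair()

// Stand-in for a serialized transaction directed at a recipient's public key.
val tx = "alice pays bob 1 BTC".getBytes("UTF-8")

// Sign with the sender's private key -- only the wallet holder can do this.
val signer = Signature.getInstance("SHA256withECDSA")
signer.initSign(keys.getPrivate)
signer.update(tx)
val sig = signer.sign()

// Any node can verify the signature using the matching public key.
val verifier = Signature.getInstance("SHA256withECDSA")
verifier.initVerify(keys.getPublic)
verifier.update(tx)
assert(verifier.verify(sig))
```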

The blockchain's network of thousands of nodes is needed to reach consensus on the order of ledger entries, free of bias, and resistant to attack. The order of entries is the only thing agreed upon by the blockchain protocol, for that is enough to rule out double spends.  

The integrity of the blockchain requires a great many participants (and, consequently, the notorious power consumption). One of the cleverest parts of the BTC blockchain is its incentive for participating in the expensive consensus-building process: every time a new block is accepted, the system randomly rewards one participant with a bounty (currently 12.5 BTC). This is how new Bitcoins are minted, or "mined".

Blockchain has security qualities geared towards incorruptible cryptocurrency. The ledger is immutable so long as a majority of nodes remain independent, for a fraudster would require infeasible computing power to forge a block and recalculate the chain to be consistent.  With so many nodes calculating each new block, redundant copies of the settled chain are always globally available. 

Contrary to popular belief, blockchain is not a general purpose database or "trust machine".  It only reaches consensus about one specific technicality – the order of entries in the ledger – and it requires a massive distributed network to do so only because its designer-operators choose to reject central administration. For regular business systems, blockchain's consensus is of questionable benefit.


2017 Is The Year Integration Enables Industry 4.0 Growth

  • 35% of companies adopting Industry 4.0 predict revenue gains over 20% in the next five years.
  • Data analytics and digital trust are the foundations of Industry 4.0.
  • Cost-sensitive industries including semiconductors, electronics, and oil and gas are the most focused on adopting Industry 4.0, with 80% of companies in these industries saying it is one of their top priorities.

The recent article by Boston Consulting Group (BCG), Sprinting To Value In Industry 4.0, provides insights into how real-time integration between enterprise systems is an essential catalyst for Industry 4.0 growth. Industry 4.0 focuses on the end-to-end digitization of all physical assets and integration into digital ecosystems with value chain partners encompassing a broad spectrum of technologies. BCG surveyed 380 US-based manufacturing executives and managers at companies representing a wide range of sizes in various industries to complete the study.

Industry 4.0 Is At An Inflection Point Today

Having attained initial results from Industry 4.0 initiatives, many manufacturers are moving forward with the advanced analytics and Big Data-related projects that are based on real-time integration between CRM, ERP, third-party and legacy systems. A recent PricewaterhouseCoopers (PwC) study of Industry 4.0 adoption, Industry 4.0: Building The Digital Enterprise (PDF, no opt-in, 36 pp.), found that 72% of manufacturing enterprises predict their use of data analytics will substantially improve customer relationships and customer intelligence along the product life cycle. Real-time integration enables manufacturers to more effectively serve their customers, communicate with suppliers, and manage distribution channels. Of the many innovative start-ups taking on the complex challenges of integrating cloud and on-premises systems to streamline revenue-generating business processes, enosiX shows potential to bridge legacy ERP and cloud-based CRM systems quickly and deliver results.

There are many more potential benefits to adopting Industry 4.0 for those enterprises that choose to create and continually strengthen real-time integration links across their global operations. Recent research completed by Boston Consulting Group and PwC highlights several of them below:

  • Manufacturers expect to gain the greatest value from Industry 4.0 by reducing manufacturing costs (47%), improving product quality (43%) and attaining operations agility (42%). 89% of all manufacturers see an opportunity to use Industry 4.0 to improve manufacturing productivity. Reducing supply chain costs (37%), enabling product innovation (33%) and attaining faster time-to-market (31%) are the next level of benefits manufacturers expect to attain. The following graphic provides an analysis of where manufacturers see Industry 4.0 having the greatest impact on their organizations.

[Image: Where manufacturers see Industry 4.0 having the greatest impact on their organizations]

  • Manufacturers are gaining the greatest value from Industry 4.0 by launching pilot projects that create flexible, agile real-time platforms supporting new business models with real-time integration. Industry 4.0’s focus on enabling end-to-end digitization of all physical assets and integration into digital ecosystems relies on real-time integration to succeed. For manufacturers in cost-sensitive industries, the urgency of translating the vision of digital transformation into results is key to their future growth. The more competitively intense an industry, the more essential real-time integration becomes.


  • Investing in greater digitization and support for enterprise-wide integration is predicted to increase 118% by 2020 in support of Industry 4.0. 33% of manufacturers surveyed report they have a high level of digitization today, projected to increase to 72% by 2020. The leading areas of these investments include vertical value chain integration (72%), product development and engineering (71%), and customer access including sales channels and marketing (68%).
  • New product development and optimizing existing products and services are the greatest areas of growth potential for analytics and Big Data using Industry 4.0 technologies and integration strategies through 2020. Industry 4.0 is revolutionizing the use of analytics and manufacturing intelligence, setting the foundation for greater optimization of overall business and control, better manufacturing, and operations planning, greater optimization of logistics and more efficient maintenance of production assets and machinery. By better orchestrating these strategic areas, manufacturers are going to be able to attain levels of accuracy and responsiveness to customers not achievable before.
  • Globally, manufacturing enterprises expect to gain an additional 2.9% in digital revenues per year through 2020, with digitizing their existing product portfolios (47%) leading all other strategies, further underscoring the need for real-time integration. Introducing an entirely new digital product portfolio is the second most common strategy (44%) followed by creating and offering new digital services to external customers (42%). Just over a third (38%) plan to create and sell big data analytics services to external customers.