
Launch Report - When BW 7.4 meets HANA it is like 2 + 2 = 5 - but is 5 enough?


I was invited to attend the BI / HANA 2014 event organized by WisPubs in Orlando this week. I blogged my keynote takeaways from yesterday here; today the conference continued with a separate BW 7.4 launch event.

 


As usual it’s good to remember how we got here…




A brief history of BW

Reporting has always been a challenge for enterprise application vendors. And when SAP was busy building R/3 in the early 1990s, speed to build out a functionally complete ERP package was of the essence. Reporting was implemented in a similar way as in R/2 – which meant the company missed the data warehouse trend. No one was unhappier about it than SAP co-founder Hasso Plattner, who, frustrated by a combination of a lack of understanding and a lack of progress, hired an outsider (then a disruptive talent decision for SAP) in Klaus Kreplin. And Kreplin and his team delivered a solid data warehouse, originally called BiW (a name later dropped to avoid confusion with another, minor German software company), in a very short time. Not surprisingly, SAP came to realize that, being the largest business application vendor, it was not enough to just deliver a data warehouse: customers expected extractors and content in the data warehouse. So SAP created the BW content releases. Then followed a long phase of different front-end tools, the Business Objects acquisition happened, and customers took up BW – to the tune of around 14,000 today.




Enter HANA

Meanwhile, from the roots of the BW text search engine TREX, the PTime acquisition and the Sybase acquisition, HANA was created – and with that some confusion started. Customers were using BW – but hearing from SAP that the traditional separation of OLTP and OLAP was history. Some predicted the end of BW. Of course that did not happen; luckily, that many customers don't go away overnight, nor does a resilient and large ecosystem of partners.

And as we all know by now – HANA is the platform on which SAP has embarked on a massive journey of reinventing itself. Consequently, BW also has to run on HANA, and that was achieved with BW 7.3 – which in hindsight was more of a 'proof it works' release. For the first time SAP let its ecosystem try and play with a key revenue product, with the BW 7.3 on HANA trial, with very good success. But after technology adoption, the interesting thing is what happens next, and that is what we can start evaluating with BW 7.4 on HANA.




Why it’s 2 + 2 = 5!

Let’s look at the greatest drivers of enterprise synergy from combining the two products:




  • Speed (from HANA) – No question HANA contributes speed to traditional BW implementations. Traditional BW – like all data warehouses – needed some attentive hand-holding to remain a responsive system that users could actually use. Not impossible, but the watch had to be 24x7. Getting speed without having to build a classic star schema design is another advantage.
     
  • Simplification (from HANA) – The simplification of being able to run OLAP and OLTP on the same system, in columnar format, has been described at length before. But a key data warehouse process – the one operated in the ETL layer – has also pretty much gone. All data is there – normalized, with no need for transport and massaging.
     
  • Content (from BW) – Remember the BW history: the first versions were technically great but lacked content. BW has more enterprise content than most enterprises can and want to digest – so HANA instantly gains significant content.
     
  • Governance (from BW) – Another lesson for all data warehouses – skipped for brevity above – is that you can’t surface the insight to just anyone in the enterprise. SAP spent a lot of time in the early 2000s learning that and building appropriately for it – and now it’s practically a gift to HANA.


So where does the synergy – the 5 – come from? Well, it’s the combination of the above that enables faster insights, built on a modern application architecture (yes, you can use it on an iPad), and that allows enterprise decision makers to get to data faster and, hopefully, find the insights to make the right decisions.




But is 5 enough?

Getting a 5 from a 2 plus 2 equation – 25% headroom – is a good result. Especially with BW 7.4 being the first release beyond 7.3, which as mentioned before was focused mainly on ‘getting there’. But 5 can only be enough if the insights are packaged in the right way, so that a business user can digest them. And SAP has made a good acquisition with KXEN, but the road to packaged analytical applications, consumable by business end users, remains a long one. To be fair, SAP has only started. And 5 can only be enough if sufficient relevant information is available to make the right decision – needless to say, in 2014 that raises the question of how SAP will address the NoSQL / Hadoop challenge. [Added] And the indication on the latter is that, with BW 7.4, SAP is moving into a co-existence scenario between data in HANA and data in Hadoop, allowing the combination via the smart data access functionality.
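To make that co-existence idea concrete, here is a hypothetical sketch of what a combined query could look like from the HANA side, using SAP's Python driver (hdbcli). The host names, credentials, adapter name, schemas and table names are all illustrative, and the exact DDL for remote sources and virtual tables varies by HANA revision and Hadoop distribution – read it as the shape of the approach, not SAP's documented procedure.

```python
# Hypothetical sketch: federating a Hadoop table into HANA via smart data
# access. All names, the adapter string and the configuration options are
# illustrative placeholders; consult the HANA documentation for the exact DDL.
from hdbcli import dbapi  # SAP's Python client for HANA

conn = dbapi.connect(address="hana.example.com", port=30015,
                     user="BW_ADMIN", password="secret")
cur = conn.cursor()

# 1. Register the Hadoop system as a remote source (adapter name illustrative).
cur.execute("""
    CREATE REMOTE SOURCE HADOOP_SRC ADAPTER "sparksql"
    CONFIGURATION 'server=hadoop.example.com;port=10000'
    WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=secret'
""")

# 2. Expose a Hadoop table as a virtual table inside the HANA catalog.
cur.execute("""
    CREATE VIRTUAL TABLE "BW"."VT_WEBLOGS"
    AT "HADOOP_SRC"."hive"."default"."weblogs"
""")

# 3. Join hot data held in HANA with cold data left in Hadoop in one query.
cur.execute("""
    SELECT s.customer_id, SUM(s.revenue) AS revenue, COUNT(w.click_id) AS clicks
    FROM "BW"."SALES" AS s
    LEFT JOIN "BW"."VT_WEBLOGS" AS w ON w.customer_id = s.customer_id
    GROUP BY s.customer_id
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```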



 

MyPOV

SAP has delivered a promising ‘first’ functional release with BW 7.4 on HANA. It is good progress – but more needs to happen in the next releases to be able to feed (true) analytical insights to business end users – cutting out data scientist engagement on a project level. Of course there is plenty of ‘bread and butter’ BI to be had that BW addresses well, but the quest needs to be for the ‘holy grail’ – end-user-consumable analytical applications that are good enough to foster insights without much or any IT and data scientist involvement.

To be fair – no one has gotten there (yet). It’s probably going to be a score of 8 or 9 that is required. So for now, 5 is a good start.

 


News Analysis – Google gets serious about the cloud


Google had its widely anticipated cloud event in San Francisco, and it for sure did not disappoint. Developer focus, tools and significant price reductions were expected. Probably the biggest surprise was the temporary downtime of the live stream. In a twist of irony, Google was demonstrating the live migration of a ‘hot’ VM while streaming HD video shortly before that event. Streaming recovered well for the rest of the event – if anything it shows that even Google’s cloud is earthly and that there was massive interest in watching the event.

So let’s dissect Urs Hölzle’s blog post – which pretty much serves as a Google press release:

[…]Industry-leading, simplified pricing
The original promise of cloud computing was simple: virtualize hardware, pay only for what you use, with no upfront capital expenditures and lower prices than on-premise solutions. But pricing hasn’t followed Moore's Law: over the past five years, hardware costs improved by 20-30% annually but public cloud prices fell at just 8% per year.

MyPOV – This is a new mantra for cloud pricing. While until now it was only about paying for what you use – down to the minute or even second – Google is looking at the underlying mechanism that makes all computing more affordable: Moore’s Law. And kudos to Google for calling out the profit accumulation most providers have been entertaining to a certain point, as cloud price reductions have not been keeping pace with the cost reductions seen in hardware. In an industry already feeling the cost pinch from Amazon’s retail DNA – it is now Google calling out that the existing cost reduction drive may not even have been aggressive enough. And we knew already that Google is serious, as it dropped its consumer pricing for 100 GB of storage below a Google Cloud use of the same amount of storage – until today. We’ll look into the commercial dynamics of how we think Google enables this price reduction later.
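To see why that gap matters, here is a quick back-of-the-envelope check using the rates quoted in the post (20-30% annual hardware improvement versus roughly 8% annual cloud price cuts); the midpoint and the five-year horizon are my own assumptions.

```python
# Back-of-envelope: compound the two rates quoted above over five years.
hw_annual_improvement = 0.25   # assumed midpoint of the quoted 20-30% per year
cloud_annual_cut = 0.08        # quoted ~8% per year
years = 5

hw_cost_left = (1 - hw_annual_improvement) ** years      # ~0.24 of original
cloud_price_left = (1 - cloud_annual_cut) ** years       # ~0.66 of original

print(f"Hardware cost after {years} years:  {hw_cost_left:.0%} of the original")
print(f"Cloud price after {years} years:    {cloud_price_left:.0%} of the original")
# Hardware got roughly 76% cheaper while cloud prices fell only ~34%,
# which is the margin Google implies other providers have been keeping.
```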

We think cloud pricing should track Moore’s Law, so we’re simplifying and reducing prices for our various on-demand, pay-as-you-go services by 30-85%:



  • Compute Engine reduced by 32% across all sizes, regions, and classes.
  • App Engine pricing is drastically simplified. We've lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 33%. In addition, many services, including SNI SSL and PageSpeed are now offered to all applications at no extra cost.
  • Cloud Storage is now priced at a consistent 2.6 cents per GB. That’s roughly 68% less for most customers.
  • Google BigQuery on-demand prices reduced by 85%.


MyPOV – Google is showing the application of Moore’s Law and significantly reducing prices. The good folks up in Seattle will check if this is factual – but it looks to me like the biggest price reduction we have seen in the public cloud. Where AWS follows its retail DNA of smallish cost reductions – mimicking the always-on-sale strategy seen with some brick and mortar retailers – Google is giving away one year of cost savings. And that’s the most interesting insight here – as anything beyond a 30% price reduction shows that Google may have pocketed some extra profits, too. And there is nothing negative about it, by the way – being price competitive while keeping a good margin to protect yourself against upcoming price wars is a very viable, and probably the only, public cloud vendor price and business strategy. Some colleagues have already pointed out that Google is now the most cost effective cloud provider for the highly demanded high-memory instance category. If Google keeps that cost leadership, it will create a very viable alternative to Amazon in regard to the next-generation, compute-intensive, in-memory applications category. And needless to say – the storage reductions make Google Cloud storage cheaper for enterprises building and using the cloud than it costs end users to consume it. It has to be like that to foster and grease an ISV ecosystem.

Sustained-Use discounts
In addition to lower on-demand prices, you’ll save even more money with Sustained-Use Discounts for steady-state workloads. Discounts start automatically when you use a VM for over 25% of the month. When you use a VM for an entire month, you save an additional 30% over the new on-demand prices, for a total reduction of 53% over our original prices.

MyPOV – This is probably the most innovative move by Google on the commercial side of the public cloud in a long time – if not ever. The key benefit of cloud in terms of elasticity of load becomes a disadvantage when the load stabilizes so much that an originally elastic load becomes a static load. Ultimately that’s a good sign for software vendors, as they want to grow their business, and hand in hand with that comes a more stable load profile. In technical reality that load profile – always assuming a neatly scaling application architecture – realizes itself in VMs becoming static, meaning they run 24x7. And the commercial consequence is that this VM becomes more expensive than a non-VM load. There are numerous cases of software vendors starting out in the public cloud but, once loads have stabilized, moving their load to an on-premises, dedicated data center environment. Google (and every other public cloud vendor) doesn’t want to see that – so major credit to Google for making this commercially less attractive to do. And to a certain point it is fair – less needs to happen at a cloud provider when VMs become dedicated, so passing along some of these cost savings to customers for the ‘loyalty’ actually makes good business sense. Setting the usage threshold at 25% and the maximum saving at an additional 30 percentage points are parameters I’d say we will see more action on in the future. And I leave it to some tech pundits to speculate on the underlying Google architecture – what are the savings Google sees, and how much of it does it pass along with the 30 percentage points.
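For illustration, here is a small calculator for the mechanism described above. The assumption that the extra discount ramps linearly from 0% at 25% monthly usage to 30% at full-month usage is mine (Google's actual tiering may be stepped), and the base hourly price is a placeholder.

```python
# Simplified sustained-use discount model (assumption: linear ramp from 0%
# extra discount at 25% monthly usage to 30% at 100% usage).
def effective_hourly_price(on_demand_hourly: float, usage_fraction: float) -> float:
    """Average hourly price after the sustained-use discount."""
    if usage_fraction <= 0.25:
        extra_discount = 0.0
    else:
        extra_discount = 0.30 * (usage_fraction - 0.25) / 0.75
    return on_demand_hourly * (1 - extra_discount)

base = 0.10  # placeholder on-demand $/hour for some instance type
for usage in (0.25, 0.50, 0.75, 1.00):
    price = effective_hourly_price(base, usage)
    print(f"VM used {usage:.0%} of the month -> ${price:.4f}/hour on average")

# At 100% usage the 30% sustained-use discount stacks on the 32% lower
# on-demand price: (1 - 0.32) * (1 - 0.30) ~= 0.48, i.e. roughly the 53%
# total reduction over the original prices quoted by Google.
```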

Finally it confirms Google’s commitment to the VM – there are (at least for now) no ambitions or plans visible for anything in the bare metal field. And clearly Google is not interested in reserving instances for multi-year deals. Enterprises like these options though – as they give them cost certainty. But Google will rightly argue that an enterprise can gain similar certainty with three years of sustained usage. With the upside (in contrast to Amazon) that price reductions (Moore’s Law anyone?) will take the cost down through the three years. An argument I expect enterprises will be open to – after some explaining.

With our new pricing and sustained use discounts, you get the best performance at the lowest price in the industry. No upfront payments, no lock-in, and no need to predict future use.

MyPOV – The key emphasis has to be on – no need to predict future use. As Churchill said, predictions are always tricky, especially concerning the future [freely quoted]. And many cloud users see themselves in that situation at the beginning of the billing cycle – how many dedicated instances will we need for the next month? Google takes away that challenge, which will be greatly appreciated. It also moves the value proposition from ‘pay by the glass’ closer to ‘all you can eat’. Someone will do the math and keep load on a VM for some minutes or even hours longer in order to lock in the full discount. An easier decision to make than predicting what you need.

Making developers more productive in the cloud
We’re also introducing features that make development more productive:



  • Build, test, and release in the cloud, with minimal setup or changes to your workflow. Simply commit a change with git and we’ll run a clean build and all unit tests.
  • Aggregated logs across all your instances, with filtering and search tools.
  • Detailed stack traces for bugs, with one-click access to the exact version of the code that caused the issue. You can even make small code changes right in the browser.

We’re working on even more features to ensure that our platform is the most productive place for developers. Stay tuned.

MyPOV – Needless to say that making developers more productive is a main draw to specific clouds. And Google has picked an attractive first round of DevOps / debug functions to get the attention of the development community. Having seen a lot of troubled software products, the automated unit tests are a valuable feature. Probably it is also a self-preservation mechanism for Google – as nothing too crazy can happen through the code. But it is also good to see that Google extends the same services its internal developers have to its cloud customers.

Introducing Managed Virtual Machines
You shouldn't have to choose between the flexibility of VMs and the auto-management and scaling provided by App Engine. Managed VMs let you run any binary inside a VM and turn it into a part of your App Engine app with just a few lines of code. App Engine will automatically manage these VMs for you.

MyPOV – Well, this really should be called ‘AppEngine managed VMs’. With this Google addresses a long-term critique and weakness of Google AppEngine: that you could not break out of it. And as much as that is intended from a stability perspective – it limits the scope of the apps that can be built on AppEngine. Now developers can access C libraries and local (Google calls them native) resources. But AppEngine remains in charge, as it enables and controls the managed VM.

Expanded Compute Engine operating system support
We now support Windows Server 2008 R2 on Compute Engine in limited preview and Red Hat Enterprise Linux and SUSE Linux Enterprise Server are now available to everyone.

MyPOV – This is a huge win for Google, which before operated only on the two more exotic Linux variants. Now decision makers not only have access to the two most popular enterprise Linux versions with RHEL and SUSE – but also to Windows Server 2008. Both will raise fewer concerns among corporate IT decision makers, as well as among mainstream-minded CTOs at application vendors. Lastly, given the managed VM capability, a number of local resource options that developers are familiar with and rely on become available for AppEngine.

Real-Time Big Data
BigQuery lets you run interactive SQL queries against datasets of any size in seconds using a fully managed service, with no setup and no configuration. Starting today, with BigQuery Streaming, you can ingest 100,000 records per second per table with near-instant updates, so you can analyze massive data streams in real time. Yet, BigQuery is very affordable: on-demand queries now only cost $5 per TB and 5 GB/sec reserved query capacity starts at $20,000/month, 75% lower than other providers. […]

MyPOV – An aggressive move by Google in the BigData field – the other providers being the usual suspects. The key takeaway though is that Google wants a piece of the fast-growing pie of BigData apps being built for the cloud.
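A quick break-even check on the quoted numbers shows who the reserved-capacity tier is really aimed at; query throughput and other limits are ignored here, and the figures come straight from the post.

```python
# Break-even between BigQuery on-demand ($5/TB scanned) and reserved
# capacity (5 GB/sec for $20,000/month), ignoring throughput considerations.
on_demand_per_tb = 5.0          # USD per TB scanned
reserved_per_month = 20_000.0   # USD per month for 5 GB/sec reserved capacity

break_even_tb = reserved_per_month / on_demand_per_tb
print(f"Break-even at {break_even_tb:,.0f} TB (~{break_even_tb / 1000:.0f} PB) scanned per month")
# Scanning less than roughly 4 PB a month? On-demand is cheaper. Above that,
# reserved capacity wins - clearly a tier for heavy, continuous BigData apps.
```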



Overall POV

A truly landmark point for the cloud, with Google laying its cards on the table. Reports say that Hölzle and team only switched focus to the public cloud in January this year – if true, a lot has been done in little time and the competition is warned. We will have to see if Hölzle’s team will be able to neglect its current largest customer – Google itself – for the next quarters, but the ambitions and hints during the event are there.

Historically – in statements made by Marissa Mayer (when still at Google) – Google has been the only cloud provider that openly stated that excess capacity should be given back to other developers (thanks to colleague @ReneBuest for reminding me recently). All other cloud providers have – despite my probing – never admitted to it. And maybe Google won’t anymore either – but it’s good business practice. If a cloud provider has a very elastic cloud, why not commercialize excess capacity at very attractive, at-cost rates? If the bulk of the load is higher margin, you are still running a formidable business (check Google’s latest earnings). And as long as you grow your cloud capacity faster than public cloud demand grows – well, then that spare capacity is unlikely to see bottlenecks, given the scale at which Google operates.

Over at RightScale, Hassan Hosseini has done a detailed comparison between Google and Amazon. Google mostly comes out on top. It’s interesting that in the most price-attractive scenario – three years of sustained usage at Google versus a three-year commitment at Amazon – AWS comes out slightly ahead, but of course with the three-year commitment. It’s interesting because the three-year commitment comes closest to a cloud user owning their hardware – so it’s probably the closest price to the real cost of operating a cloud infrastructure.

All in all it is very good news for overall cloud adoption. Enterprises will benefit from better and more cost effective software, application vendors will have more options on where to move load, and developers have a cloud provider that has cachet and cares for them. On the flip side, Google’s approach is a developer-centric cloud – many more things need to happen for Google to move traditional loads such as existing commercial databases and enterprise applications to the cloud. Hoping for all these applications to be rebuilt on the technologies available in the Google cloud will not be a viable medium-term strategy. But for now this is a great step by Google, and we are eager to measure the stride length of the next one – same cadence, shorter or longer. For sure the startup audience is listening.


Here is the first of many videos of the event:






Lastly – this is the week of the cloud – yesterday Cisco announced its Intercloud, tomorrow Amazon has a cloud event, and on Thursday Microsoft has announced a press conference. I am sure the price strategists in Seattle and Redmond are crunching some numbers.

-------------

More on Google:



  • A tale of two clouds - Google and HP - read here
  • Why Google acquired Talaria - efficiency matters - read here

 
 

 


Calling for a uniform approach to card fraud


Abstract

The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. Seamless convenience is underpinned by the universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere.

So with this determination to facilitate trustworthy and supremely convenient spending everywhere, it's astonishing that the industry is yet to standardise Internet payments. Most of the world has settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked. This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.

This blog is an edited extract from an article of the same name, first published in the Journal of Internet Banking and Commerce, December 2012, vol. 17, no.3.

Skimming and Carding

With "carding", criminals replicate stolen customer data on blank cards and use those card copies in regular merchant terminals. "Skimming" is one way of stealing card data, by running a card through a copying device when the customer isn't looking (but it's actually more common for card data to be stolen in bulk from compromised merchant and processor databases).

A magnetic stripe card stores the customer's details as a string of ones and zeroes, and presents them to a POS terminal or ATM in the clear. It's child's play for criminals to scan the bits and copy them to a blank card.

The industry responded to skimming and carding with EMV (aka Chip-and-PIN). EMV replaces the magnetic storage with an integrated circuit, but more importantly, it secures the data transmitted from card to terminal. EMV works by first digitally signing those ones and zeros in the chip, and then verifying the signature at the terminal. The signing uses a Private Key unique to the cardholder and held safely inside the chip where it cannot be tampered with by fraudsters. It is not feasible to replicate the digital signature without having access to the inner workings of the chip, and thus EMV cards resist carding.
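For readers who want to see the principle rather than the EMV specification itself, here is a minimal sketch of the sign-at-the-card / verify-at-the-terminal idea using generic public-key cryptography (Python's cryptography package). Real EMV uses its own key hierarchy, cryptograms and message formats, so this is an illustration of why copied data fails verification, not an implementation of the standard.

```python
# Minimal sketch of the chip-card idea: the card signs transaction data with a
# private key that never leaves the chip; the terminal verifies the signature.
# Illustration only - not the actual EMV key hierarchy or message format.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Card personalization: the private key lives only inside the chip.
card_private_key = ec.generate_private_key(ec.SECP256R1())
card_public_key = card_private_key.public_key()  # known to terminal / issuer

def card_sign(transaction_data: bytes) -> bytes:
    """What the chip does: sign the transaction data."""
    return card_private_key.sign(transaction_data, ec.ECDSA(hashes.SHA256()))

def terminal_verify(transaction_data: bytes, signature: bytes) -> bool:
    """What the terminal does: check the signature against the card's public key."""
    try:
        card_public_key.verify(signature, transaction_data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

txn = b"PAN=4111xxxx;amount=42.00;currency=AUD;nonce=8f3a"
sig = card_sign(txn)
print(terminal_verify(txn, sig))              # True: genuine card, genuine data
print(terminal_verify(txn + b"tamper", sig))  # False: altered or copied data
```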

Online Card Fraud

Conventional Card Not Present (CNP) transactions are vulnerable because, a lot like the old mag stripe cards, they rest on clear text cardholder data. On its own, a merchant server cannot tell the difference between the original card data and a copy, just as a terminal cannot tell an original mag stripe card from a criminal's copy.

Despite the simplicity of the root problem, the past decade has seen a bewildering patchwork of flimsy and expensive online payments fixes. Various One Time Passwords have come and gone, from scratchy cards to electronic key fobs. Temporary SMS codes have been popular but were recently declared unsafe by the Communications Alliance in Australia, a policy body representing the major mobile carriers.

"3D Insecure"

Meanwhile, extraordinary resources have been squandered on the novel "3D Secure" scheme (MasterCard "SecureCode" and "Verified by Visa"). 3D Secure take-up is piecemeal; it's widely derided by merchants and customers alike. It is often blocked by browsers; and it throws up odd looking messages that can appear like a phishing attack or other malfunction. Moreover, it upsets the underlying Four Party settlements architecture, slowing transactions to a crawl and introducing untold legal complexities.

So why doesn't the card payments industry go back to its roots, preserve its global Four Party settlement architecture and standards, and tackle the real issue?

Kill two birds with one chip

We could stop most online fraud by using the same chip technologies we deployed to kill off skimming.

It is technically simple to reproduce the familiar card-present user experience on a standard computer. It would just take the will of the financial services industry to make payment by smartcard standard. There are plenty of smartcard reader solutions on the market and indeed, many notebooks feature built-in readers. Demand for readers has grown steadily over the years, driven by the increasingly normal use of smartcards for e-health and online voting in Eastern Europe and Asia.

And with dual interface and contactless smartcards, the interface options open right up. Most mobile devices now feature NFC or "Near Field Communications", a special purpose device-to-device networking capability, which until now has mostly been used to emulate a payment card. But NFC tablets and smartphones can switch into reader emulation mode, so as to act as a smartcard terminal. Other researchers have recently demonstrated how to read a smartcard via NFC to authenticate the cardholder to a mobile device.

As an alternative, the SIM or other "Secure Element" of most mobile devices could be used to digitally sign card transactions directly, in place of the card. That's essentially how NFC payment apps work for Card Present transactions - but nobody has yet made the leap to use smartphone hardware security for Card Not Present.

Conclusion: Hardware security

All serious payments systems use hardware security. The classic examples include SIM cards, EMV, the Hardware Security Modules mandated by regulators in all ATMs, and the Secure Elements of NFC devices. With well designed hardware security, we gain a lasting upper hand in the criminal arms race.

The Internet and mobile channels will one day overtake the traditional physical payments medium. Indeed, commentators already like to say that the "digital economy" is simply the economy. Therefore, let us stop struggling with stopgap Internet security measures, and let us stop pretending that PCI-DSS audits will stop organised crime stealing card numbers by the million. Instead, we should kill two birds with one stone, and use chip technology to secure both card present and CNP transactions, to deliver the same high standards of usability and security in all channels.


Tom Hogan Joins Kony



Siebel veteran Thomas E. Hogan has been appointed chief executive officer of Kony, an enterprise mobile applications tools provider.  At Siebel Hogan was Senior Vice President of Global Sales and Operations.  Before joining Kony, Hogan served as executive vice president of Software at HP. Prior to joining HP, Hogan served as the president of Vignette, a publicly held software company specializing in enterprise content management.  Hogan started his career at IBM, where he  held a variety of executive posts. In his new role, Hogan will help Kony drive sales execution, achieve operational excellence, maintain product leadership, and facilitate geographical expansion.

“With more than 900 percent growth during the last three years, Kony is uniquely positioned in the enterprise mobility market,” said Raj Koneru, founder and chairman, Kony. “To capitalize on the opportunity it was time to bring in a new CEO to help scale the company.”

“It is an honor to succeed Raj Koneru as chief executive officer,” said Hogan.

Kony provides a cloud platform to support the mobile application software development lifecycle. The company’s customers include Fortune 1000 global banks, healthcare payers and providers, automotive, manufacturing, travel, hospitality and retail organizations, as well as a large global network of more than 150 partners.

The company has over 600 apps live, serving over 20 million end users in 45 countries. The Kony tools help define, design, develop, test, deploy, and manage multi-channel applications from a single codebase.

The company has an opening for a creative director on the careers page of its site.


New Analysis - Another week another Billion - Cisco Intercloud - A different approach to cloud – better late than never


Cisco entered the public cloud market today by announcing the world’s largest Intercloud (a new term – surprise).

 


Let’s look at the press release – and read between the lines a bit:

As businesses increasingly embrace private, public, and hybrid clouds to cost-effectively and quickly deliver business applications and services, Cisco today announced plans to build the world’s largest global Intercloud – a network of clouds – together with a set of partners. The Cisco global Intercloud is being architected for the Internet of Everything, with a distributed network and security architecture designed for high-value application workloads, real-time analytics, “near infinite” scalability and full compliance with local data sovereignty laws. The first-of-its-kind open Intercloud, which will feature APIs for rapid application development, will deliver a new enterprise-class portfolio of cloud IT services for businesses, service providers and resellers.

MyPOV – It’s easy to coin a new term, ‘Intercloud’ – harder to explain it and then maintain it. Cisco explains the Intercloud as a network of clouds; fair enough, but what is it? The Latin ‘inter’ refers to between – so it’s the network between the clouds – powered in the cloud? It’s harder to make a new term stick – so we will see who else may pick up the new buzzword. Cisco deserves kudos for making this a pretty rich cloud announcement – as it includes security and real-time analytics, is compliant with local data sovereignty laws (very interesting), supports RAD and is offered together with partners. The data sovereignty angle is an interesting feature to keep an eye on – and begs questions: who will create and maintain the legislative rules, who will enforce them, and what does it mean for the applications running in the Intercloud?

Cisco expects to invest over $1 billion to build its expanded cloud business over the next two years. Its partner-centric business model, which enables partner capabilities and investments, is expected to generate a rapid acceleration of additional investment to drive the global scale and breadth of services that Cisco plans to deliver to its customers.

MyPOV – There we go – $1 billion – stretched over two years, and we will have to see how much partner investment it will trigger.

The company plans to deliver Cisco Cloud Services with and through Cisco partners. The following organizations, which are either planning to deliver Cisco Cloud Services or have endorsed Cisco’s global Intercloud initiative, represent a sampling of the kinds of global partners Cisco expects to work with to build its cloud business: leading Australian service provider Telstra; Canadian business communications provider Allstream; European cloud company Canopy, an Atos company; cloud services aggregator, provider and wholesale technology distributor Ingram Micro Inc.; global IT and managed services provider Logicalis Group; global provider of enterprise software platforms for business intelligence, mobile intelligence, and network applications MicroStrategy, Inc.; enterprise data center IT solutions provider OnX Managed Services; information availability services provider SunGard Availability Services; and leading global IT, consulting and outsourcing company Wipro Ltd.

MyPOV – It’s a first to launch a cloud service with so many partners. But it looks more like a collection of the weak than of the strong. Orchestration will be a predictable challenge. Let’s measure what these partners will put up in investment in the coming quarters though; let’s give Cisco and them the benefit of the doubt. The biggest problem – none of the partners mentioned, with the possible exception of MicroStrategy, may bring significant workload with them – and that’s what makes a cloud cost effective. So where will the load come from?

[…]
The Cisco OpenStack-enabled Intercloud is designed to allow organizations and users to combine and move workloads – including data and applications – across different public or private clouds as needed, easily and securely, while maintaining associated network and security policies. It will also utilize Cisco Application Centric Infrastructure (ACI) to optimize application performance and to make rolling out new services much faster. Cisco will improve application security, compliance, auditing and mobility by using ACI’s centralized, programmable security policy to enable fine-grained control and isolation at scale; suitable for private and public cloud environments.

MyPOV – No surprise – this will be another OpenStack-powered cloud, with the promise to combine workloads, which we need to see delivered in the real world before we fully believe it. No surprise ACI is being used – the question is how much work partners and customers will have to do to adopt ACI, and how willing they will be to do that – given other public clouds may not ask them to take that step. And building an ACI-compliant application / load may hinder its transport to other OpenStack clouds – though hopefully not to the Intercloud operating partners.

[…]
Cisco Cloud Services expand on the Cisco industry-leading cloud portfolio, which already includes SaaS offerings, such as WebEx®, Meraki® and Cisco Cloud Web Security; differentiated cloud services, such as hosted collaboration and cloud DVR; and technologies and services to build public and private clouds, such as the Cisco Unified Computing System™ (Cisco UCS®), integrated infrastructure solutions such as VCE Vblock™ Systems and NetApp FlexPod, and Cisco Application Centric Infrastructure (ACI).

MyPOV – Cisco already brings some applications to the offering, the WebEx product probably being the most popular – but that will not be enough load, despite being the most popular load used by enterprises. It would be nice not to see the download of the WebEx applet every time a session runs against a server on a different version. And then Cisco throws in pretty much all products it can claim for the cloud, including the results of the VMware partnership around VCE Vblock and NetApp FlexPod.

Cisco is expanding the Cisco Powered™ program to include Cisco Cloud Services. Cisco will sell these new services through channel partners and directly to end customers. Partners who develop Cisco Powered services can offer more cloud offerings faster, with lower up front development costs, and operate at cloud speed and scale. […]

MyPOV – This is the most interesting and differentiating area, with a number of unique / significant capabilities. A Red Hat OpenShift based PaaS, SAP HANA running on UCS, WebEx and DaaS (Cisco, VMware and Citrix) are the most interesting ones, plus a bunch of more technical Cisco services. Collaboration as a Service (CaaS) struck me as one of the more unusual terms – but Cisco likes to call things a little differently than the rest of the industry.




Overall POV

Cisco certainly comes late to the game, albeit with a different angle, the Intercloud. Certainly a fair angle for the leading network provider, which fittingly also announced a whole new set of networking equipment. But all public clouds need management between their data centers, and that is what Cisco is really after. It’s unclear where all the Cloud Services will run – e.g. will WebEx run in the Intercloud (operated by whom?) or (only) in connected partner clouds, or both? So lots of questions remain. Also, existing cloud providers may go and order Cisco network gear with a little more consideration going forward.

On the bright side, Cisco deserves credit for coming out today and catching the early lead in what shapes up to be a key week for the cloud (Google on Tuesday, Amazon on Wednesday and Microsoft on Thursday) – so better late than never. And the services are rich and differentiating. But lots of questions on the hybrid and partner-based approach to cloud remain.



 


Event Report - BI2014 and HANA2014 takeaways – It’s all about HANA & Lumira - but is that enough?


I had the opportunity to attend the keynote presentation at the SAP BI2014 / HANA2014 conference in Orlando, organized by WisPubs. This used to be the BusinessObjects event and has slowly morphed into an SAP BI event – and this year – not surprisingly – a HANA event. With 1800 guests attending, it is a key event for the SAP BI community.

 

 

So here are my top 3 takeaways from the keynote:

 

 

 

 

 

  • The disruption message has arrived and (of course) HANA solves it – The red thread of Steve Lucas’s keynote was all around business disruption triggered by technology. And of course how SAP technology helps companies be a disruptor and how that, on the flip side, can help them when they are being disrupted. But then I am sure SAP will also be happy to help enterprises that have been disrupted – provided they can still come up with the payment for the new software technology.

    Lucas laid out how companies like Uber, WhatsApp and AirBnB have disrupted conventional businesses and achieved amazing valuations. Few observers may have noticed that WhatsApp has disrupted SAP’s Sybase 365 customers on the messaging side – but kudos for the openness, and WhatsApp is certainly a poster child story not to be missed. Unfortunately the disruptive element fell short in the customer stories presented in the keynote – with New South Wales Police and Fire, Velux and SpiritAero. We asked the same question in the analyst Q&A and Amit Sinha came back with a valid example of Italian tire manufacturer Pirelli selling tire usage data with the help of HANA. Definitely innovative and potentially disruptive for the competition.

The SAP HANA Platform

  • Lumira looms – The not so brilliantly named SAP BI product is making good progress – being used in two of the keynote demos. First to visualize the brackets of the current NCAA basketball tournament, a good, topical, high-involvement example that of course only gelled with a North American audience aware of the tournament. We also saw new infographic capability in Lumira, which is a nice addition of functionality to enable storytelling. It’s a first version – e.g. we missed annotations – but a promising start. Now we can only hope SuccessFactors and Lumira developers will talk, cooperate and use common assets across Lumira Storytelling and SuccessFactors Presentations functionality. The combination is a high-potential solution.

    And Lumira is becoming more and more the replacement and go-to product for older, former Business Objects products – as new functionality (e.g. Design Studio) is being built here – replacing e.g. the still popular Xcelsius.

 

Lucas and colleagues in the midst of IoT demo

 

  • HANA dominates – As expected – it is virtually impossible to get a new product from SAP or to build an innovative solution without getting to use HANA. And as Lucas shared, this is to a certain point by design. If you want to build a mobile solution, well, in the backend you will have HANA – like it or not. This certainly makes sense for SAP from a sales perspective – and even from a technology re-use perspective – but not all use cases of innovative applications require an in-memory database. Just think of Hadoop-based BigData scenarios. Mobile apps extending legacy. Social apps (probably Jam is HANA-free at this point). Etc. SAP needs to be careful not to limit the growth of some of its technology products for the sake of HANA integration.

 

SAP did a good job showing an Internet of Things (IoT) demo – tying together huge data volumes with personalization and predictive delivery and maintenance. Nice showcase.

 

MyPOV

A good start to the BI2014 / HANA2014 event that confirms HANA’s pivotal role. Lumira is getting better and I expect it to soon replace all former BO products, not that SAP is saying that officially any time soon. The general concerns I have around HANA (elasticity, programming language) are not addressed, but Sapphire is the event for that, not BI2014. It looks like the SAP technology products (+/- 50% of SAP license revenue) are doing well. My concern is that SAP did not bring the full platform package – the HANA Cloud Platform (HCP) – to this event – but enterprises want to build rich analytical applications. A missed opportunity. And not surprisingly – the HANA vs Hadoop relationship remains in the field of unknown forces avoiding each other.

I am still onsite for another 24 hours and will follow up with another post around more briefings and meetings setup here at BI2014 / HANA2014.

 


ManageEngine Launches Siebel Application Performance Monitoring



ManageEngine is expanding its portfolio of performance management tools to include Siebel. Its Applications Manager product will add support for Oracle Siebel, providing operational data to Siebel Administrators to ensure high availability and performance of Siebel applications.

The company will also be announcing new versions of its other products: OpManager, a network and data center infrastructure management solution; Desktop Central, a desktop and mobile device management (MDM) solution; as well as Applications Manager.

ManageEngine will be exhibiting its new products along with the rest of its portfolio in booth 2327 at Interop Las Vegas being held March 31–April 4, 2014, at Mandalay Bay Convention Center in Las Vegas.

Worldwide, the company claims 90,000 customers — including most of the Fortune 500 — use its products to ensure the optimal performance of their critical IT infrastructure, including networks, servers, applications, and desktops. Another 300,000-plus administrators use the free editions of ManageEngine products. ManageEngine is a division of Zoho. To date, Zoho.com has launched 25+ online applications — from CRM to Mail, Office Suite, Project Management, Invoicing and Web conferencing. With offices in California, Austin, Chennai, Yokohama and Beijing, Zoho Corporation serves the technology needs of more than 9 million customers worldwide.

The company can be followed on Twitter at @ManageEngine.


Most Wearable Devices Will Fail and the Name of the Category Will Change From Wearables to Sense-ables



If you were at CES, you could not have missed a new category of computing called "wearables." This category of devices can be described as the FitBit gone mad. Wearables currently come in three main categories: health trackers, watches and glasses. In each of these categories some if not all devices are pivoting to solving the world's biggest health problems.

Almost daily, I see a new wearable device launched, and while they all are minimally viable products, they continually get sillier and sillier. We are seeing everything from wearable necklaces (as if necklaces were never wearable), earrings, shoes, clothing and many other bodily accoutrements being outfitted with small computers/biosensors, low voltage needs and high connectivity. Like clockwork, every new device, no matter how silly, calls out to the world with press releases, tweets, YouTube videos and much pounding of the manufacturing firm's proverbial digital chest, reckoning how disruptive some new wearable product is.

My observation is that we have bastardized the word disruption. Most wearables are disturbing mankind under the once well-intended charter of disruption.

While a minority of humans continue to wear these devices past the first few months after purchase, most folks (like myself) stop wearing them after the novelty has worn off. I gave up my FitBit after about six months, my Pebble watch in about six days and my Google GLASS, well, I got over that bad boy in about six hours. I got over them the same way I got over my first CASIO watch, which doubled as a calculator in high school; said watch plus calculator was disturbing my life. Disruption does not have to disturb.

Good disruption is change without disturbance.

The hypothesis is simple: wearing something on my body that is not comfortable, fashionable and delivering more value than it disturbs me is not a sustainable value proposition. So the big question is what will become of wearables? Clearly the movement of computing to the edge of the network will continue, and the connecting of things/biosensors that are not computers (Internet of Things) will continue. Wearables currently position themselves as trying to solve health's biggest problems.

Well, do I need to wear the solution to health's biggest problems 24/7, or can the solution simply sense me daily/weekly?

The solutions will become sensible, and may be called sense-ables.

Just recently, Singularity University wrote on Forbes.com about the "new generation of revolutionary biosensors that contain the power of clinical lab instruments in packages that are light, small, wireless and highly efficient." The Human API calls for "Sensoring", all suggesting it is the sensing and sensors that matter, not where or how they are embedded/worn/adorned.

During a Hacking the Future episode, John Nosta from Forbes.com and I landed on the construct of "implantables" or, more eloquently as John coined it, "dermals." Implantables and/or dermals will do the complete opposite of what current wearables do; instead of "disrupting/disturbing" they will be dormant, unnoticeable, behind the scenes and sensing passively. Experts will argue that the problem with sense-ables is that they do not provide a "stream" of 24/7 health information; instead they will probably provide a single point of time (SPOT) measure of health.

Here are some examples of sense-ables.

 

  • Cars -- the steering wheels, the seats.

  • Bathrooms -- mirrors, toilets, toothbrushes, shower drains (for heaven's sake, we have scales in there already).

  • Bedrooms -- pillows, mattresses, sheets.

  • Offices -- chairs, pens.

  • Pharmacies -- think about a "sensing room" where you go in and submit your data in 2-5 minutes.


So what about the "streaming health" thought?

More and more we incorrectly call out for a "stream" of health information. I am guilty of this as well; here is an outdated version of my thinking around "streams" of health information. But do we need health data to "stream" to solve health's biggest problems? Are we over-solving? Over-engineering?

We are getting the data part of health completely wrong.



  1. For most, a stream of personal health information from a clinical perspective is only marginally more valuable than a SPOT read of your vitals daily, or weekly (unless you need ICU type monitoring). This means embedded sensors that can sense me daily or weekly are enough to improve health exponentially without me having to "wear" a lab on me designed to stream my health.

  2. More data does not automatically mean more likely personalized medicine. Yes, we are seeing the call for me-dicine, but outside of some therapeutic classes such as oncology and a handful of others, we may never get to personalized medicine; we may never need to. We may get to personalized medicine for groups, demographics, genders, age and body types, but the commercial investment to make a pain killer designed with my "health fingerprint" in mind is just silly.

  3. The data from the human body is imperfect, and we need to focus on developing cognitive computing that "heals" the health data from human bodies before we can use it to make clinical conclusions. Otherwise, we will have false diagnoses driving global hypochondria. Until then, where the power of data from the sick can be beneficial is where we can study health data from crowds (or in crowds) and use analytics to extrapolate from crowds (or of crowds). We will only benefit from an "index of health data" from a crowd over a stream from a person until we develop cognitive computing to heal imperfect health information sourced from imperfect human bodies.

  4. Innovations that see massive adoption, eventually driving revolutions, are those that help the lion's share of society. The mobile phone drove a revolution of hyper-adoption because it is difficult to debate that it does not improve every human life it touches. My CASIO watch plus calculator from high school never drove a revolution because its adoption was ring-fenced to a small population. Most human beings would live exactly the same lives we live today even if they wore a hundred wearable devices daily; the value of wearing a lab on the body to stream health information for exponential health value is ring-fenced to a small (albeit unfortunately unhealthy or sick) population of society. Most of us only need SPOT measures of our health to improve our health outcomes exponentially.


There are simply too many simpler ways to solve the problem wearables are all trying to solve -- which is capturing a stream of imperfect health information and hoping for personalized medicine as a result for every human -- and too many holes in the argument that they will drive a revolution. A fad for sure.

Be sensible -- build a sense-able, not a wearable.

 


Briefings this week: March 24 - 28



Here's who I'm speaking with this week:

Monday

Tuesday - Wednesday

Friday

  • BearingPoint
  • LittleBird

As a reminder, I'm interested in hearing from companies that enable customer experience management, provide marketing services (including agencies and consultancies) and support innovation agenda items.

If you are interested in briefing Constellation Research on your marketing technology, visit the Contact Us form.

 

 

 


Research Report: Digital ARTISANs - The Seven Building Blocks Behind Building A Digital Business DNA


Shift to Digital Businesses Requires A Transformation Of Leadership And Organizational DNA

The discussion about digital business often goes deep into the five pillars of digital technologies.  In fact, the convergence of these pillars has spawned the latest and trendiest iterations of technology, from enabling the sharing economy to 3D printers to wearables that drive sensor and analytical ecosystems.  As organizations contemplate how these broad-based digital business trends will disrupt existing business models, leaders can apply Constellation’s Futurist Framework and consider dimensions from the political, economic, societal, technological, environmental, and legislative (PESTEL) angles.  However, even after much planning, astute CXOs from market-leading and fast-follower organizations quickly realize that technology and process alone are not enough to transform their organization’s DNA.

It’s Still The People, Stupid!

Despite robots potentially taking over by 2020 (snark), people still play a key role in the success of digital business transformation.  In the shift from selling products and services to promising outcomes and experiences, information flows faster.  Every node and person in the digital network must react more quickly, yet also needs to be more intelligent.  Success comes faster but so does failure.  Thus, both the seduction of massive success and the fear of facing massive failure provide a great catalyst to design, influence, infuse, or transplant the proper digital DNA.

The DNA Of Digital Artisans Blend The Intelligence Of Quant Jocks With The Co Innovation Skills Of The Creative Class

Organizations must assess their innate ability to thrive in a digital business environment.  These skills go not only beyond the quant jocks who deliver hard science and engineering prowess, but also beyond the creative class who can co-innovate and co-create on demand.  Consequently, organizations are rethinking the attributes a digital business should employ and embody.

The Bottom Line: Rise Of Digital Artisans Required For Organizational Transformation

Short of having every leader emerge as the Chief Digital Officer, the new war for talent will focus on attracting, developing, and retaining digital artisans.  Concurrently, a market will develop for those who can spread the digital business gospel and infuse digital artistry into organizations.  While there are many attributes a digital business should embody, the seven building blocks behind digital ARTISANS embody the digital DNA required for success:

  • (A) Authentic: stay true to the organization’s mythology and brand
  • (R) Relevant: deliver contextual personalization at scale
  • (T) Transparent: operate with an understanding that everything eventually becomes public
  • (I) Intelligent: adapt to self-learning, smart systems that anticipate need
  • (S) Speedy: infuse responsiveness in digital time
  • (A) Analytical: democratize decision making with all types of data
  • (N) Non-conformist: espouse disruptiveness in the creation and innovation of new ideas

Your POV.

Ready for digital disruption?  Do you have a digital artisan DNA?  How did you get there? Add your comments to the blog or reach me via email: R (at) ConstellationR (dot) com or R (at) SoftwareInsider (dot) org.

Please let us know if you need help with your Digital Business transformation efforts. Here’s how we can assist:

  • Developing your digital business strategy
  • Building a Digital ARTISAN program
  • Connecting with other pioneers
  • Sharing best practices
  • Vendor selection
  • Implementation partner selection
  • Providing contract negotiations and software licensing support
  • Demystifying software licensing
Resources

Reprints

Reprints can be purchased through Constellation Research, Inc. To request official reprints in PDF format, please contact Sales .

Disclosure

Although we work closely with many mega software vendors, we want you to trust us. For the full disclosure policy, stay tuned for the full client list on the Constellation Research website.

* Not responsible for any factual errors or omissions.  However, happy to correct any errors upon email receipt.

Copyright © 2001-2014 R Wang and Insider Associates, LLC. All rights reserved.
Contact the Sales team to purchase this report on an a la carte basis or join the Constellation Customer Experience

 
