Spinnaker Support Expands To India


In an expansion driven by the company’s continued growth, Spinnaker Support, a provider of Siebel, SAP, and JD Edwards third-party maintenance and services, has opened a technology center in Mumbai, India. Staff within this office will act as second-line support for the front-line operations staff located in the United States, United Kingdom, and Singapore.


The effort to open the Spinnaker Support Global Technology Center (GTC) began after significant growth in new customers, increased interest in its tax and regulatory compliance services, and a surge of interest in Southeast Asia combined to trigger the need for an additional facility and resources.

The Spinnaker Support GTC is located in the Powai section of Mumbai. The office will be led by Maurice D'Souza, a seasoned IT executive whose business acumen spans sales and marketing, operations management, business strategy, strategic alliances, and people management. D’Souza brings extensive experience to his role, having spent 25 years with SYSTIME (now KPIT), a global IT consulting and product engineering company.

“I am extremely pleased to announce our expansion into India. This direction has been on the corporate roadmap for a couple of years,” said Matt Stava, CEO and Managing Principal of Spinnaker Support. 

Headquartered in Denver, Colorado, Spinnaker Support provides services from offices located in Cape Town, Denver, London, Mumbai, and Singapore. Current clients include Fortune 1000 and mid-sized enterprises from the Americas, Europe, Asia-Pacific, and South Africa operating in industries such as Manufacturing, Healthcare, Government, Retail, Food and Beverage, and Finance.

For more information about careers, please call +1-877-476-0576, go to the careers page of the site, or email [email protected].


Constellation Office Hours: Box IPO, HANA, Cisco InterCloud, Oracle Sales Cloud


Get inside the minds of Constellation analysts during Constellation Office Hours. Office Hours are casual conversations among Constellation analysts during which they discuss the latest developments in enterprise technology. At the conclusion of Office Hours, the analysts answer questions from the audience. This month, Holger Mueller, Alan Lepofsky, and J. Bruce Daley discussed:

  • Box IPO
  • Cisco Intercloud 
  • Cisco & Chrome partnership for UC
  • HANA
  • Lumira
  • Oracle Sales Cloud

Constellation Office Hours screen shot

1:35 - Constellation News: Peter Kim joins Constellation. 

2:06 - New Research: 

5:28 - Events we're attending

Industry News
6:27 - Alan Lepofsky: Box IPO, Cisco and Google partner for UC

9:15 - Holger Mueller: Box IPO, BW on HANA and Lumira, Google Cloud, Cisco Intercloud

13:40 - J. Bruce Daley: Oracle Sales Cloud v8

Big Ideas for Research
15:40 - Alan Lepofsky: It's not about age, it's about digital proficiency. A new framework for planning digital transformation.

17:40 - Holger Mueller: HCM and engagement strategies. Focusing on vertical; not horizontal career paths. PaaS.

20:00 - J. Bruce Daley: Oracle Sales Cloud as part of a larger mobility study. Standardization across mobile platforms must occur, but it won't be on HTML5.

 

Join us for Office Hours in April: http://constellationr.com/content/constellation-office-hours-april

 

 


Effectively Manage Your Social & Mobile Workforce



A major business transformation is brewing in the enterprise today. Enterprise mobility, business velocity, and a geographically dispersed, multi-device, multi-generational workforce are converging to deliver the promise of the responsive organization. Organizations that miss this paradigm shift will face dire consequences. How can you manage this shift effectively, ensure that it is sustainable, and reap the benefits of being a responsive organization? In this session you’ll learn how to apply practical steps and effective techniques to maximize SharePoint for your multi-device and multi-generational workforce.

 

 

Event Report - AWS Summit in SFO - AWS keeps doing what has been working for 8 years...


We had the opportunity to attend the AWS Summit in San Francisco today, a well-attended event. Not surprisingly, Amazon Web Services (AWS) can pull a lot of interest in the Bay Area, and most of the crowd was knowledgeable and using AWS products. It also became clear that AWS Summits are more educational events for Amazon, not product announcement events – those are more likely to happen at re:Invent.


So acknowledging that – and given the different nature of other cloud events this week – the event may have seemed disappointing at first, but it had considerable punch, too. Here are my top 3 takeaways:
  • Breadth and Depth are the message – Throughout the keynote, Jassy kept pointing to the experience, track record, and success of AWS. Here are all the themes we picked up:
    • AWS is turning 8 – definitely the older sibling to some of the toddlers [adding for clarification – Google] and newborns [Cisco] of this week.
    • AWS is the market leader – The Gartner Magic Quadrant and customer logo slides carried that message.
    • AWS is secure – Look at all our government security certificates (here is the latest from the US DoD).
    • AWS ships product – WorkSpaces is available to all customers.
    • AWS keeps innovating – Already on track to beat 2013's number of enhancements. And AWS keeps bringing out new instance types for lower cost and for better enabling next-generation apps (HS1 and R3 instances).
    • AWS is most comprehensive – The almost five-minute build-up walking through the AWS tech stack served that message. And smart to put people services – with training – on top of it all.
    • AWS stays price competitive – AWS reduced prices for the 42nd time – not a one-time move.
    • AWS works well with your private cloud – More options to peer your private cloud with VPC.

  • Enterprise is the battle, private cloud is the target – Needless to say, AWS showed good examples on the startup side using its products. Flipboard was certainly a great showcase of rapidly growing demand for massive scale, and of AWS enabling it.

    But in my view the coup was certainly having Infor CEO Charles Phillips on stage. Most of the audience was not familiar with Infor, and for many attendees it was even a questionable choice (Infor who?) – but I am sure that back on the web, decision makers were listening up: when the 3rd largest ERP vendor pulls a Netflix [to speak AWS crowd language], that is certainly remarkable. Basically, Infor is running its next-generation applications, CloudSuite, on AWS. It is already using Redshift for analytics. But now critical enterprise resource data and processes will run on AWS – a huge departure from the test, development, and trial systems we have seen before. The 'SaaS by accident' phenomenon is now becoming real for enterprise processes running on AWS. And it makes sense, as Phillips pointed out – on average, enterprise software systems are only utilized around 20%, a perfect showcase for the cloud.
     
    • The other key message for the CIOs out there was that AWS is becoming more and more friendly to co-existence with the private and hybrid cloud. With many AWS competitors playing on both the private and public side, it is clear that AWS does not want to give up that space all too easily, and it was good to see how the private cloud segment gained in length compared to re:Invent. Of course price reductions are a favorable argument, too – I am not sure how many CIOs had to revisit their cost assumptions for their private cloud operations and plans, but I don't think it was only a few. When cost can no longer be a justification for private cloud, security concerns will surely be raised instead – but AWS did a good job of addressing those better, too.

  • Price matters – Of course AWS lowered prices, its 42nd price reduction. And they are and were pretty substantial. But by now they are expected. Back at re:Invent an excited attendee crowd burst into applause – not so much today in San Francisco. How substantial they are, and whether AWS can keep its cost leadership position, is unclear right now; we will see the analysis of that in the next days. But it was good enough to let existing AWS clients stay where they are, and it certainly ensures AWS is a very attractive platform to build and run software on.
     

    MyPOV

    The cloud wars are only starting, and AWS as the market leader is playing a smart game. Ignore what the competition has done the same week – hold the course. Do what you have been doing – innovate (AppStream, WorkSpaces, and Kinesis were mentioned), deliver to the public (WorkSpaces is available to all customers), and reduce prices. Just stress a little more how established and what a safe choice AWS is – something Jassy and AWS certainly pulled off today. An 8-year track record certainly helps. And signing up Infor is a huge confidence point for enterprise IT – and the backdoor into the enterprise.

    And even as Jassy did not mention any competition – a good move in my view – I am pretty sure that the retailer DNA of Amazon keeps a keen eye on what the competition does.

    For customers this is all great news. Compute resources have never been so cheap, and they can expect them to get cheaper. The revolution is now happening around long-term procurement of these resources. Decision makers should take a hard look at compute procurement, as consumption-based models have gotten dramatically more attractive in just… the last 24 hours. It will be hard for vendors that sell compute resources on premises to react quickly to this. Too much margin is at play – if you are purchasing compute resources now, question that margin not once, but at least twice.
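The procurement math above can be sketched in a few lines. All figures here (server price, on-demand rate, amortization period) are hypothetical assumptions for illustration, not actual vendor prices; the 20% utilization figure echoes the enterprise utilization number cited earlier.

```python
# Illustrative comparison of owning compute vs. consumption-based pricing.
# All numbers are made-up assumptions, not vendor quotes.

def on_prem_hourly_cost(capex, years, utilization):
    """Effective cost per *used* hour of a server bought up front."""
    total_hours = years * 365 * 24
    return capex / (total_hours * utilization)

def cloud_hourly_cost(on_demand_rate):
    """Consumption pricing: you pay only for the hours you actually use."""
    return on_demand_rate

# A hypothetical $9,000 server amortized over 3 years, used 20% of the time.
own = on_prem_hourly_cost(capex=9000, years=3, utilization=0.20)
rent = cloud_hourly_cost(on_demand_rate=0.50)

print(f"on-prem per used hour: ${own:.2f}")   # $1.71
print(f"cloud per used hour:   ${rent:.2f}")  # $0.50
```

At low utilization, the amortized cost of every hour actually used dwarfs a pay-per-use rate – which is the margin the paragraph above suggests questioning twice.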


    ---------------

    Due to current events - check out my take on Google Cloud Platform Live here.

    ----------------

    More about AWS:
    • AWS  moves the yardstick - Day 2 reinvent takeaways - read here.
    • AWS powers on, into new markets - Day 1 reinvent takeaways - read here.
    • The Cloud is growing up - three signs in the News - read here.
    • Amazon AWS powers on - read here.

     


    Launch Report - When BW 7.4 meets HANA it is like 2 + 2 = 5 - but is 5 enough?


    I was invited to attend the BI / HANA 2014 event organized by WisPubs in Orlando this week. I blogged my keynote takeaways from yesterday here; today the conference continued with a separate BW 7.4 launch event.

     


    As usual it’s good to remember how we got here…




    A brief history of BW

    Reporting has always been a challenge for enterprise application vendors. And when SAP was busy building R/3 in the early '90s, speed to build out a functionally complete ERP package was of the essence. Reporting was implemented in a similar way as in R/2 – which meant the company missed the data warehouse trend. No one was unhappier about it than SAP co-founder Hasso Plattner, who, frustrated by a combination of lack of understanding and lack of progress, hired an outsider (then a disruptive talent decision for SAP) in Klaus Kreplin. And Kreplin and his team delivered a solid data warehouse, originally called BiW (a name dropped later to avoid confusion with another, minor German software company), in very short time. Not surprisingly, SAP came to realize that, being the largest business application vendor, it was not enough to just deliver a data warehouse; customers expected extractors and content in the data warehouse. So SAP created the BW content releases. Then followed a long phase of different front-end tools, the Business Objects acquisition happened, and customers took up BW – to the tune of around 14,000 today.




    Enter HANA

    Meanwhile, from the roots of the BW text search engine TREX, the PTime acquisition, and the Sybase acquisition, HANA was created – and with that, some confusion started. Customers were using BW but heard from SAP that the traditional separation of OLTP and OLAP was history. Some predicted the end of BW. Of course that did not happen; luckily, too many customers don't go away overnight, nor does a resilient and large ecosystem of partners.

    And as we all know by now, HANA is the platform on which SAP has embarked on a massive journey of re-inventing itself. Consequently, BW also has to run on HANA, and that was achieved with BW 7.3 – which in hindsight was more of a 'proof it works' release. For the first time SAP let its ecosystem try and play with a key revenue product, with the BW 7.3 on HANA trial, with very good success. But after technology adoption, the interesting thing is what happens next, and that is what we can start evaluating with BW 7.4 on HANA.




    Why it’s 2 + 2 = 5!

    Let’s look at the greatest drivers of enterprise synergy from combining the two products:




    • Speed (from HANA) – No question, HANA contributes speed to traditional BW implementations. Traditional BW – like all data warehouses – needed attentive hand-holding to remain a responsive system for its users. Not impossible, but the watch had to be 24x7. Getting speed without having to design a classic star schema is another advantage.
       
    • Simplification (from HANA) – The simplification of being able to run OLAP and OLTP on the same system, in columnar format, has been described at length before. But a key data warehouse process, operated in the ETL layer, is also pretty much gone. All data is there – normalized, with no need for transport and massaging.
       
    • Content (from BW) – Remember the BW history: the first versions were technically great but lacked content. BW has more enterprise content than most enterprises can or want to digest – so HANA instantly gains significant content.
       
    • Governance (from BW) – Another lesson for all data warehouses – skipped for brevity above – is that you can't surface the insight to just anyone in the enterprise. SAP spent a lot of time in the early 2000s learning that and building appropriately for it – and now it's practically a gift to HANA.


    So where does the synergy – the 5 – come from? Well, it's the combination of the above that enables faster insights, built on a modern application architecture (yes, you can use it on an iPad), and that allows enterprise decision makers to get to data faster and, hopefully, find the insights to make the right decisions.




    But is 5 enough?

    Getting a 5 from a 2 plus 2 equation – 25% headroom – is a good result. Especially with BW 7.4 being the first release beyond 7.3, which as mentioned before was focused mainly on 'getting there'. But 5 can only be enough if the insights are packaged in a way that a business user can digest them. SAP has made a good acquisition with KXEN, but the road to packaged analytical applications, consumable by business end users, remains a long one. To be fair, SAP has only started. And 5 can only be enough if sufficient relevant information is available to make the right decision – needless to say, in 2014 it begs the question of how SAP will address the NoSQL / Hadoop challenge. [Added] And the indication on the latter is that SAP is moving to a co-existence scenario between data in HANA and data in Hadoop in BW 7.4, allowing the combination via the smart data access functionality.



     

    MyPOV

    SAP has delivered a promising 'first' functional release with BW 7.4 on HANA. It is good progress – but more needs to happen in the next releases to feed (true) analytical insights to business end users, cutting out data scientist engagement at the project level. Of course there is plenty of 'bread and butter' BI that BW addresses well, but the quest needs to be for the 'holy grail' – end-user-consumable analytical applications that are good enough to foster insights without much or any IT and data scientist involvement.

    To be fair – no one has gotten there (yet). It’s probably going to be a score of 8 or 9 that is required. So for now, 5 is a good start.

     


    News Analysis – Google gets serious about the cloud


    Google had its widely anticipated cloud event in San Francisco, and it certainly did not disappoint. Developer focus, tools, and significant price reductions were expected. Probably the biggest surprise was the temporary downtime of the live stream. In a twist of irony, Google was demonstrating the live migration of a 'hot' VM while streaming HD video shortly before that event. Streaming recovered well for the rest of the event – if anything, it shows that even Google's cloud is earthly and that there was massive interest in watching the event.

    So let’s dissect Urs Hölzle’s blog post – which pretty much serves as a Google press release:

    […]Industry-leading, simplified pricing
    The original promise of cloud computing was simple: virtualize hardware, pay only for what you use, with no upfront capital expenditures and lower prices than on-premise solutions. But pricing hasn’t followed Moore's Law: over the past five years, hardware costs improved by 20-30% annually but public cloud prices fell at just 8% per year.

    MyPOV – This is a new mantra for cloud pricing. While it was previously only 'pay for what you use' – down to the minute or even second – Google is looking at the underlying mechanism that makes all computing more affordable: Moore's Law. And kudos to Google for calling out the profit accumulation most providers have been entertaining to a certain point, as cloud price reductions have not kept step with the cost reductions seen in hardware. In an industry already feeling the cost pinch from Amazon's retail DNA, it is now Google calling out that the existing cost reduction drive may not even have been aggressive enough. And we knew already that Google is serious, as it had dropped its consumer pricing for 100 GB of storage below the Google Cloud price for the same amount of storage – until today. We'll look into the commercial dynamics of how we think Google enables this price reduction later.
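To make the gap between the two decline rates concrete, here is a quick compounding sketch using the 20-30% hardware and 8% cloud figures quoted above (illustrative arithmetic only):

```python
# Compounding the quoted annual declines over five years:
# hardware costs fell ~20-30%/yr, public cloud prices only ~8%/yr.

def remaining_cost(annual_decline, years):
    """Fraction of the original cost left after compounded annual declines."""
    return (1 - annual_decline) ** years

hw_low  = remaining_cost(0.20, 5)   # hardware, 20%/yr decline
hw_high = remaining_cost(0.30, 5)   # hardware, 30%/yr decline
cloud   = remaining_cost(0.08, 5)   # cloud prices, 8%/yr decline

print(f"hardware cost after 5y: {hw_high:.0%}-{hw_low:.0%} of original")  # 17%-33%
print(f"cloud price after 5y:   {cloud:.0%} of original")                 # 66%
```

After five years, hardware costs a third or less of what it did, while cloud prices only dropped by a third – roughly the accumulated margin Google is now attacking.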

    We think cloud pricing should track Moore’s Law, so we’re simplifying and reducing prices for our various on-demand, pay-as-you-go services by 30-85%:



    • Compute Engine reduced by 32% across all sizes, regions, and classes.
    • App Engine pricing is drastically simplified. We've lowered pricing for instance-hours by 37.5%, dedicated memcache by 50% and Datastore writes by 33%. In addition, many services, including SNI SSL and PageSpeed are now offered to all applications at no extra cost.
    • Cloud Storage is now priced at a consistent 2.6 cents per GB. That’s roughly 68% less for most customers.
    • Google BigQuery on-demand prices reduced by 85%.


    MyPOV – Google is showing the application of Moore's Law and significantly reducing prices. The good folks up in Seattle will check if this is factual, but it looks to me like the biggest price reduction we have seen in the public cloud. Where AWS follows its retail DNA of smallish cost reductions – mimicking the 'always on sale' strategy seen with some brick-and-mortar retailers – Google is giving away one year of cost savings. And that's the most interesting insight here, as anything beyond a 30% price reduction shows that Google may have pocketed some extra profits, too. And there is nothing negative about that, by the way – being price competitive while keeping a good margin to protect yourself against upcoming price wars is a very viable, and probably the only, public cloud vendor price and business strategy. Some colleagues have already pointed out that Google is now the most cost-effective cloud provider for the highly demanded high-memory instance category. If Google keeps that cost leadership, it will create a very viable alternative to Amazon for the next-generation, compute-intensive, in-memory application category. And needless to say, the storage reductions make Google Cloud storage cheaper for the enterprises building on it than it is for end users – it has to be like that to foster and grease an ISV ecosystem.
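As a quick sanity check on the storage numbers, the quoted figures imply the prior effective price (illustrative arithmetic only):

```python
# Cloud Storage is now 2.6 cents/GB, described as "roughly 68% less for
# most customers" -- working backwards gives the implied prior rate.

new_price = 0.026                       # USD per GB per month, as quoted
implied_old = new_price / (1 - 0.68)    # price before the 68% reduction
print(f"implied prior price: ${implied_old:.3f}/GB")  # ~$0.081
```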

    Sustained-Use discounts
    In addition to lower on-demand prices, you’ll save even more money with Sustained-Use Discounts for steady-state workloads. Discounts start automatically when you use a VM for over 25% of the month. When you use a VM for an entire month, you save an additional 30% over the new on-demand prices, for a total reduction of 53% over our original prices.

    MyPOV – This is probably the most innovative move by Google on the commercial side of the public cloud in a long time – if not ever. The key benefit of the cloud in regard to elasticity of load becomes a disadvantage when the load stabilizes so much that an originally elastic load becomes a static load. Ultimately that's a good sign for software vendors, as they want to grow their business, and hand in hand with that comes a more stable load profile. In technical reality that load profile – always assuming a neatly scaling application architecture – realizes itself in VMs becoming static, meaning they run 24x7. And the commercial consequence is that such a VM becomes more expensive than a dedicated, non-cloud machine. There are numerous cases of software vendors starting out in the public cloud but, once loads had stabilized, moving their load to an on-premises, dedicated data center environment. Google (and all other public cloud vendors) don't want to see that – so major credit to Google for making this commercially less attractive to do. And to a certain point it is fair – less needs to happen at a cloud provider when VMs become dedicated, so passing along some of these cost savings to customers for the 'loyalty' is actually good business sense. Setting the usage threshold at 25% and the maximum saving at an additional 30 percentage points are parameters I'd say we will see more action on in the future. And I leave it to some tech pundits to speculate on the underlying Google architecture – what savings Google sees and how much of them it passes along with the 30 percentage points.
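The discount mechanics can be sketched roughly as follows. The 25% threshold and the additional 30% at full-month usage come from the announcement; the linear ramp between those two points is purely my assumption for illustration, as is the base rate:

```python
# Sketch of sustained-use pricing as described above. The 25% usage
# threshold and the 30% maximum extra discount are from the announcement;
# the linear interpolation between them is an assumption for illustration.

def effective_rate(on_demand_rate, usage_fraction):
    """Effective hourly rate after the sustained-use discount."""
    if usage_fraction <= 0.25:
        return on_demand_rate
    # Ramp from 0% extra discount at 25% usage to 30% at 100% usage.
    discount = 0.30 * (usage_fraction - 0.25) / 0.75
    return on_demand_rate * (1 - discount)

base = 0.104  # hypothetical on-demand $/hour
for use in (0.25, 0.50, 1.00):
    print(f"{use:.0%} of month -> ${effective_rate(base, use):.4f}/hour")
```

This also shows why someone will "do the math" as noted below: near the top of the ramp, a few extra hours of usage lower the effective rate on every hour already consumed.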

    Finally, it confirms Google's commitment to the VM – there are (at least for now) no ambitions or plans visible for anything in the bare metal field. And clearly Google is not interested in reserving instances for a multi-year deal. Enterprises like these options, though, as they give them cost certainty. But Google will rightly argue that an enterprise can gain similar certainty with three years of sustained usage. With the upside (in contrast to Amazon) that price reductions (Moore's Law, anyone?) will take the cost down through the three years. An argument I expect enterprises will be open to – after some explaining.

    With our new pricing and sustained use discounts, you get the best performance at the lowest price in the industry. No upfront payments, no lock-in, and no need to predict future use.

    MyPOV – The key emphasis has to be on 'no need to predict future use'. As Churchill said, predictions are always tricky, especially concerning the future [freely quoted]. And many cloud users see themselves in that situation at the beginning of the billing cycle – how many dedicated instances will we need for the next month? Google takes away that challenge, which will be greatly appreciated. It also moves the value proposition from 'pay by the glass' closer to 'all you can eat'. Someone will do the math and keep load on a VM for some minutes or even hours longer in order to lock in the full discount. An easier decision to make than predicting what you need.

    Making developers more productive in the cloud
    We’re also introducing features that make development more productive:



    • Build, test, and release in the cloud, with minimal setup or changes to your workflow. Simply commit a change with git and we’ll run a clean build and all unit tests.
    • Aggregated logs across all your instances, with filtering and search tools.
    • Detailed stack traces for bugs, with one-click access to the exact version of the code that caused the issue. You can even make small code changes right in the browser.

    We’re working on even more features to ensure that our platform is the most productive place for developers. Stay tuned.

    MyPOV – Needless to say, making developers more productive is a main draw to specific clouds. And Google has picked an attractive first round of DevOps / debug functions to get the attention of the development community. Having seen a lot of troubled software products, I find the automated unit tests a valuable feature. It is probably also a self-preservation mechanism for Google – so nothing too crazy can happen through the code. But it is also good to see that Google extends the same services its internal developers have to its cloud customers.

    Introducing Managed Virtual Machines
    You shouldn't have to choose between the flexibility of VMs and the auto-management and scaling provided by App Engine. Managed VMs let you run any binary inside a VM and turn it into a part of your App Engine app with just a few lines of code. App Engine will automatically manage these VMs for you.

    MyPOV – Well, this should really be called 'App Engine managed VMs'. With this, Google addresses a long-term critique and weakness of Google App Engine: that you could not break out of it. And as much as that is intended from a stability perspective, it limits the scope of the apps that can be built on App Engine. Now developers can access C libraries and local (Google calls them native) resources. But App Engine remains in charge, as it enables and controls the managed VM.

    Expanded Compute Engine operating system support
    We now support Windows Server 2008 R2 on Compute Engine in limited preview and Red Hat Enterprise Linux and SUSE Linux Enterprise Server are now available to everyone.

    MyPOV – This is a huge win for Google, which before supported only two more exotic Linux variants. Now decision makers not only have access to the two most popular enterprise Linux versions in RHEL and SUSE, but also to Windows Server 2008 R2. Both will face fewer concerns from corporate IT decision makers, as well as from mainstream-minded CTOs at application vendors. Lastly, given the managed VM capability, a number of local resource options that developers are familiar with and rely on become available for App Engine.

    Real-Time Big Data
    BigQuery lets you run interactive SQL queries against datasets of any size in seconds using a fully managed service, with no setup and no configuration. Starting today, with BigQuery Streaming, you can ingest 100,000 records per second per table with near-instant updates, so you can analyze massive data streams in real time. Yet, BigQuery is very affordable: on-demand queries now only cost $5 per TB and 5 GB/sec reserved query capacity starts at $20,000/month, 75% lower than other providers. […]

    MyPOV – An aggressive move by Google in the Big Data field – the other providers being the usual suspects. The key takeaway, though, is that Google wants a piece of the fast-growing pie of Big Data apps being built for the cloud.
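A back-of-envelope sketch of what the quoted on-demand price means in practice; the table size and query frequency below are made-up assumptions, only the $5/TB rate is from the announcement:

```python
# What $5 per TB scanned means for a hypothetical workload.
ON_DEMAND_PER_TB = 5.0  # USD, the on-demand query price quoted above

def query_cost(tb_scanned):
    """On-demand cost of a single query scanning tb_scanned terabytes."""
    return tb_scanned * ON_DEMAND_PER_TB

# Hypothetical: a full scan of a 2.4 TB table, once a day, 30-day month.
monthly = query_cost(2.4) * 30
print(f"monthly scan cost: ${monthly:.2f}")  # $360.00
```

At these rates, the $20,000/month reserved capacity only pays off for workloads scanning thousands of terabytes a month – which is exactly the high end of the pie Google is after.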



    Overall POV

    A truly landmark point for the cloud, with Google laying down its cards. Reports say that Hölzle and team only switched focus to the public cloud in January this year – if true, a lot has been done in little time, and the competition is warned. We will have to see if Hölzle's team will be able to neglect its current largest customer – Google itself – for the next quarters, but the ambitions and hints during the event were there.

    Historically – in statements made by Marissa Mayer (when still at Google) – Google has been the only cloud provider to openly state that excess capacity should be given to outside developers (thanks to colleague @ReneBuest for reminding me recently). All other cloud providers have – despite my probing – never admitted to it. And maybe Google won't anymore either – but it's good business practice. If a cloud provider has a very elastic cloud, why not commercialize excess capacity at very attractive, near-cost rates? If the bulk of the load is higher margin, you are still running a formidable business (check Google's latest earnings). And as long as you grow your cloud capacity faster than public cloud demand grows, that spare capacity is unlikely to see bottlenecks, given the scale on which Google operates.

    Over at RightScale, Hassan Hosseini has done a detailed comparison between Google and Amazon. Google mostly comes out on top. It's interesting that in the most price-attractive scenario – three-year sustained usage for Google and a 3-year commitment for Amazon – AWS comes out slightly ahead, but of course with the 3-year commitment. It's interesting because the 3-year commitment comes closest to a cloud user owning their hardware – so it's probably the closest price to the real cost of operating a cloud infrastructure.

    All in all, this is very good news for overall cloud adoption. Enterprises will benefit from better and more cost-effective software, application vendors will have more options for where to move load, and developers have a cloud provider that has cachet and cares for them. On the flip side, Google's approach is a developer-centric cloud – many more things need to happen for Google to move traditional loads, such as existing commercial databases and enterprise applications, to the cloud. Hoping for all these applications to be rebuilt on the technologies available in Google's cloud will not be a viable medium-term strategy. But for now this is a great step by Google; we are eager to measure the stride length of the next one – same cadence, shorter, or longer. For sure the startup audience is listening.


    Here is the first of many videos of the event:






    Lastly – this is the week of the cloud – yesterday Cisco announced its Intercloud, tomorrow Amazon has a cloud event, and on Thursday Microsoft has announced a press conference. I am sure the price strategists in Seattle and Redmond are crunching some numbers.

    -------------

    More on Google:



    • A tale of two clouds - Google and HP - read here
    • Why Google acquired Talaria - efficiency matters - read here

     
     

     


    Calling for a uniform approach to card fraud


    Abstract

    The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. Seamless convenience is underpinned by the universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere.

    So with this determination to facilitate trustworthy and supremely convenient spending everywhere, it's astonishing that the industry has still not standardised Internet payments. Most of the world has settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked. This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as card-present payments.

    This blog is an edited extract from an article of the same name, first published in the Journal of Internet Banking and Commerce, December 2012, vol. 17, no.3.

    Skimming and Carding

    With "carding", criminals replicate stolen customer data on blank cards and use those card copies in regular merchant terminals. "Skimming" is one way of stealing card data, by running a card through a copying device when the customer isn't looking (but it's actually more common for card data to be stolen in bulk from compromised merchant and processor databases).

    A magnetic stripe card stores the customer's details as a string of ones and zeroes, and presents them to a POS terminal or ATM in the clear. It's child's play for criminals to scan the bits and copy them to a blank card.

    The industry responded to skimming and carding with EMV (aka Chip-and-PIN). EMV replaces the magnetic storage with an integrated circuit, but more importantly, it secures the data transmitted from card to terminal. EMV works by first digitally signing those ones and zeros in the chip, and then verifying the signature at the terminal. The signing uses a Private Key unique to the cardholder and held safely inside the chip where it cannot be tampered with by fraudsters. It is not feasible to replicate the digital signature without having access to the inner workings of the chip, and thus EMV cards resist carding.
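    The mechanism can be illustrated with a toy sketch. Real EMV uses RSA certificates and DES-based cryptograms rather than HMAC, and the class and key handling below are invented for illustration – but the principle is the same: without the secret locked inside the chip, a clone cannot produce a valid response to the terminal’s challenge:

```python
# Toy illustration of the EMV principle: the card signs transaction data
# with a secret that never leaves the chip; the terminal verifies it.
# A criminal who copies only the card's public data cannot forge the
# signature. (Real EMV uses RSA / DES cryptograms, not HMAC.)
import hashlib
import hmac
import os

class ChipCard:
    def __init__(self):
        self._key = os.urandom(16)  # secret key, sealed inside the chip
        self.public_data = b"PAN=4111111111111111;EXP=12/26"

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, self.public_data + challenge,
                        hashlib.sha256).digest()

card = ChipCard()
issuer_key = card._key  # in reality the issuer provisioned this key at personalisation

# The terminal sends a fresh random challenge and checks the card's response.
challenge = os.urandom(8)
expected = hmac.new(issuer_key, card.public_data + challenge,
                    hashlib.sha256).digest()
assert hmac.compare_digest(card.sign(challenge), expected)  # genuine card passes

# A cloned card has the public data but not the chip's key, so it fails.
forged = hmac.new(os.urandom(16), card.public_data + challenge,
                  hashlib.sha256).digest()
assert not hmac.compare_digest(forged, expected)
```

    The fresh challenge also defeats replay: even a recorded genuine response is useless for the next transaction, because the terminal issues a new challenge every time.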

    Online Card Fraud

    Conventional Card Not Present (CNP) transactions are vulnerable because, much like the old mag stripe cards, they rest on clear-text cardholder data. On its own, a merchant server cannot tell the difference between the original card data and a copy, just as a terminal cannot tell an original mag stripe card from a criminal's copy.
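    A minimal sketch of that root problem (the field names and values below are made up for illustration): clear-text data is perfectly copyable, so the copy is indistinguishable from the original:

```python
# With clear-text card data, a stolen replica is byte-for-byte identical
# to the original, so the merchant server accepts both.
original = {"pan": "4111111111111111", "expiry": "12/26", "cvv": "123"}
stolen_copy = dict(original)  # the fraudster's perfect replica

def merchant_accepts(submitted, on_file=original):
    # All the server can do is compare the submitted data itself.
    return submitted == on_file

assert merchant_accepts(original)      # genuine cardholder: accepted
assert merchant_accepts(stolen_copy)   # fraudster's copy: also accepted
```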

    Despite the simplicity of the root problem, the past decade has seen a bewildering patchwork of flimsy and expensive online payments fixes. Various One Time Passwords have come and gone, from scratchy cards to electronic key fobs. Temporary SMS codes have been popular but were recently declared unsafe by the Communications Alliance in Australia, a policy body representing the major mobile carriers.

    "3D Insecure"

    Meanwhile, extraordinary resources have been squandered on the novel "3D Secure" scheme (MasterCard "SecureCode" and "Verified by Visa"). 3D Secure take-up is piecemeal; it's widely derided by merchants and customers alike. It is often blocked by browsers; and it throws up odd looking messages that can appear like a phishing attack or other malfunction. Moreover, it upsets the underlying Four Party settlements architecture, slowing transactions to a crawl and introducing untold legal complexities.

    So why doesn't the card payments industry go back to its roots, preserve its global Four Party settlement architecture and standards, and tackle the real issue?

    Kill two birds with one chip

    We could stop most online fraud by using the same chip technologies we deployed to kill off skimming.

    It is technically simple to reproduce the familiar card-present user experience on a standard computer. It would just take the will of the financial services industry to make payment by smartcard the standard. There are plenty of smartcard reader solutions on the market and indeed, many notebooks feature built-in readers. Demand for readers has grown steadily over the years, driven by the increasingly routine use of smartcards for e-health and online voting in Eastern Europe and Asia.

    And with dual interface and contactless smartcards, the interface options open right up. Most mobile devices now feature NFC or "Near Field Communications", a special purpose device-to-device networking capability, which until now has mostly been used to emulate a payment card. But NFC tablets and smartphones can switch into reader emulation mode, so as to act as a smartcard terminal. Other researchers have recently demonstrated how to read a smartcard via NFC to authenticate the cardholder to a mobile device.

    As an alternative, the SIM or other "Secure Element" of most mobile devices could be used to digitally sign card transactions directly, in place of the card. That's essentially how NFC payment apps work for Card Present transactions - but nobody has yet made the leap to use smartphone hardware security for Card Not Present.

    Conclusion: Hardware security

    All serious payments systems use hardware security. The classic examples include SIM cards, EMV, the Hardware Security Modules mandated by regulators in all ATMs, and the Secure Elements of NFC devices. With well designed hardware security, we gain a lasting upper hand in the criminal arms race.

    The Internet and mobile channels will one day overtake the traditional physical payments medium. Indeed, commentators already like to say that the "digital economy" is simply the economy. Therefore, let us stop struggling with stopgap Internet security measures, and let us stop pretending that PCI-DSS audits will stop organised crime stealing card numbers by the million. Instead, we should kill two birds with one stone, and use chip technology to secure both card present and CNP transactions, to deliver the same high standards of usability and security in all channels.


    Tom Hogan Joins Kony



    Siebel veteran Thomas E. Hogan has been appointed chief executive officer of Kony, an enterprise mobile application tools provider. At Siebel, Hogan was Senior Vice President of Global Sales and Operations. Before joining Kony, Hogan served as executive vice president of Software at HP. Prior to HP, he was president of Vignette, a publicly held software company specializing in enterprise content management. Hogan started his career at IBM, where he held a variety of executive posts. In his new role, Hogan will help Kony drive sales execution, achieve operational excellence, maintain product leadership, and facilitate geographic expansion.

    “With more than 900 percent growth during the last three years, Kony is uniquely positioned in the enterprise mobility market,” said Raj Koneru, founder and chairman, Kony. “To capitalize on the opportunity it was time to bring in a new CEO to help scale the company.”

    “It is an honor to succeed Raj Koneru as chief executive officer,” said Hogan.

    Kony provides a cloud platform to support the mobile application software development lifecycle. The company’s customers include Fortune 1000 global banks, healthcare payers and providers, automotive, manufacturing, travel, hospitality and retail organizations, as well as a large global network of more than 150 partners.

    The company has over 600 apps live, serving over 20 million end users in 45 countries. The Kony tools help define, design, develop, test, deploy, and manage multi-channel applications from a single codebase.

    The company has an opening for a creative director on the careers page of its site.


    New Analysis - Another week another Billion - Cisco Intercloud - A different approach to cloud – better late than never


    Cisco entered the public cloud market today by announcing the world’s largest Intercloud (a new term – surprise).

     


    Let’s look at the press release – and read between the lines a bit:

    As businesses increasingly embrace private, public, and hybrid clouds to cost-effectively and quickly deliver business applications and services, Cisco today announced plans to build the world’s largest global Intercloud – a network of clouds – together with a set of partners. The Cisco global Intercloud is being architected for the Internet of Everything, with a distributed network and security architecture designed for high-value application workloads, real-time analytics, “near infinite” scalability and full compliance with local data sovereignty laws. The first-of-its-kind open Intercloud, which will feature APIs for rapid application development, will deliver a new enterprise-class portfolio of cloud IT services for businesses, service providers and resellers.

    MyPOV – It’s easy to coin a new term like ‘Intercloud’ – harder to explain it and then sustain it. Cisco explains the Intercloud as a network of clouds – fair enough, but what is it? The Latin ‘inter’ means between – so is it the network between the clouds, powered in the cloud? It’s hard to make a new term stick – so we will see who else picks up the buzzword. Cisco deserves kudos for making this a pretty rich cloud announcement – it includes security and real-time analytics, is compliant with local data sovereignty laws (very interesting), supports RAD, and is offered together with partners. The data sovereignty feature is one to keep an eye on – and it raises questions: who will create and maintain the legislative rules, who will enforce them, and what does it mean for the applications running in the Intercloud?

    Cisco expects to invest over $1 billion to build its expanded cloud business over the next two years. Its partner-centric business model, which enables partner capabilities and investments, is expected to generate a rapid acceleration of additional investment to drive the global scale and breadth of services that Cisco plans to deliver to its customers.

    MyPOV – There we go – $1 billion – stretched over two years, though – and we will have to see how much partner investment it triggers.

    The company plans to deliver Cisco Cloud Services with and through Cisco partners. The following organizations, which are either planning to deliver Cisco Cloud Services or have endorsed Cisco’s global Intercloud initiative, represent a sampling of the kinds of global partners Cisco expects to work with to build its cloud business: leading Australian service provider Telstra; Canadian business communications provider Allstream; European cloud company Canopy, an Atos company; cloud services aggregator, provider and wholesale technology distributor Ingram Micro Inc.; global IT and managed services provider Logicalis Group; global provider of enterprise software platforms for business intelligence, mobile intelligence, and network applications MicroStrategy, Inc.; enterprise data center IT solutions provider OnX Managed Services; information availability services provider SunGard Availability Services; and leading global IT, consulting and outsourcing company Wipro Ltd.

    MyPOV – It’s a first to launch a cloud service with so many partners. But it looks more like a collection of the weak than the strong. Orchestration will be a predictable challenge. Let’s measure what these partners put up in investment in the coming quarters, though – let’s give Cisco and them the benefit of the doubt. The biggest problem: none of the partners mentioned – with the possible exception of MicroStrategy – brings significant workload with them, and workload is what makes a cloud cost-effective. So where will the load come from?

    […]
    The Cisco OpenStack-enabled Intercloud is designed to allow organizations and users to combine and move workloads – including data and applications – across different public or private clouds as needed, easily and securely, while maintaining associated network and security policies. It will also utilize Cisco Application Centric Infrastructure (ACI) to optimize application performance and to make rolling out new services much faster. Cisco will improve application security, compliance, auditing and mobility by using ACI’s centralized, programmable security policy to enable fine-grained control and isolation at scale; suitable for private and public cloud environments.

    MyPOV – No surprise – this will be another OpenStack-powered cloud, with a promise to combine workloads that we need to see delivered in the real world before we fully believe it. No surprise ACI is being used – the question is how much work partners and customers will have to do to adopt ACI, and how willing they will be to do it, given that other public clouds may not ask them to take that step. And building an ACI-compliant application / load may hinder its portability to other OpenStack clouds – though hopefully not to the Intercloud operating partners.

    […]
    Cisco Cloud Services expand on the Cisco industry-leading cloud portfolio, which already includes SaaS offerings, such as WebEx®, Meraki® and Cisco Cloud Web Security; differentiated cloud services, such as hosted collaboration and cloud DVR; and technologies and services to build public and private clouds, such as the Cisco Unified Computing System™ (Cisco UCS®), integrated infrastructure solutions such as VCE Vblock™ Systems and NetApp FlexPod, and Cisco Application Centric Infrastructure (ACI).

    MyPOV – Cisco already brings some applications to the offering, WebEx probably being the most popular – but even as the most popular load used by enterprises, that will not be enough load. It would be nice not to see the WebEx applet download every time a session runs against a server on a different version. And then Cisco throws in pretty much every product it can claim for the cloud, including the results of the VMware partnership around VCE Vblock and NetApp FlexPod.

    Cisco is expanding the Cisco Powered™ program to include Cisco Cloud Services. Cisco will sell these new services through channel partners and directly to end customers. Partners who develop Cisco Powered services can offer more cloud offerings faster, with lower up front development costs, and operate at cloud speed and scale. […]

    MyPOV – This is the most interesting and differentiating area, with a number of unique / significant capabilities. A Red Hat OpenShift-based PaaS, SAP HANA running on UCS, WebEx, and DaaS (Cisco, VMware and Citrix) are the most interesting ones, along with a bunch of more technical Cisco services. Collaboration as a Service (CaaS) struck me as one of the more unusual terms – but Cisco likes to name things a little differently than the rest of the industry.




    Overall POV

    Cisco certainly comes late to the game, albeit with a different angle, the Intercloud. Certainly a fair angle for the leading network provider, which fittingly also announced a whole new set of networking equipment. But all public clouds need management between their data centers, and that is what Cisco is really after. It’s unclear where all the Cloud Services will run – e.g. will WebEx run in the Intercloud (operated by whom), or (only) in a connected partner cloud, or both? So lots of questions remain. Also, existing cloud providers may now order Cisco network gear with a little more consideration going forward.

    On the bright side, Cisco deserves credit for coming out today and catching the early lead in what shapes up to be a key week for the cloud (Google on Tuesday, Amazon on Wednesday and Microsoft on Thursday) – so better late than never. And the services are rich and differentiating. But lots of questions about the hybrid and partner-based approach to cloud remain.



     


    Event Report - BI2014 and HANA2014 takeaways – It’s all about HANA & Lumira - but is that enough?


    I had the opportunity to attend the keynote presentation at the SAP BI2014 / HANA2014 conference in Orlando, organized by Wispubs. This used to be the BusinessObjects event and has slowly morphed into an SAP BI event – and this year, not surprisingly, a HANA event. With 1,800 guests attending, it is a key event for the SAP BI community.

     

     

    So here are my top 3 takeaways from the keynote:


    • The disruption message has arrived and (of course) HANA solves it – The red thread of Steve Lucas’s keynote was business disruption triggered by technology – and, of course, how SAP technology helps companies be disruptors and, on the flip side, avoid being disrupted. But then I am sure SAP will also be happy to help enterprises that have been disrupted – granted they can still come up with the payment for the new software technology.

      Lucas laid out how companies like Uber, WhatsApp and AirBnB have disrupted conventional businesses and achieved amazing valuations. Few observers may have noticed that WhatsApp has disrupted SAP’s Sybase 365 customers on the messaging side – but kudos for the openness, and WhatsApp is certainly a poster-child story not to be missed. Unfortunately the disruptive element came up short in the customer stories presented in the keynote – New South Wales Police and Fire, Velux and SpiritAero. We asked the same question in the analyst Q&A, and Amit Sinha came back with a valid example: Italian tire manufacturer Pirelli selling tire usage data with the help of HANA. Definitely innovative, and potentially disruptive for the competition.

    The SAP HANA Platform

    •  Lumira looms – The not-so-brilliantly named SAP BI product is making good progress, being used in two of the keynote demos. The first visualized the brackets of the current NCAA basketball tournament – a good, topical, high-involvement example that of course only resonated with a North American audience aware of the tournament. We also saw new infographic capability in Lumira, a nice addition of functionality to enable storytelling. It’s a first version – e.g. we missed annotations – but a promising start. Now we can only hope the SuccessFactors and Lumira developers will talk, cooperate, and use common assets across Lumira Storytelling and the SuccessFactors Presentations functionality. The combination is a high-potential solution.

      And Lumira is increasingly becoming the replacement and go-to product for the older, former BusinessObjects products, as new functionality (e.g. Design Studio) is being built here – replacing e.g. the still popular Xcelsius.

     

    Lucas and colleagues in the midst of IoT demo

     

    • HANA dominates – As expected, it is virtually impossible to get a new product from SAP, or to build an innovative solution, without using HANA. And as Lucas shared, this is to a certain point by design. If you want to build a mobile solution, the backend will be HANA – like it or not. This certainly makes sense for SAP from a sales perspective – and even from a technology re-use perspective – but not all use cases of innovative applications require an in-memory database. Just think of Hadoop-based Big Data scenarios. Mobile apps extending legacy. Social apps (Jam is probably HANA-free at this point). Etc. SAP needs to be careful not to limit the growth of some of its technology products for the sake of HANA integration.

     

    SAP did a good job showing an Internet of Things (IoT) demo – tying together huge data volumes with personalization and predictive delivery and maintenance. Nice showcase.

     

    MyPOV

    A good start to the BI2014 / HANA2014 event that confirms HANA’s pivotal role. Lumira is getting better, and I expect it to soon replace all former BO products – not that SAP will say that officially any time soon. The general concerns I have around HANA (elasticity, programming language) were not addressed, but Sapphire is the event for that, not BI2014. It looks like the SAP technology products (+/- 50% of SAP license revenue) are doing well. My concern is that SAP did not bring the full platform package – the HANA Cloud Platform (HCP) – to this event, even though enterprises want to build rich analytical applications. A missed opportunity. And not surprisingly, the HANA vs. Hadoop relationship remains in the field of unknown forces avoiding each other.

    I am still onsite for another 24 hours and will follow up with another post covering more briefings and meetings set up here at BI2014 / HANA2014.

     
