
Another week and IBM invests another billion - this week it is (a) BlueMix / PaaS


Over 11,000 attendees convened at the MGM Grand in Las Vegas for the annual IBM Pulse event.  The event focused heavily on cloud and mobile and served as the backdrop to unveil BlueMix, Big Blue's PaaS platform for years to come.  Adding to the seriousness of the announcement, IBM made a $1B commitment to BlueMix.  Side note: We have learned from recent announcements around SoftLayer data centers and Watson that when IBM is serious about something, it makes a one billion dollar investment.

Conversations with attendees indicate a very energized and excited crowd.  Concurrently, IBM organized a developer conference, trendily called Dev@Pulse.

BlueMix - lots of Blue on a greenish platform

BlueMix is IBM's PaaS product for the years to come. The project started over two years ago, was shepherded by veteran Danny Sabbah and must have gone through some pretty substantial iterations. Remember, two years ago IBM had not invested in Pivotal - so there must have been a (true) blue ambition at some point. In fact, the future will tell what led to the investment in April 2013 (more here).  The shift was a change for the better since it committed IBM to a community of multiple vendors, which this week led to a new oversight committee for Pivotal. 

 

IBM Infographic on the $1B investment.

 

So under the hood BlueMix runs on Pivotal's Cloud Foundry (hence the green - or what color is Pivotal's branding?) - but IBM enriches it in many ways:

  • Attractive user interface - IBM has done a good job creating an attractive user interface to run and tie together applications on BlueMix. Of course you can go back to the command line, but interestingly IBM shared that developers want to be more productive, too - so we will see how much mileage the overlaying user interface gets. 

  • 'Any' programming language - IBM designed BlueMix to be open, for developers to bring their code and 'just' deploy it. That's a great design ambition but of course hard to achieve in the real world - so today IBM supports Java, JavaScript, PHP, Perl and Ruby (not all supported by its own IDEs, though). 

  • Patterns loom prominently - IBM's pattern technology, originally devised with WebSphere, is getting good usage as part of BlueMix and is probably key to making the magic in the background work. Environments get more complex by the day - so the automation tooling and its capabilities are key to BlueMix's success. 

  • A rich platform - IBM keeps exposing services and tools to BlueMix - most prominent right now are the Xtify push services. MongoDB was there, and of course Cloudant (a freshly announced acquisition) now, too. Extensibility here is key for IBM to augment BlueMix in the coming quarters; Watson services will be the clearly differentiating crown jewels. 

  • Of course, SoftLayer - Almost needless to say, IBM will run BlueMix on SoftLayer technology. Probably a good move to cater to a security-sensitive audience that is reassured by SoftLayer's bare metal capabilities, extended localization and transparency down to the machine level.

 

Crosby with 'born on the web' SoftLayer customers

SoftLayer is the go-forward answer for all things cloudy

Not surprisingly, IBM keeps strengthening its SoftLayer commitment. Reading between the lines, it is clear that the SoftLayer x86 legacy cannot support the Power-based Watson plans - so, not surprisingly, IBM is bringing SoftLayer to run Power-based systems. There was some confusion about what's being brought where - SoftLayer to Power or vice versa - but that is all good for a company like IBM finding and charting its course to cloud.

Likewise, IBM will invest more in the DevOps visibility and capability of SoftLayer resources. SoftLayer had more of a run-time DNA in the past for its clients, so bringing the additional flexibility to run more and better development cycles on SoftLayer only augments the platform.

 

LeBlanc introduces BlueMix

BlueMix Moves Beyond the Big Blue Legacy

There are a few takeaways showing this is not our father's IBM: 

  • Cloud Foundry - IBM used to build it all itself - now it shares a foundation with co-opetitors like HP, SAP, VMware, EMC etc. - an interesting aspect that we also see, e.g., in the open cloud arena with OpenStack.
     

  • Standards based - IBM has always been a promoter of standards, but was also large enough to create standards by its sheer size. Taking outside standards and building on them is now clearly the strategy.
     

  • Developer focused - IBM was always good at building software with the user in mind. But the developer focus is certainly new, and it was impressive to see to what lengths IBM has gone to understand the (shifting) needs of its developer base.
     

  • Openness - Again, IBM has always been a supporter of open. But it was often more on paper and in marketing than in everyday IT life. It's good to see openness moving well beyond the marketing messaging and into the product DNA.
     

  • Cloud First - This is probably the first product that IBM provides for developers cloud-first - and, for now, cloud-only. Good proof that IBM acknowledges the shift in approach needed to tackle future revenue streams. 

Coding on stage - pair programming with Lawson and LeBlanc

 

The bigger picture - 21st century enterprise applications

At the end of the day the question is: what will 21st century enterprise applications look like, how will they be built, and on what platforms will they run? We know the traditional ERP suites of the late 20th century are not the answer for a digital economy. We also know that building and managing software is getting more and more complex, and that more automation to handle this complexity, coupled with less lock-in, are promising directions for the future of building applications. Which leaves us with the only thing we know - that 21st century apps will run on virtualized environments in the hybrid multi-cloud.

More on the applications themselves - it's clear that IBM is gambling on the API economy promise: being able to bind together the APIs of IBM SaaS properties with other, non-IBM APIs on a powerful platform. We know that this platform is BlueMix, which makes it a very strategic asset for the future of IBM, its customers and its partners.

Implications, Implications...

So what does it mean for ...

  • IBM customers & prospects - You should actively look at BlueMix if you have an immediate application development need. If you are a Rational shop, JazzHub is a good way to get started, though you will probably have many questions for IBM. Otherwise, wait to see how IBM builds out BlueMix over the next quarters. If you are a customer of IBM SaaS, pay attention to the roadmap and functional richness of future releases.

    Prospects should compare BlueMix with the usual suspects out there - Amazon's AWS, Microsoft's Azure, Google GCE come to mind (watch for the end of March events).

    Both customers and prospects should be careful not to overcommit until pricing and licensing are clear - which IBM has not published yet. 

  • IBM partners - IBM has put its cards down for both its PaaS and its 21st century apps strategy - if you want to be part of it, it's time to start evaluating BlueMix and chart a strategy of differentiation in the partner ecosystem. And look at the patterns - this could be a key to more efficient engagements and potentially even the birth of an IP strategy for you.

    Prospective partners who want to get a slice of the IBM ecosystem now have a chance to jump into the mix - it's pretty much year 0 for the API economy. A good time to start.
     

  • IBM - IBM needs to keep adding and building more services and APIs into BlueMix. Early references and success stories will be key to show impact and get the large - and mostly conservative - IBM install base to move to BlueMix faster. Obviously, get the pricing right - easier said than done. Look into exposing more of the products that were featured at IBM Connect as a differentiator to other cloud PaaS offerings out there. Publish a roadmap both for the SoftLayer expansion to Power and for the addition of APIs from the SaaS portfolio to BlueMix.
     

  • IBM competitors - Take a look at the broader perspective of what IBM is trying to do. If you compete with IBM in PaaS - you need an API strategy. If you compete on IaaS - you already know what the bare metal threat is for your business, so position and strategize accordingly. If you compete on SaaS - decide what the future of your 21st century apps architecture is. 

MyPOV

For the longest time, the ultimate application strategy for IBM has not been clear. Acquisitions seemed opportunistic and all over the map - at least from the outside. But maybe the plan was always to move, ultimately, to the API economy. Only Steve Mills will know. And interestingly, Mills spoke about being able to do things better and at a greater scale than ever before (see interview here). 

BlueMix is the key asset as the platform to get IBM to the API economy from a PaaS perspective, while SoftLayer takes care of the infrastructure.

In spring 2014 IBM's future looks remarkably better than 12 months ago.

Here are some Storify Tweet Collections of the keynotes of Day 1, Day 2 and Day 3.


Six Trends Influencing Digital Business Disruption in 2014 - Webinar


Webinar introducing Constellation's PESTEL futurist framework. Find out how six trends: political, economic, societal, technological, environmental, and legislative will affect digital business disruption in 2014. Slides.


First take - Why Workday acquired Identified - (real) analytics matter


In its earnings press release of February 26th, Workday disclosed the acquisition of Identified, a San Francisco-based start-up that specialized in recruitment success using analytical algorithms.

A recap of Identified

The company has tremendous talent and started by condensing social network information into tangible (and, if you will, actionable) candidate profiles, compiled out of available information from social networks. With that, Identified reverses the classic recruiting process - which expects candidates to come to the recruiting company - into a mini headhunting process, where the recruiter can start actively looking for candidates.

Needless to say, the collection, identification and creation of a candidate profile is no trivial task, and Identified spent significant time on this, creating its patent-pending SYMAN process. The recent demos of the product then showed search over these profiles. Back in January we missed some analytical tools beyond the candidate profile that would help the recruiter focus on the right candidates without going through the overall result list, e.g. a scoring model to serve the best candidates to the recruiter. The good news was - this was on Identified's roadmap.

 

Identified screenshot from website.

 

But then it looks like Identified ran out of steam on the business side. It probably needed more capital to build engagement and interaction functionality for recruiters, more ATS functionality, and also had to cater to the need for mobile support for recruiters. The company had taken $22.5M in funding; Workday disclosed the acquisition was $15M - not a good exit, unfortunately. 

 

Why Identified for Workday?

Workday is currently building its recruiting solution, expected in the first half of 2014, but its first release will miss sourcing functionality. My expectation is that Workday will replay the playbook we have seen with 3rd party compensation data: use its new BigData Analytics product capabilities to look at social network data and/or the classic sourcing providers and put the data onto its AWS-based storage for BigData Analytics, then use the SYMAN algorithms to collect and condense it into candidate profiles and serve these to the users of its new recruiting product.

This has a number of pretty intriguing consequences:

 

  • Innovative Recruiting - Workday will be able to serve the best external candidates into a recruiting scenario. Ideally not even to the recruiter, but to the manager opening the requisition.

  • Real Analytics at work - The condensation and serving of the best candidates requires some real analytics (the kind that takes an action or suggests something). Identified can provide them - if they can find the way to serve the best candidates.

  • Workday BigData Analytics at work - Many observers, myself included, have given Workday a hard time over its current BigData Analytics offering being neither of those. If Workday does what Identified enables it to do, this would really be BigData (structured and unstructured data with the 3 Vs - Identified demos claimed to run on 1B+ candidates) and Analytics (condensation, scoring).

 

myPOV

If all that happens (and it seems likely to me), Workday will have a very good value proposition for its recruiting product and deserves kudos for bringing some very innovative business processes into recruiting, differentiating it from the rest of the vendors. I have expressed my concern that Workday wasn't showing enough thought leadership in the recruiting area for its position as an HCM SaaS leader, but this may well change now. The next months and briefings will tell. 
 
Good for Workday customers. Good for Workday talent - an acqhire. Not so good for Identified investors.
 

3 Ways to Maximize Social Media at SharePoint Conference #spc14



If you live and work in the world of SharePoint, next week brings the Super Bowl of events – over 10,000 people will be attending SharePoint Conference 2014. It’s great to see a fantastic lineup of sessions, speakers and activities. In fact, I would love for you to join me in these activities that I’ll be a part of:

Monday Mar 3

10:30am – 11:00am
AvePoint Booth Book Signing: SharePoint for Project Management 2010

12:40pm-1:10pm
Nintex Booth Presentation: How to Inspire, Thrive and Drive Purposeful Collaboration

3:45pm – 5:00pm
SPC100 Session: Beyond Deployment: How Can IT Inspire, Motivate and Drive Sustainable Adoption

9:00pm – 12:00am
AvePoint Red Party

Tuesday, Mar 4

1:45pm – 3:00pm
SPC209 Session – Winning User Adoption Strategies from Best Buy, Nationwide Insurance and Trek Bikes

5:00pm – 6:15pm
SPC276 Session – Lead the BYOD Revolution: Effectively Manage a Multi-Device & Multi-Generational Workforce

7:00pm – 10:00pm
SPC14 Evening Event: Las Vegas Motor Speedway

 

I’m also very excited to see that the buildup has started and community engagement is brewing thanks to social channels like Twitter, Facebook and Yammer. Make sure you follow @SPConf and #SPC14 on Twitter to be updated on the latest and greatest buzz. As the excitement is heating up and you’re preparing for this amazing event, I’d like to share 3 tips on how you can maximize social media at #SPC14:

1. Schedule Twitter, Facebook, LinkedIn and Yammer posts

If you are part of the SPC14 event team, a speaker or sponsor,  I highly recommend you use scheduling tools to post regular reminders, announcements, activities, and session schedules. This way, you don’t have to worry about posting it manually. I use SocialOomph, IFTTT and Buffer to schedule my posts.

2. Be a trusted friend and not a used car salesman

It breaks my heart to see people and companies coming out of the woodwork to post on social media only when there are major events like this. The last time they tweeted was at the last Microsoft event. Even worse, it’s all about them – “Come to our booth to see how fantastic we are”. Postings like these are perceived as noise, no different from a used car salesman pitching.

I highly encourage y’all to be sincere when engaging on social. While it’s okay to talk about your service or product, it shouldn’t just be all about that. For example, you can share key points during sessions, ask other people’s opinions, or connect people with relevant subject matter expertise.

3. Utilize social to interact with audience during a session

I love to use social media to engage the audience when I am presenting. I schedule tweets, embed tweets in PowerPoint and even allow my session attendees to vote using Twitter, showing the results in real time on the screen. Check out this tool from Tim Elliott that lets you do this.

Any other tips you care to share? I look forward to seeing you next week – and please connect with me on Twitter!


HootSuite Launches Managed Social Media Security & Compliance Service for Enterprises


HootSuite Managed Security & Compliance Services educate and empower enterprises to protect their brand and reputation

To help enterprises protect their brands as they scale social media across their organizations, HootSuite has announced the launch of its Managed Security and Compliance Services. This new offering provides organizations with security and compliance tools to protect their brand and social media assets from internal and external threats. It is critical that organizations figure out social compliance: it’s one thing to focus on engaging your customers; it’s another to make sure that ALL of that engagement is “on brand” and keeps the brand in integrity.

“While businesses have adopted social media in record numbers, the lack of governance puts enterprises at risk for online security and compliance breaches,” says Greg Gunn, VP Alliances, Business Development, and Platform at HootSuite. “Our new managed services are designed to equip the enterprise with the right tools and response plans so they can safely use social media to grow their business while protecting their brand.”

HootSuite’s Managed Security and Compliance Services create a secure, complete environment for policy management and enforcement across an enterprise’s entire organization, protecting its brand and data against a full spectrum of security and compliance threats. This is even more important for organizations in regulated industries such as finance, healthcare and insurance, as well as those that are publicly traded and require persistent monitoring of their brand reputation.

Who’s using it? Check out ING DIRECT. “The nature of social media means that your brand’s reputation is often at risk,” says Jaime Stein, Senior Manager of Social Media at ING DIRECT. “Therefore, it’s important to have your team aligned and prepared to handle any situation that could compromise customer service or brand reputation. HootSuite’s Situational Simulation prepared our communications team for our recent company re-naming launch so that when it took place, our team was able to handle the response from our clients and the public effectively and efficiently.”

HootSuite’s new Managed Security & Compliance Services features include:

  • Social Media Asset Audit: Identify and govern both authorized and unauthorized social media profiles. The audit creates an Access Control Map, which allows administrators to outline legitimate profiles and determine which users can post to those profiles. We also help to remove illegitimate profiles to restore brand reputation.
  • Situational Simulations: Create a response plan for managing situations that could cause a drastic spike in activity volume on Twitter or Facebook. The exercise then engages users across the organization to react to a social media surge in a controlled environment using the HootSuite dashboard. Upon completion, HootSuite provides an executive summary and recommendations on ways to enhance the process and strategy.
  • Social Media Profile Monitoring: Monitor company-owned social media profiles for any security and compliance breaches. This ongoing service provides custom notifications when changes are made to account profiles to provide prompt alert if a hijacking occurs, and real-time content moderation to ensure all content conforms with predetermined compliance policies.

To learn more about managing risk in the enterprise, sign up now for HootSuite’s live webinar, “Managing Risk In a Social Organization: What Every CIO Needs to Know,” on Tuesday, March 11, 2014 at 11:00 a.m. EST. The webinar will feature insights from Nick Hayes, Security and Risk Analyst at Forrester Research; Lissette Santana, Branding and Stakeholder Communications Manager at PPL Electricity; and Sharad Mohan, Director of Customer Success at HootSuite.

More information:

 



Avoid SharePoint eDiscovery Headaches Before They Begin



Last week, this blog featured highlights from the very successful SharePoint eDiscovery webinar with Reed Smith.  It is a topic that came up time and again in my tenure as an analyst; companies and government agencies are becoming tightly bound to SharePoint and want to be sure that eDiscovery ordeals can be avoided.

Microsoft SharePoint continues to gain widespread traction in organizations large and small.  A recent Forrester Research survey found that 66% of respondents will deploy SharePoint server in the next 12 months (source: Forrester Research August 2013 Global SharePoint Usage online survey).  The attraction to SharePoint is obvious – the system creates business benefits: enabling collaboration in efficient ways, providing ways to track versions of documents edited by multiple parties, allowing non-technical business people to apply basic workflow to content-driven processes, and providing faster access to information (via search and integration with the MS Office suite of apps).

For all the value it creates, SharePoint can cause eDiscovery and information governance headaches if proper planning does not take place.  It is all too easy to assume that if information is searchable, eDiscovery will be no problem when the time comes.  But, as is often the case in life, the devil is in the details.  Because SharePoint allows users to add value to content (e.g. adding workflow tasks), there is the factor of metadata to consider.

In eDiscovery, metadata is a critical component of ESI that can wreak havoc on collection efforts.  When we talk about metadata for native ESI, we are usually concerned with the Operating System (OS) fields that are kept in the File Allocation Table (FAT). Different OS formats support a wide variety of fields such as different dates, attributes, permissions and file name formats (long vs. short). These fields are not usually stored within the actual file and so are very vulnerable to alteration or complete loss when items are read or copied. Forensic collection is focused on preserving this ‘envelope’ information so that evidence can be authenticated and the context reconstructed in court. That is only half of the metadata story. Microsoft Office and other programs retain non-displayed information within the header and body of all common file types, especially with the adoption of the XML-based Office 2007 file formats.

This metadata issue is only compounded with SharePoint, because most collection efforts focus only on grabbing SharePoint document libraries (as they are stored on file systems).  But SharePoint is so much more than document libraries (if it weren’t, it would simply be another file share sitting out there).

A defensible SharePoint collection solution will be able to capture document libraries, metadata, and truly enable contextual preservation.  When considering a SharePoint collection tool be sure that it:

  • Allows for incremental preservation – monitoring changes, identifying and preserving different document versions, and incrementally preserving to multiple matters over time
  • Maps to custodian access and prevents over-preservation – collecting and preserving only what is relevant to a particular custodian, not the entire SharePoint site
  • Is deployable in the same fashion as SharePoint, which tends to be highly decentralized and siloed; the collection solution should thus be deployable on demand, directly to SharePoint sites, through a highly automated remote installation, with intuitive administration and operation performed through a standard web browser

At the end of the day, informed customers will make sure that the collection tool can capture more than just SharePoint document libraries – all the metadata as well.  Also, look for solutions that will not impact the production environment too heavily; you don’t want to bring SharePoint to its knees when it is a valuable business application.  And finally, get legal and IT together on the same page about how to reasonably prove that your SharePoint preservation and collection methodologies and tools are defensible.



Money from Nothing



Last week I wrote about Bitcoin and cryptocurrencies. Thursday night in New York I stumbled into another innovation in the medium of exchange:

This arrived with my check at the excellent Toshie’s Living Room in the Flatiron Hotel. Hardly a surprising development, but provocative.

I know you’re waiting for me to say this increases the liquidity of your social assets, but I’m not going there.

Consider the next phases of this market:

·  The frank and universal incentive (as opposed to asking your limited number of friends to review you) will further devalue the currency of Yelp approval, or of approval on any other social review site.

·  People wishing to monetize their status as Senior Contributor or Mega Maven or whatever should be able to trade up to the top shelf by showing their status. Finally — value for your Klout score!

·  Would Toshie pour you a second shot to add a review on, say, Time Out New York? You could drink all night, going from one club to the next and imbibing in exchange for reviews on all your social media.

More seriously, how does the cost per impression of this medium compare with, say, advertising on taxi screens?  It seems pretty economical, and reasonably well targeted, and builds some goodwill with the customer you already have. So I’d expect to see a lot more of it.

Postscript: earlier in the evening, in exchange for my applause, Melanie Marod, the also excellent singer, gave me a CD of her music. 

To paraphrase Dire Straits, it was an evening of “music for nothin’ and my drinks for free.”  This is what happens when Andy Warhol’s 1968 prophecy — “in the future, everyone will be world-famous for fifteen minutes” — meets Ray Kurzweil’s from 2001: “We’re doubling the rate of progress every decade.”

Now, everyone’s a nano-celebrity. – CAM

Posted by Chris Meyer on February 26, 2014


gotofail and a defence of purists


The widely publicised and very serious "gotofail" bug in iOS 7 took me back ...

Early in my career I spent seven years in a very special software development environment. I didn't know it at the time, but this experience set the scene for much of my understanding of information security two decades later. I was in a team with a rigorous software development lifecycle; we attained ISO 9001 certification way back in 1992. My company deployed 30 software engineers in product development, 10 of whom were dedicated to testing. Other programmers elsewhere independently wrote manufacture test systems. We spent a lot of time researching leading edge development methodologies, such as Cleanroom, and formal specification languages like Z.

We wrote our own real time multi-tasking operating system; we even wrote our own C compiler and device drivers! Literally every single bit of the executable code was under our control. "Anal" doesn't even begin to describe our corporate culture.

Why all the fuss? Because at Telectronics Pacing Systems, over 1985-1989, we wrote the code for the world's first software controlled implantable defibrillator, the Guardian 4210.

The team spent relatively little time actually coding; we were mostly occupied writing and reviewing documents. And then there were the code inspections. We walked through pseudo-code during spec reviews, and source code during unit validation. And before finally shipping the product, we inspected the entire 40,000 lines of source code. That exercise took a five person team working five hours a day for two months.

For critical modules, like the kernel and error correction routines, we walked through the compiled assembly code. We took the time to simulate the step-by-step operation of the machine code using pen and paper, each team member role-playing parts of the microprocessor (Phil would pretend to be the accumulator, Lou the program counter, me the index register). By the end of it all, we had several people who knew the defib's software like the back of their hand.

And we had demonstrably the most reliable real time software ever written. After amassing several thousand implant-years, we measured a bug rate of less than one in 10,000 lines.

The implant software team had a deserved reputation as pedants. Over 25 person-years, the average rate of production was one line of debugged C per team member per day. We were painstaking, perfectionist, purist. And argumentative! Some of our debates were excruciating to behold. We fought over definitions of "verification" and "validation"; we disputed algorithms and test tools, languages and coding standards. We were even precious about code layout.

Yet 20 years later, purists are looking good.

Last week saw widespread attention to a bug in Apple's iOS operating system which rendered a huge proportion of website security impotent. The problem arose from a single superfluous line of code - an extra goto statement - that nullified checking of SSL connections, leaving users totally vulnerable to fake websites. The Twitterverse nicknamed the flaw #gotofail.

There are all sorts of interesting quality control questions in the #gotofail experience.

  • Was the code inspected? Do companies even do code inspections these days?
  • The extra goto was said to be a recent change to the source; if that's the case, what regression testing was performed on the change?
  • How are test cases selected?
  • For something as important as SSL, are there not test rigs with simulated rogue websites to stress-test security systems before release?

There seem to have been egregious shortcomings at every level: code design, code inspection, and testing.

A lot of attention is being given to the code layout. The spurious goto is indented in such a way that it appears to be part of a branch, but it is not. If curly braces were used religiously, or if an automatic indenting tool had been applied, then the bug would have been more obvious (assuming that the code actually gets inspected by humans).

I agree of course that layout and coding standards are important, but there is a much more robust way to make source code clearer.  Beyond the lax testing and quality control, there is also a software-theoretic question in all this that is getting hardly any attention: Why are programmers using ANY goto statements at all?

I was taught at college and later at Telectronics that goto statements were to be avoided at all costs. Yes, on rare occasions a goto statement makes the code more compact, but with care, a program can almost always be structured to be compact in other ways. Don't programmers care anymore about elegance in logic design? Don't they make efforts to set out their code in a rigorous structured manner?

The conventional wisdom is that goto statements make source code harder to understand, harder to test and harder to maintain. Kernighan and Ritchie - UNIX pioneers and authors of the classic C programming textbook - said the goto statement is "infinitely abusable" and that it should "be used sparingly, if at all." The Telectronics implant software coding standard prohibited goto statements, without exception.

Hard to understand, hard to test and hard to maintain is exactly what we see in the flawed iOS7 code. The critical bug never would have happened if Apple, too, had banned the goto.

Now, I am hardly going to suggest that fanatical coding standards and intellectual rigor are sufficient to make software secure. It's unlikely that many commercial developers will be able to cost-justify exhaustive code walkthroughs when millions of lines are involved even in the humble mobile phone. It's not as if lives depend on commercial software.

Or do they?!

Let's leave aside that vexed question for now and return to fundamentals.

The #gotofail episode will become a textbook example of not merely the importance of attention to detail, but moreover of disciplined logic, rigor, elegance, and fundamental coding theory.

Yet a deeper lesson perhaps in all this is the fragility of software. Prof Arie van Deursen nicely describes the iOS7 routine as "brittle". I want to suggest that all software is tragically fragile. It takes just one line of silly code to bring security to its knees. The sheer non-linearity of software - the ability for one line of software anywhere in a hundred million lines to have unbounded impact on the rest of the system - is what separates development from conventional engineering practice. Software doesn't obey the laws of physics. No non-trivial software can ever be fully tested, and we have gone too far for the software we live with to be comprehensively proof read. We have yet to build the sorts of software tools and best practice and habits that would merit the title "engineering" (see also "Security Isn't Secure").

I'd like to close with a philosophical musing that might have appealed to my old mentors at Telectronics. We have reached a sort of pinnacle in post-modernism where the real world has come to pivot precariously on pure text. It is weird and wonderful that engineers are arguing about the layout of source code - as if they are poetry critics.

We have come to depend daily on great obscure texts, drafted not by people we can truthfully call "engineers" but by a largely anarchic community we would be better off calling playwrights.


FIDO Alliance goes from strength to strength


With a bunch of exciting new members joining up on the eve of the RSA Conference, the FIDO Alliance is going from strength to strength. And they've just published the first public review drafts of their core "universal authentication" protocols.

An update to my Constellation Research report on FIDO will be published soon. Here's a preview.

The Go-To standards alliance in protocols for modern identity management

The FIDO Alliance - for Fast IDentity Online - is a fresh, fast growing consortium of security vendors and end users working out a new suite of protocols and standards to connect authentication endpoints to services. With an unusual degree of clarity in this field, FIDO envisages simply "doing for authentication what Ethernet did for networking".

Launched in early 2013, the FIDO Alliance has already grown to nearly 100 members, amongst which are heavyweights like Google, Lenovo, MasterCard, Microsoft and PayPal as well as a couple of dozen biometrics vendors, many of the leading Identity and Access Management solutions and service providers and several global players in the smartcard supply chain.

FIDO is different. The typical hackneyed elevator pitch in Identity and Access Management promises to "fix the password crisis" - usually by changing the way business is done. Most IDAM initiatives unwittingly convert clear-cut technology problems into open-ended business transformation problems. In contrast, FIDO's mission is refreshingly clear cut: it seeks to make strong authentication interoperable between devices and servers. When users have activated FIDO-compliant endpoints, reliable fine-grained information about their client environment becomes readily discoverable by any servers, which can then make access control decisions, each according to its own security policy.

With its focus, pragmatism and critical mass, FIDO is justifiably today's go-to authentication standards effort.

In February 2014, the FIDO Alliance announced the release of its first two protocol drafts, and a clutch of new members including powerful players in financial services, the cloud and e-commerce. Constellation notes in particular the addition to the board of security leader RSA and another major payment card company, Discover. And FIDO continues to strengthen its vital "Relying Party" (service provider) representation with the appearance of Aetna, Goldman Sachs, Netflix and Salesforce.com.

It's time we fixed the Authentication plumbing

In my view, the best thing about FIDO is that it is not about federated identity but instead it operates one layer down in what we call the digital identity stack. This might seem to run against the IDAM tide, but it's refreshing, and it may help the FIDO Alliance sidestep the quagmire of identity policy mapping and legal complexities. FIDO is not really about the vexed general issue of "identity" at all! Instead, it's about low level authentication protocols; that is, the plumbing.

The FIDO Alliance sets out its mission as follows:

  • Change the nature of online authentication by:
    • Developing technical specifications that define an open, scalable, interoperable set of mechanisms that reduce the reliance on passwords to authenticate users.
    • Operating industry programs to help ensure successful worldwide adoption of the Specifications.
    • Submitting mature technical Specification(s) to recognized standards development organization(s) for formal standardization.

The engineering problem underlying Federated Identity is actually pretty simple: if we want to have a choice of high-grade physical, multi-factor "keys" used to access remote services, how do we convey reliable cues to those services about the type of key being used and the individual who's said to be using it? If we can solve that problem, then service providers and Relying Parties can sort out for themselves precisely what they need to know about the users, sufficient to identify and authenticate them.

All of this leaves the 'I' in the acronym "FIDO" a little contradictory. It's such a cute name (alluding of course to the Internet dog) that it's unlikely to change. Instead, I overheard that the acronym might go the way of "KFC", where eventually it is no longer spelled out and just becomes a word all by itself.

FIDO Alliance Board Members

  • Blackberry
  • CrucialTec (manufactures innovative user input devices for mobiles)
  • Discover Card
  • Google
  • Lenovo
  • MasterCard
  • Microsoft
  • Nok Nok Labs (a specialist authentication server software company)
  • NXP Semiconductors (a global supplier of card chips, SIMs and Secure Elements)
  • Oberthur Technologies (a multinational smartcard and mobility solutions provider)
  • PayPal
  • RSA
  • Synaptics (fingerprint biometrics)
  • Yubico (the developer of the YubiKey PKI-enabled 2FA token)

FIDO Alliance Board Sponsor Level Members

  • Aetna
  • ARM
  • AGNITiO
  • Dell
  • Discretix
  • Entersekt
  • EyeLock Inc.
  • Fingerprint Cards AB
  • FingerQ
  • Goldman Sachs
  • IdentityX
  • IDEX ASA
  • Infineon
  • Kili
  • Netflix
  • Next Biometrics Group
  • Oesterreichische Staatsdruckerei GmbH
  • Ping Identity
  • SafeNet
  • Salesforce
  • SecureKey
  • Sonavation
  • STMicroelectronics
  • Wave Systems