Constellation Insights

IBM researchers achieve record deep learning benchmark with GPUs: Big Blue says it has achieved a big milestone in deep learning software, using a cluster of 64 IBM Power System servers containing 256 NVIDIA GPUs in total. The key advance is the scale-out capability: most deep learning software frameworks make effective use of single systems containing multiple GPUs, but don't do as well running the same jobs across multiple servers, IBM fellow Hillery Hunter said in a blog post.

IBM says the new software achieved 95 percent scaling efficiency with the Caffe deep learning framework, besting a previous record set by Facebook's AI research group using Caffe2, its homegrown framework inspired by the original Caffe project out of UC Berkeley:

IBM Research also beat Facebook’s time by training the model in 50 minutes, versus the 1 hour Facebook took.  Using this software, IBM Research achieved a new image recognition accuracy of 33.8% for a neural network trained on a very large data set (7.5M images). The previous record published by Microsoft demonstrated 29.8% accuracy. 

The new code is available as a technical preview in IBM's PowerAI 4.0 distribution for Caffe and TensorFlow, the latter of which is considered the most popular deep learning framework overall.
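Scaling efficiency, the metric behind IBM's 95 percent claim, measures how closely a cluster's measured throughput tracks ideal linear speedup over a single GPU. A minimal sketch of the calculation (the throughput numbers are illustrative, not IBM's actual figures):

```python
def scaling_efficiency(single_gpu_throughput, n_gpus, measured_throughput):
    """Ratio of measured cluster throughput to ideal linear scaling.

    1.0 means perfect scaling; real distributed training loses some
    efficiency to inter-node communication and synchronization.
    """
    ideal = single_gpu_throughput * n_gpus
    return measured_throughput / ideal

# Illustrative: at 95% efficiency, 256 GPUs deliver ~243x the
# single-GPU processing rate rather than the ideal 256x.
print(round(scaling_efficiency(1.0, 256, 243.2), 3))  # 0.95
```

The gap between measured and ideal throughput is exactly the multi-server bottleneck Hunter's post describes: gradient synchronization traffic between nodes grows with cluster size, which is why near-linear scaling at 256 GPUs is notable.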

POV: Hunter's post is in-depth and well worth a read. However, there is a practical question to keep in mind when considering IBM's new benchmark, says Constellation VP and principal analyst Doug Henschen: namely, where and how customers can take advantage of the PowerAI architecture.

That's because while IBM Power servers have long held certain performance advantages over "industry-standard" Intel x86 servers, the latter now rule the leading public clouds, Henschen says. "The race is on to bring graphics processing units into the picture, and indeed, the bottlenecks will be in the intersections between conventional software and systems and these far more powerful processors," he adds. "IBM's answer is PowerAI, and the achievement of higher efficiency is important and laudable."

It remains to be seen, however, if this will motivate leading-edge AI researchers and the businesses funding their research to move their deep learning experiments into IBM's cloud or to invest and even reinvest in IBM Power for on-premises data centers going forward, Henschen says. "The longstanding, mainstream trend has been to accept slightly lower performance and efficiency and throw more low-cost, commodity capacity at computing challenges."

Salesforce rolls out AI image-recognition tool for marketers: The CRM giant is continuing to release AI applications under the Einstein brand, with the latest being Einstein Vision for Social Studio. The tool's purpose reflects the fact that social media has become a largely visual medium over time, with the explosive rise in photo-sharing, Salesforce says:

Photos on social media represent many consumer behaviors, preferences, wants and needs that are going undetected by marketers. If a person posts an image of a new product, but doesn't include text including the product's name, it's likely that social media monitoring won't capture it.

The new application includes four image libraries, which among them contain 2 million brand logos, 200 foods, 1,000 objects and 60 scenes, Salesforce says.
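The value proposition is catching brand mentions that text-based social monitoring misses. As a purely hypothetical illustration (the function and data shapes are invented for this sketch, not Salesforce's actual Einstein Vision API), logo detections might be filtered against a confidence threshold before reaching a marketer's queue:

```python
def flag_brand_mentions(predictions, brand, threshold=0.8):
    """Return IDs of posts whose detected logo matches the brand
    with at least the given confidence.

    `predictions` is a list of (post_id, label, confidence) tuples --
    a hypothetical shape for illustration only.
    """
    return [post_id for post_id, label, conf in predictions
            if label == brand and conf >= threshold]

# Posts with no caption text can still surface via image classification.
posts = [("p1", "AcmeCola", 0.92), ("p2", "AcmeCola", 0.41),
         ("p3", "OtherCo", 0.95)]
print(flag_brand_mentions(posts, "AcmeCola"))  # ['p1']
```

The confidence threshold matters in practice: at 2 million logos in the library, a low threshold would flood marketers with false positives, while too high a threshold recreates the blind spot the tool is meant to fix.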

POV: The technology looks like it's based on Salesforce's MetaMind acquisition and is an excellent addition to the Marketing Cloud, says Constellation VP and principal analyst Cindy Zhou. "It has been challenging for brands to track and monitor images for social sentiment, and Einstein Vision gives them a simple way to manage this in their existing Marketing Cloud Social Studio environment," Zhou says. Beyond marketing, the application's use case for customer service is strong, as brands can be more proactive in serving customers based on the images they post, she adds.

Privacy research may miss the mark: Researchers at MIT and Stanford conducted a study in which they offered a group of students a free pizza in exchange for them giving up three of their friends' email addresses. Here are two key conclusions of the report, as summarized by a Stanford news release:

The study raised two policy implications.

Since the findings show consumers’ actions don’t align with what they say, and it’s difficult to gauge a consumer’s true privacy preference, policymakers might question the value of stated preferences.

On the other hand, consumers might need more extensive privacy protections to protect consumers from themselves and their willingness to share data in exchange for relatively small monetary incentives.

POV: There are some questions worth raising about the research's conclusions, and the topic of privacy in general, says Constellation Research VP and principal analyst Steve Wilson. For one thing, there's the matter of data quality: were those email addresses real? "Leaving that aside, even if they're giving up their buddies' addresses, the tone of this research has some serious victim-blaming," Wilson says.

"It is wrongheaded to call this a paradox," he adds. "The human condition is all about people making bad choices. What I'd like to see here is a bit more sophistication around behavior change with respect to security."

Wilson points to social media, describing it as "a seductive environment where people are spilling their guts." Facebook has essentially gamified surveillance through features such as photo tagging, which is coupled with facial recognition software on the back end, improving the social network's ability to track users and target ads.

"Security researchers turn around and say there's a paradox in people's behavior, but they're being suckered into bad behavior." Meanwhile, the researchers' second conclusion is half right, Wilson says: "This is not to protect consumers from themselves, it's to protect themselves from businesses. They need to be saved from these digital magnates."

Legacy watch: Homeland Security CIO out after three months: The recently appointed CIO of the U.S. Department of Homeland Security is leaving the agency after just three months on the job, ZDNet reports. Richard Staropoli had reported to former DHS head John Kelly, who is now President Donald Trump's chief of staff.

It's not clear why Staropoli is leaving, but in his short time on the job he had called for major reforms to the way the department's IT operations are run. He previously served as CISO and managing director of the Fortress Investment Group hedge fund, and in a June speech said he was remaking the department to reflect the faster pace of a hedge fund's operations, as Fedscoop reported:

“We’ve moved out all these deputies and all these directors from their offices located all over these different buildings that DHS occupies,” he explained. “We now occupy one floor, one space in a trading floor concept, so when I need to get something done or a vendor needs to come up and we need to address a problem, I’ve got every entity I need in one spot. That cuts down on bureaucracy and allows me the maximum benefit of maximizing my time, so we can achieve results.”

POV: Deputy CIO Stephen Rice will serve as acting CIO until a permanent appointment is made. That candidate could well be Rice, who sources tell Federal News Radio was a key player in executing on the vision Staropoli set out for the agency's IT operations.

It's not clear where the DHS ranks on the IT dysfunction scale among federal departments, but it certainly has a crucial remit, being responsible for domestic cybersecurity matters. A steady hand at the CIO wheel will be more than welcome in an era of ever-heightening malicious cyberattacks.