Results

Hasbro said in an SEC filing that it was hit by a cyberattack that forced the company to take certain systems offline.

In the filing, Hasbro said it discovered unauthorized access to its network on March 28. Hasbro added:

"The Company’s investigation is ongoing, and it is working diligently to resolve the matter and determine the full scope of impact. The Company has implemented and continues to implement business continuity plans to enable it to continue to take orders, ship product and conduct other key operations while it resolves this situation. The need to run these interim measures may continue for several weeks before the situation is fully resolved and may result in some delays."

Oracle has cut somewhere between 20,000 and 30,000 jobs, or about 18% of its workforce, and the emails to affected employees are sprinkled throughout X and LinkedIn. The roundup of coverage is here.

The company has a heavy debt load and negative cash flow due to its AI infrastructure buildout. Oracle raised $50 billion in debt and equity for its data center buildout earlier this year.

In a nutshell, Oracle is clearly choosing GPUs over people. It remains to be seen whether AI has enabled Oracle to simply be more efficient or whether the company needs to cut costs any way it can. Free cash flow on a trailing 12-month basis is horrid.

Oracle free cash flow 022826

Another argument for the Oracle layoffs is that the company is simply rightsizing after the Cerner acquisition. In 2019, which I use as a primary benchmark due to the COVID hiring spree, Oracle had 136,000 employees. At the end of fiscal 2025, the company had 162,000.
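For readers who want to sanity-check the scale, here's a quick back-of-the-envelope pass on the figures above. The headcounts and the cut range come from the reporting; the percentages are my own arithmetic.

```python
# Back-of-the-envelope check on the Oracle figures cited above.
fy2025_headcount = 162_000          # end of fiscal 2025
headcount_2019 = 136_000            # pre-COVID-hiring-spree baseline
cuts_low, cuts_high = 20_000, 30_000

# Share of the workforce cut, at each end of the reported range.
pct_low = cuts_low / fy2025_headcount * 100
pct_high = cuts_high / fy2025_headcount * 100

# Headcount growth since 2019, most of it post-Cerner.
growth = fy2025_headcount - headcount_2019

print(f"cuts: {pct_low:.1f}%-{pct_high:.1f}% of workforce")
print(f"growth since 2019: {growth:,} employees")
```

The high end of the range works out to roughly 18.5% of fiscal-2025 headcount, which lines up with the 18% figure being reported, and the cuts would unwind most of the 26,000-employee growth since 2019.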

Blue Yonder's Supply Chain Compass report provides a sobering look at the supply chain and how leaders are adapting to frequent disruptions. In a survey of 700 supply chain pros, Blue Yonder found that 66% of leaders say they are ready for the future, down from 73% last year.

The No. 1 priority for supply chain leaders was improving efficiency and productivity, with faster and better decision-making at No. 2, up from No. 7 a year ago. Geopolitical disruptions were a big concern, with only 20% of leaders able to develop and deploy a response within 24 hours.

Blue Yonder supply chain 2026

Oracle NetSuite launched the NetSuite AI Connector Service so customers can bring their own AI assistants via Model Context Protocol (MCP). The service will connect AI assistants to NetSuite apps as well as the NetSuite Analytics Warehouse.

NetSuite said a NetSuite AI Connector Service Companion will help AI assistants understand NetSuite data for permissions, workflows and grounding. The connector service companion will also include a library of more than 100 finance-specific prompts, best practices and governance for various roles.
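NetSuite hasn't published connector code in this announcement, but MCP is an open protocol layered on JSON-RPC 2.0, so the shape of an assistant-to-server tool call is well defined. Here's a minimal sketch of what that wire message looks like; the tool name and arguments (`list_open_invoices`, `limit`) are made up for illustration and are not NetSuite's actual API.

```python
import json

def build_mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, the message an MCP
    client sends to invoke a tool exposed by an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: an assistant asking a NetSuite-backed MCP server
# for open invoices (tool name invented for illustration).
msg = build_mcp_tool_call(1, "list_open_invoices", {"limit": 10})
```

The point of the protocol is that any MCP-capable assistant can send this same message shape to any MCP server, which is what lets customers "bring their own" AI assistants.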

OpenAI is not only losing the buzz race against Anthropic; it's also increasingly seeing its missteps chronicled.

Today's episode of "OpenAI is in a tailspin" includes:

Palantir said it inked a new five-year deal with Stellantis to support its AI and data initiatives. Palantir said Stellantis will broaden its use of Palantir Foundry and Palantir's Artificial Intelligence Platform (AIP). Stellantis has already established its data ontology in Foundry; the two companies have worked together since 2016.

Stanford published a study in Science on LLMs that found AI affirmed users' actions 49% more often than humans on average, including in cases involving deception, illegality and other harms.

The short version from researchers is here, but the paper is a better and more nuanced read.

The conclusion:

"AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being."

Stanford LLM study

A guilty pleasure of mine is reading these prompt primers that use Anthropic Claude to turn you into Warren Buffett, the best CEO ever and the most productive person on the planet.

The latest one, from Nav Toor on X, may have triggered me. This one revolved around automating your entire workday with 12 prompts, apparently based on consultant best practices (which, by the way, often lead you into a ditch).

The prompts instruct Claude to act like McKinsey, Bain, Goldman Sachs, Accenture, PwC and Deloitte, with a dash of big tech culture from Amazon, Google and Salesforce. Oh, and don't forget the EY "end of day shutdown routine generator."

Now I suppose if you ran these prompts you could fake like you knew what you were doing for a while, with a big caveat: if you knew what the problem was and what data you needed, these prompts could be handy. Most of the hard work is figuring out the problem. It's far more likely that you'd run the prompts without knowing the problem, context and data, and sound something like a robot. If we all wind up sounding like management consultants, you know what happened. The new "!" and "superexcited" and " —" is going to be sounding like an executive coach.

Anyone want to give it a whirl and let me know how it goes?