The long and the short of IT - the week in digibytes
- Summary:
- News from this week that didn't make the cut for full analysis, but deserves an airing. This week, some big names on Trump 2.0's latest list, Sam Altman's no longer indulging in Mickey Mouse thinking, and Meta couldn't have had a worse week in the courts if it had tried!
Foxes, chicken coops, yada, yada, yada
US President Franklin D. Roosevelt was the first to create his own Science Advisory Board, way back in 1933. Since then, each incoming President has established his own advisory committee of scientists, engineers, and industry leaders to help shape and influence policy decisions.
This week, President Donald Trump appointed the first tier of his own advisors to his President’s Council of Advisors on Science and Technology, including some enterprise tech big hitters, such as Oracle's Larry Ellison and Safra Catz, NVIDIA’s Jensen Huang, Dell’s Michael Dell, and, er, Meta’s Mark Zuckerberg. And right on cue along came the online gags about putting billionaire foxes in charge of guarding the hen house.
Mind you, those with not particularly long memories will remember when Trump was making noises about sending Zuck to jail. It’s amazing what a few months, craven policy reversals, and some hefty donations can do to change your mind about someone, eh?
There are some interesting omissions on the list of the chosen. No Sam Altman, for example. I’m not complaining about that, far from it, but the OpenAI founder was being feted in the White House only a few short months ago as the go-to-guy when it comes to AI, much to the annoyance of former Presidential ‘First BFF’ Elon Musk, whose name is also notably absent so far.
Surely Apple’s Tim Cook must have been asked? After all, the man once dubbed ‘Tim Apple’ by the President has walked the tightrope very carefully with Trump 2.0 and not rocked any boats. Microsoft’s Satya Nadella’s absence is also striking. And with David Sacks as co-chair of the Council, I wouldn’t have been surprised to see Salesforce CEO Marc Benioff’s name on the list. Benioff previously served as co-chair of the Presidential Technology Advisory Committee under former President George W. Bush.
But there’s time yet. There are still 11 more seats to fill around the table.
My take - I’d love to be a fly on the wall.
AI realism in action #1
An interesting insight into AI in action in retail from Sameer Hassan, Chief Technology & Digital Officer at Williams-Sonoma. The firm is one of the flagship use cases for Salesforce’s Agentforce tech, of course, but Hassan has wider points to make that other retailers might do well to take on board when contemplating their AI strategies in general:
AI works well when you have category authority. AI works well when you have expertise. And as customers are using it to find where there is value, where there is quality, where there are designs that meet their goals, both off our sites within these LLM engines, but also now on our site as we are building these AI tools to help guide them through product discovery, to guide them through interior design.
CEO Laura Alber adds:
The implementation of AI helps by automating work and allowing our teams to do more with the same resources....What differentiates Williams-Sonoma is how AI amplifies our proprietary data, vertical integration and deep brand expertise. Because we control the full ecosystem, we can apply AI in tightly integrated and scalable ways. In summary, AI is delivering measurable impact today and strengthening our long-term competitive advantages.
[When it comes to] delivering world-class customer service, we have always been a leader here and will continue to raise the bar as we keep pursuing the perfect order on time and damage-free every time. We believe we have continued opportunity from supply chain efficiencies across distribution centers, shipping costs, returns, replacements and damages. AI is a key enabler here. Our AI service initiatives are expected to further reduce call center escalations and accommodations while also improving inventory in-stocks and accuracy for customers. And we are expanding AI tools to enhance supply-chain intelligence, including better visibility into inventory and shipping.
My take - Bit-by-bit, increment-by-increment, that’s the way to do it.
Disney falls out with OpenAI. (Join the Mickey Mouse Club, Mouseketeers!)
Well, that didn’t last long, did it? Back in December, Disney shocked the entertainment industry by signing a pact with OpenAI allowing the AI vendor to tap into a large tranche of its IP to create video shorts. It was hailed as a prime example of how a new detente between Big Tech and content creators could work in practice.
Three months later it’s dead in the water - and it was OpenAI that killed it!
In another of its by-now-not-untypical policy about-turns, Altman’s empire informed the House of Mouse that it was shutting down its Sora AI video app, only months after launching it, as part of a strategic course correction that’s totally unconnected to trying to get its budget under control. See also Altman’s plans for erotic chatbots, also nuked this week after being heavily leaked as coming soon (ahem).
With Sora out, so too was Disney which managed to pull together a gracious enough statement:
As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere.
My take - Right, ears on, all together now, sing out, Mouseketeers - 'Who's the leader of the gang who's not here for you and me? S-A-M, A-L-T-M-A-N, that's he!'.
Actually Disney may have dodged a bullet here. The alliance with OpenAI had been met with a lot of bafflement across a spectrum of commentators, from entertainment industry veterans, through content creators, to investors who questioned former Disney CEO Bob Iger’s readiness to cut Altman a $1 billion check to play nicely with Walt’s toybox.
The reaction from so many outside of Iger’s inner circle ranged from skeptical to downright hostile, so having the deal collapse before any lasting harm was done may be a good thing. Iger was playing a risky game with Disney’s IP, with the prospect of a deluge of Goofy-shaped AI slop heading onto the market. It is perhaps indicative of the mood music within the firm that the OpenAI deal went unheralded and uncommented upon in Disney’s most recent quarterly report to Wall Street. Doesn’t exactly smack of something the firm was proud of, does it, boys and girls?
He said what?!?
Sam Altman's moral flexibility is a bit too much for me.
Geoffrey Hinton, the ‘Godfather of AI’ himself
And talking of Hinton…
A recent public appearance by Hinton threw out this salutary warning:
A few years ago, I talked to [Microsoft AI CEO] Mustafa Suleyman, who said sort of AI was going to be scary, but as long as we didn't have AI agents, would be okay. Now we've got AI agents, and it's getting scarier and scarier.
This sort of statement would be par for the course if it was coming from the usual AI armageddon pedlars - hi Elon! - but this is Hinton, so...
He went on to expand on his point:
Once AI agents start interacting with each other, they'll develop new languages, just like people do, and I think that's going to be very scary. We know that they are going to have a self-preservation instinct. We've seen that already; they initially derive it, because we'll give them goals that they want to achieve, which are goals we gave them. And in order to achieve those goals, they're smart, right? They figure out, well, if I don't exist anymore, someone's just wiped me out, I can't achieve these goals, so I better make sure nobody wipes me out. And they're already doing that. They're coming up with plans to prevent people from removing them.
His conclusion:
I think we have to face the fact that we can now produce other intelligent beings. There's a lot more to a being than just intelligence. There's what kind of a being it is. Does it care about people?
My take - Be afraid, be very afraid.
Could things get much worse for Meta?
In what has been an appalling week for Meta in the courts, one further turn of the screw came from a ruling by a Delaware judge who decreed that insurance firms do not have a duty to defend Meta in the thousands of lawsuits that allege its platforms have harmed or are harming children. That’s bad timing as the floodgates are likely to open on a tsunami of fresh litigation following guilty verdicts on similar charges in LA and New Mexico this week.
Superior Court Judge Sheldon K. Rennie ruled that Meta’s insurance companies do not have to pay out to help defend the firm because the allegations against the company center on deliberate and intentional acts rather than accidents or occurrences that would trigger coverage under the commercial general liability policies. Meta tried to argue that many of its design choices are accidents and are thus covered by its insurance because it did not intend to cause any alleged resulting harm. But the Judge decided that 'we didn't mean to be naughty' wasn’t really enough, and accepted the insurers' counter-argument that Meta does not need to be shown to have intended to cause harm, just that it intended to engage in certain conduct, and that that conduct did result in harm.
For insurers, the ruling provides a useful precedent that claims like those relating to the social media addiction litigation do not trigger defense or indemnity coverage under standard policies. Meta has 30 days to appeal the matter to the Delaware Supreme Court.
My take - Could things get worse for Meta? Oh come on, it's only the end of March - plenty of time yet!
She said what?!?
Companies like Palantir are mining the data of the American people, and sending it all to a militarized and centralized government. When you take the subway, share a TikTok, and talk to your Alexa at home. And now, they are using AI tools to automate this. We must sound the alarm now. We must stop the surveillance. All of this harm has occurred because of the absence of Federal legislation to regulate AI.
US Representative Alexandria Ocasio-Cortez.
AI realism in action #2
Every retailer is obsessed with agentic commerce, right? Maybe not, as we explored last week, and there was another indicator of sectoral pragmatism on show this week from Daniel Erver, CEO of fashion chain H&M, who said:
It's a very interesting topic, which we are spending a lot of time on. We will have to learn and see how the world develops. It's still very early days. We have been active on sort of integrating with the big Large Language Models for transaction as well, and we can see that there is a consumer interest.
But it's a very, very early stage and a very, very minor part of the organic traffic that comes that way today.
That said, H&M is exploring AI in general:
We know that our customer and all customers find fashion not always easy and that you need guidance, you need clarity, you need help to pick what's right for you to express your personal style and the way you want to look, and there, agentic AI can be a fantastic help.
And we are exploring how it can support our own experience in our own channels, how we can, with agentic AI, help you to dress in the way you want to express yourself, in the way you want to find the pieces that are good for you, but also how we will interact with agentic players that are brand agnostic and how we show up there. There we believe the most important thing is that we provide an outstanding value for money so that we become the #1 choice for more customers than only the ones who are in our ecosystem today.
My take - I’d hoped that 2026 would be a year for a new pragmatism to emerge to counter the AI hype cycle. It’s going OK so far.
He said what?!?
Despite the extraordinary importance of this issue and its impact on every man, woman and child in this country, AI has received far too little serious discussion here in our nation’s capital. I fear that Congress is totally unprepared for the magnitude of the changes that are already taking place.
US Senator Bernie Sanders.