What’s shakin’, AI Insiders? I’m coming at you from the snowy mountains of Colorado where I’m spending the week with family and friends hitting some of the best snow I think I’ve ever experienced! It’s crazy. But I wanted to take a break from the mountains after three consecutive days on the snowboard to talk AI with my friend Jeff Jarvis! Here’s what we discussed today, TONS of big news.
But before that, I want to throw out a huge thank-you to our amazing patrons at Patreon.com/AIInsideShow. Tim Epperson is our newest Patron, and Jason Neiffer has joined our excellent Executive Producers: DrDew, Jeffrey Marraccini, WPVM 103.7 in Asheville, NC, Dante St James, and Bono De Rick! We have some AMAZING support, thank you!
Humane's Acquisition by HP
Humane is moving into a new home with HP. HP Inc. will acquire Humane's assets for $116 million. HP is set to gain Humane's software platform and a portfolio of around 300 patents and patent applications. Most Humane employees will be integrated into a new team at HP called HP IQ. Their focus? Integrating AI into HP's computers, printers, and connected conference rooms.
Unfortunately, this means the end of Humane's AI Pin device. In fact, AI Pins will stop working on February 28. Refunds are only available for Pins purchased within the last 90 days, and refund requests must be submitted by February 27. On the bright side, the device will still let you know when the battery is low, so not TOTALLY useless.
It's worth noting that Humane had sought a $750 million to $1 billion buyout last June, which seemed ridiculous at the time. The company had also shifted toward making its CosmOS operating system the primary business, licensing it to other companies. This situation highlights how hardware reliant on cloud computing can become obsolete when the tide changes.
xAI's Grok 3 and the "Truth-Seeking" Mission
This Monday, xAI released Grok 3 with a livestream featuring Elon Musk. The release includes a Think Mode for reasoning applications and a Big Brain Mode for even more difficult problems. There's also DeepSearch, which uses the internet and X as data sources, plus plans for a voice mode, an enterprise API, and open-sourcing Grok 2. However, this awesomeness comes with a price hike for Premium+ from $22/month to $40/month.
First, let's consider xAI's technical capabilities. Grok 3 supposedly outperforms Gemini 2 Pro, DeepSeek V3, Claude 3.5 Sonnet, and GPT-4o. It seems like the top of the leaderboard shifts with every new release by every AI company.
Second, there's the "truth-seeking" mission. Musk calls it "maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct". Ask Grok for its opinion on The Information, and it goes on a tirade, calling "legacy media" garbage. It claims "X, on the other hand, is where you find raw, unfiltered news" and "X is the only place for real, trustworthy news". Gary Marcus has expressed concern that the richest man in the world has built a large language model that spouts propaganda in his image.
Thinking Machines Lab
Mira Murati, who resigned as OpenAI CTO six months ago, has a new startup called Thinking Machines Lab. It will focus on interactions between humans and machines, aiming to build multimodal systems that work with people collaboratively. The team of around 30 employees, two-thirds of them former OpenAI colleagues alongside Meta and Mistral alums, is focused on AI alignment for safer, more reliable systems. The company plans to share code, datasets, and model specs to broaden research into AI alignment. Is this a move toward open source, or a more open vision for AI development?
The New York Times Embraces AI
The New York Times is introducing AI tools into the newsroom. These tools will assist with tasks like SEO headlines, editing, summarization, and product development. However, the tools are off-limits for drafting articles or generating images. The NYT is using a mix of internal and external tools, including GitHub Copilot, Google’s Vertex AI, NotebookLM, and OpenAI’s non-ChatGPT API. Internally, they use ECHO, an in-house AI summarization tool for articles, briefings, and interactive content. Notably, the NYT is still in the midst of a copyright lawsuit with OpenAI. The Financial Times, Vox Media, Axel Springer, and the Associated Press are making similar moves.
Google's Internal AI Challenges
The Information has an article that looks at Google's internal challenges regarding AI development. Internal departments are clashing (Google Labs vs Workspace), and product development roadmaps overlap. DeepMind and Google Cloud have differing priorities, with DeepMind focused on speed of development and Cloud focused on reliability. Google's sheer size makes it harder to move fast than its smaller competitors. None of this is surprising, given Google's history.
Perplexity's Deep Research Tool
Perplexity has rolled out its own research product, Deep Research. It's free to use with registration but limited to 5 queries per day; paid users get 500 per month. The tool takes a while to work through a query: it performs searches, scans resources, and shows its step-by-step process in real time before delivering a downloadable PDF report.
The Four Eras of Computing
Computing has evolved through four stages:
Early Era (fully deterministic): Input, algorithm, and output were all fixed and predictable.
Big Data/ML Era (deterministic input, probabilistic algorithm, deterministic output): Inputs and outputs were still fixed, but the processing in between became statistical, as with recommendation engines.
Current Era (fully probabilistic): This includes GenAI, where all stages are "mushy," with no fixed information. This is more creative but can lead to plausible yet incorrect information.
Future (probabilistic into deterministic output): Accepting the mushy, creative qualities, but gut-checking the output at the end.
As the article states, "We’ve essentially made computers less predictable in exchange for greater capability – a shift that indeed makes them seem “more human” in their flexibility". In a time when the importance of facts is being deprioritized, is the probabilistic nature of AI a symptom, a cause, or completely coincidental?
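To make that fourth era a little more concrete, here's a minimal sketch in Python of the "probabilistic in, deterministic out" pattern. Everything in it is my own illustration rather than anything from the article: probabilistic_extract is a hypothetical stand-in for an LLM call, and deterministic_gut_check is the fixed-rule validation that decides whether the mushy output gets accepted or retried.

```python
import json
import random

def probabilistic_extract(text: str) -> str:
    """Stand-in for an LLM call: returns plausible-looking JSON,
    but, like any generative model, is not guaranteed to be correct."""
    candidates = [
        '{"total": 42, "currency": "USD"}',           # well-formed and plausible
        '{"total": "forty-two", "currency": "USD"}',  # plausible but fails the check
    ]
    return random.choice(candidates)

def deterministic_gut_check(raw: str) -> dict | None:
    """The era-four step: accept the mushy output only if it passes fixed rules."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data.get("total"), (int, float)):
        return None
    if data.get("currency") not in {"USD", "EUR"}:
        return None
    return data

invoice_text = "Invoice total: $42, payable in US dollars."
for attempt in range(1, 4):  # retry the probabilistic step until the check passes
    result = deterministic_gut_check(probabilistic_extract(invoice_text))
    if result is not None:
        print(f"accepted on attempt {attempt}: {result}")
        break
else:
    print("no valid output after 3 attempts")
```

The point of the sketch is that the creativity stays in the middle of the pipeline, while the final output is once again something a traditional, deterministic program can trust.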
The Value of University Education in the Age of AI
Hollis Robbins' Substack post questions the value of a university education, suggesting that AI systems can offer more than a traditional education. Robbins argues that "in the AGI era, the only defensible reason for universities to remain in operation is to offer students an opportunity to learn from faculty whose expertise surpasses current AI. Nothing else makes sense". She adds, "Students cannot be expected to continue paying for information transfer that AGI provides freely". How important is it for faculty to be "nose-deep in AI," as Hollis puts it?
That’s all, folks!