AI & Dev Digest: NeoCognition's Human-like Agents, OpenAI's Smarter Images, and the Mythos of AI Security
Today's AI digest covers NeoCognition's $40M seed round for human-like AI agents and OpenAI's new image generator with web-search capabilities. It also looks at the dispute between Sam Altman and Anthropic over the cybersecurity model Mythos, which Mozilla has already used to fix hundreds of Firefox bugs, and closes with a look at an amusing AI writing trope.
The world of AI and software development continues its rapid evolution, bringing us closer to more intelligent agents, powerful creative tools, and robust security. Today's digest highlights significant funding for human-like AI, advancements in AI-driven image generation, and a fascinating, ongoing debate in the cybersecurity AI space, punctuated by real-world impact.
TL;DR
- NeoCognition secured $40 million in seed funding to develop AI agents that learn like humans.
- OpenAI's ChatGPT Images 2.0 now boasts web search capabilities and can generate up to eight images simultaneously from a single prompt.
- Sam Altman criticized Anthropic's Mythos cybersecurity model, labeling its marketing as 'fear-based.'
- Mozilla utilized Anthropic's Mythos Preview to identify and rectify 271 vulnerabilities in Firefox 150.
- A peculiar sentence construction has become a strong indicator of AI-generated corporate writing.
AI research lab NeoCognition lands $40M seed to build agents that learn like humans

In a significant show of investor confidence in advanced AI, NeoCognition, an AI research lab, has emerged from stealth with $40 million in seed funding. The round was co-led by Cambium Capital and Walden Catalyst Ventures, with participation from Vista Equity Partners and notable angel investors including Intel CEO Lip-Bu Tan and Databricks co-founder Ion Stoica, and will support the development of self-learning AI agents.
The startup was spun out of research by Ohio State professor Yu Su, who initially hesitated to commercialize his work. Su decided to take the leap last year after realizing that recent advancements in foundation models could enable truly personalized AI agents. He emphasized that current AI agents are often 'generalists' lacking the reliability needed for complex tasks, a gap NeoCognition aims to fill with agents that learn and adapt with human-like proficiency.
Today's agents are generalists. Every time you ask them to do a task, you take a leap of faith.
OpenAI’s updated image generator can now pull information from the web

OpenAI has rolled out a substantial update to its AI-powered image generator, ChatGPT Images 2.0, introducing new 'thinking capabilities' that allow it to search the web for information. This enhancement enables the model to create more 'sophisticated' images, significantly improving its ability to follow instructions, preserve specific details, and generate accurate text within images.
Powered by OpenAI’s new GPT Image 2 model, these advanced features are available to ChatGPT Plus, Pro, Business, and Enterprise subscribers. With thinking enabled, the image generator can now create up to eight images simultaneously, maintaining consistent characters, objects, and styles across scenes. This makes it ideal for generating sequences like manga pages, social media graphics, or comprehensive design plans for multiple rooms. Additionally, ChatGPT Images 2.0 offers higher resolutions up to 2K and various aspect ratios, alongside marked improvements in generating text in non-Latin scripts such as Japanese, Korean, Chinese, Hindi, and Bengali.
The update allows ChatGPT Images 2.0 to create a series of images based on one prompt.
Sam Altman throws shade at Anthropic’s cyber model, Mythos: ‘fear-based marketing’

The rivalry between leading AI developers OpenAI and Anthropic continues to intensify, with OpenAI CEO Sam Altman publicly criticizing Anthropic’s new cybersecurity model, Mythos. During an appearance on the podcast Core Memory, Altman dismissed Anthropic’s claims about Mythos’s power and potential dangers, suggesting that the company is employing 'fear-based marketing' to inflate its product's perceived importance.
Anthropic announced Mythos earlier this month, releasing it exclusively to a select group of enterprise customers. The company justified the limited release by stating that the model was too powerful for public distribution, fearing it could be weaponized by cybercriminals. However, critics, including Altman, argue that this rhetoric may be exaggerated, questioning whether the restrictions truly serve public protection or simply boost Anthropic's market positioning.
OpenAI CEO Sam Altman called out his competitor’s new cybersecurity model, noting that the company was using fear to make its product sound more impressive than it actually is.
Mozilla Used Anthropic’s Mythos to Find and Fix 271 Bugs in Firefox

Despite the ongoing debate sparked by Sam Altman's comments, Anthropic's Mythos Preview has already demonstrated tangible benefits in cybersecurity. Mozilla announced today that its latest browser release, Firefox 150, fixes 271 vulnerabilities identified using early access to Mythos Preview. This real-world application highlights the immediate impact AI tools can have on software security.
Bobby Holley, Firefox's chief technology officer, noted that adapting to the volume of bugs uncovered by new AI tools required significant resources and discipline. However, he stressed the necessity of this effort to ensure user security, given that such powerful capabilities are expected to soon be in the hands of attackers. Holley believes these tools have 'changed things dramatically,' as automated techniques can now cover a comprehensive range of vulnerability-inducing bugs, a task previously requiring extensive manual analysis or less effective automated methods like software fuzzing.
Our belief is that the tools have changed things dramatically, because now we have automated techniques that can cover, as far as we can tell, the full space of vulnerability-inducing bugs.
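Fuzzing, the older automated method Holley contrasts with the newer AI-driven analysis, is simple to sketch: generate random inputs, run them through a target, and record which ones crash it. Here is a minimal illustrative sketch in Python; the `parse_header` target and its bug are hypothetical, invented purely for demonstration:

```python
import random
import string

def parse_header(data: str) -> dict:
    """Hypothetical target: a naive header parser with a latent bug."""
    # Raises ValueError on any input that lacks a ':' separator.
    key, value = data.split(":", 1)
    return {key.strip(): value.strip()}

def fuzz(target, runs: int = 1000, seed: int = 0) -> list:
    """Feed `runs` random printable strings to `target`; return the crashing inputs."""
    rng = random.Random(seed)  # seeded so results are reproducible
    crashes = []
    for _ in range(runs):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 12))
        )
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

print(f"{len(fuzz(parse_header))} of 1000 random inputs crashed the parser")
```

Real fuzzers are far more sophisticated (coverage-guided mutation, corpus minimization), but the contrast Holley draws still holds: random input generation finds bugs it happens to stumble into, while model-driven analysis can reason about whole classes of vulnerability-inducing code.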
It’s not just one thing — it’s another thing

In an amusing observation about the evolving landscape of AI-generated content, a particular sentence construction — 'It’s not just this — it’s that' — has become an undeniable signature of synthetic writing. The phrasing has proliferated to such an extent that its appearance in a text is now almost a definitive indicator of AI authorship, rather than a mere clue.
A recent report by Barron’s highlighted the dramatic increase in this sentence structure in corporate communications. The report didn't merely note its presence; it delved into market intelligence firm AlphaSense's database, scanning corporate news releases, earnings reports, and government filings. The findings suggest that this peculiar phrasing is not just a minor stylistic quirk, but an 'epidemic' within corporate documents, having quadrupled in usage.
This sentence construction (“It’s not just this — it’s that”) has become so common in AI-generated writing that now, it’s no longer just a clue that a piece of writing may be synthetic — it’s almost a guarantee.
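The kind of scan Barron's describes can be roughly approximated with a regular expression. The pattern below is my own loose approximation of the 'not just X — it's Y' construction, not AlphaSense's actual methodology:

```python
import re

# Rough approximation of the trope: "not just/only/merely X <dash> it's Y".
# Accepts em dash, en dash, or hyphen, and straight or curly apostrophes.
TROPE = re.compile(
    r"\bnot\s+(?:just|only|merely)\b"
    r"[^.;\u2014\u2013-]{1,80}"        # X: a short span with no dash or sentence break
    r"[\u2014\u2013-]+\s*"             # the telltale dash
    r"(?:it[\u2019']s|it\s+is|that[\u2019']s|they[\u2019']re)\b",
    re.IGNORECASE,
)

def count_trope(text: str) -> int:
    """Count occurrences of the construction in `text`."""
    return len(TROPE.findall(text))

sample = "It\u2019s not just a clue \u2014 it\u2019s almost a guarantee."
print(count_trope(sample))
```

Counting matches per thousand documents across a dated corpus of filings would reproduce the trend line the report describes, though a production scan would need to handle false positives from legitimate uses of the contrast.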