
AI’s dumb genius problem | Computer Weekly

By Computer Weekly
April 13, 2026


The AI debate right now centres almost entirely on models – which LLM is smarter, whether they’ll be commoditised, whether OpenAI or Anthropic or Google wins the arms race. These are real questions. But they’re not the most important ones. The most important question is what sits between the model and the outcome. And right now, that layer barely exists.

Call it the context engine.

Here’s the problem with a genius in a room. Sam Altman and Dario Amodei have both used some version of this analogy – imagine having a hundred brilliant minds working on your hardest problems. It’s a compelling image. But a genius without context is just a smart person operating in a vacuum. Hand them a legal brief with no background on the client, the jurisdiction, the negotiating history, the personalities involved – and their output is generic at best. The intelligence is real. The usefulness is limited.

What changes everything isn’t adding more geniuses. It’s the briefing before they walk into the room.

That briefing – the situational awareness, the organisational memory, the understanding of how a specific user or company operates in the world – is what a context engine provides. And it’s almost entirely missing from how most people are using AI today. We are essentially handing brilliant minds a task with no background and wondering why the outputs feel impressive but imprecise.

Lessons from Google’s history

Think about how Google evolved. In the early days, the metric everyone tracked was index size – how many websites Google had crawled. More pages meant better search. That was the commodity race, and Google won it. But analysts eventually realised that index size did not give Google a long-term sustainable advantage. That came from the fact that Google knew you. It understood what you were actually looking for in the context of everything else you’d ever searched for. The index was replicable. The user relationship wasn’t.

We are in the index phase of AI right now. Everyone is measuring parameters, benchmarks, reasoning scores. These matter. But they are not where the lasting value will accumulate. The context layer is.

Consider what context unlocks in practice. A law firm’s AI doesn’t just need to know the law – it needs to know this client’s risk tolerance, this partner’s drafting style, twenty years of case history, and how the opposing firm tends to negotiate. A software team’s AI doesn’t just need to write clean code – it needs to understand the architecture decisions made three years ago, the technical debt the team has chosen to live with, and what “done” means in this organisation. The raw intelligence of the underlying model matters far less than whether it knows where it is.
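The shape of such a system can be sketched in a few lines. The following is a minimal, hypothetical illustration – not any vendor’s actual product – of what a context engine does: it accumulates organisational facts over time and assembles only the relevant ones into a briefing that frames the task given to the model. All class and method names here are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class ContextEngine:
    """Hypothetical sketch: accumulates organisational knowledge
    and prepends the relevant slice of it to each model prompt."""

    facts: dict = field(default_factory=dict)

    def record(self, topic: str, fact: str) -> None:
        # Accumulate institutional knowledge under a topic key.
        self.facts.setdefault(topic, []).append(fact)

    def briefing(self, topics: list) -> str:
        # Select only the context relevant to the task at hand.
        lines = []
        for topic in topics:
            for fact in self.facts.get(topic, []):
                lines.append(f"- [{topic}] {fact}")
        return "\n".join(lines)

    def build_prompt(self, task: str, topics: list) -> str:
        # Same task, same model - but now situationally aware.
        return f"Context:\n{self.briefing(topics)}\n\nTask: {task}"


engine = ContextEngine()
engine.record("client", "Risk tolerance: conservative; avoid novel clauses.")
engine.record("style", "Partner prefers short, numbered paragraphs.")
prompt = engine.build_prompt("Draft the indemnity section.", ["client", "style"])
```

The point of the sketch is the asymmetry it makes visible: the model behind `build_prompt` is interchangeable, but the accumulated `facts` store is not – it is exactly the part that does not transfer when an organisation switches tools.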

Here’s why this is also a business story. LLMs, for all their impressiveness, are ultimately replicable. Given enough capital and talent, you can train a competitive model. That’s not a dismissal of what OpenAI, Anthropic, and Google have built – it’s an observation about the nature of the asset. The race between them is real, and the outcome matters. But it’s a race, not a moat.

Why context matters in AI

Context is different. Context requires users and organisations to actively choose to share information – their workflows, their history, their preferences, their institutional knowledge. That act of sharing creates switching costs. Once an organisation’s context lives inside a system, leaving that system means starting over. The context doesn’t transfer. That’s an advantage that compounds over time in a way that model performance alone does not.

This is also why organisational context is more valuable than individual context. An individual user can rebuild their relationship with a new tool relatively quickly. An organisation cannot. The switching cost is institutional – it lives across teams, processes, and years of accumulated data. Whoever captures that first, and earns the trust required to hold it, is sitting on something that looks less like software and more like infrastructure.

The LLM debate will continue. It’s not unimportant. But the next phase of AI value creation won’t be won by whoever builds the smartest model in isolation. It will be won by whoever figures out how to make these models truly situationally aware – equipped not just with what they’ve learned, but with where they are, who they’re serving, and what actually matters in this specific moment.

The context engine is coming. The question is who builds it, and who owns what it learns.

Judah Taub is the founder and managing partner of Hetz Ventures, an Israeli early-stage venture capital firm specialising in cybersecurity, data, and AI infrastructure.


