CISA extends Mitre CVE contract at last moment | Computer Weekly

In a last-minute intervention, the US Cybersecurity and Infrastructure Security Agency (CISA) has extended its contract for the Mitre-operated Common Vulnerabilities and Exposures (CVE) Programme, relied on by security professionals around the world to keep up to date on the latest publicly disclosed security vulnerabilities.

The future of the CVE Programme came into doubt earlier this week when a leaked letter from Mitre’s Yosry Barsoum warned that the contract pathway for the non-profit to run the programme was set to lapse within 24 hours.

Barsoum said that should a break in service occur, the programme would experience multiple impacts including “deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure”.

The revelation caused consternation around the world, with security professionals bracing for massive change in the industry as a result of the removal of what Mitre describes as a “foundational pillar” for the sector.

Agreement to extend the contract under which Mitre oversees the vital CVE Programme was reached late on Tuesday 15 April, but news of this only began to trickle out on Wednesday morning.

A CISA spokesperson said: “The CVE Program is invaluable to the cyber community and a priority of CISA. Last night, CISA executed the option period on the contract to ensure there will be no lapse in critical CVE services. We appreciate our partners’ and stakeholders’ patience.”

CISA additionally confirmed that the contract extension will last for 11 months.

Computer Weekly reached out to Mitre for further comment but the organisation had not yet responded at press time.

The narrowly averted disruption comes at a difficult time for the cyber security community as it works flat out to ward off a vast array of threats from financially motivated and nation-state threat actors.

At the same time, the industry must reckon with the impact of massive cuts being made across the US government by Elon Musk’s Department of Government Efficiency (DOGE). These cuts are now hitting America’s state cyber security apparatus including at the Department of Homeland Security (DHS) and CISA itself, which sits within the DHS.

According to reports, CISA may be looking at a workforce reduction of between one-third and 90%, which would significantly impair the agency’s ability to protect US government bodies and critical infrastructure from cyber threats, and, internationally, its ability to collaborate with partner agencies such as the UK’s National Cyber Security Centre (NCSC).

CISA is also facing a comprehensive review of its activities over the past six years, focusing on instances in which its conduct may have run contrary to the purposes and policies established in Executive Order 14149, signed by President Trump on 20 January and titled Restoring Freedom of Speech and Ending Federal Censorship.

This review comes alongside a deeper probe into former CISA leader Chris Krebs, whose federal security clearance, along with those held at his current employer SentinelOne, was revoked by Trump last week, to the consternation of many.

Krebs was fired from CISA at the end of 2020 after he disputed Trump’s narrative that the presidential election had been rigged in favour of Joe Biden. Krebs and CISA had maintained there was absolutely no evidence of any interference.




CVE Foundation pledges continuity after Mitre funding cut | Computer Weekly

In the wake of the abrupt termination of the Mitre contract to run the CVE Programme, a group of vulnerability experts and members of Mitre’s existing CVE Board have launched a new non-profit with the intention of safeguarding the programme’s future.

The CVE Foundation’s founders want to ensure the continuity, viability and stability of the 25-year-old CVE Programme, which up to today (16 April) has been operated as a US government-funded initiative, with oversight and management provided by Mitre under contract.

Even setting aside the impact of Mitre’s loss of the CVE Programme contract – one of a number of Mitre-held government contracts axed in recent weeks, and one that has already led to layoffs at the DC-area contractor – the CVE Board members say they already had longstanding concerns about the sustainability and neutrality of such a globally relied-upon resource being tied to a single government.

Their concerns were suddenly heightened when a letter from Mitre’s Yosry Barsoum, warning that the CVE Programme was under threat, circulated this week. “CVE, as a cornerstone of the global cyber security ecosystem, is too important to be vulnerable itself,” said Kent Landfield, an officer of the foundation.

“Cyber security professionals around the globe rely on CVE identifiers and data as part of their daily work – from security tools and advisories to threat intelligence and response. Without CVE, defenders are at a massive disadvantage against global cyber threats.”

The founders said that while they had hoped this day would never come, they have spent the past year working diligently in the background on a strategy to transition the CVE system into a dedicated, independent non-profit.

Unlike Mitre – originally a computer research spin-out of the Massachusetts Institute of Technology (MIT) that now operates multiple R&D efforts – the CVE Foundation will be solely dedicated to delivering high-quality vulnerability identification, and to maintaining the integrity and availability of the existing CVE Programme database on behalf of security professionals worldwide.

The foundation says its official launch marks a “major step toward eliminating a single point of failure in the vulnerability management ecosystems” and safeguarding the programme’s reputation as a trusted, community-driven resource.

“For the international cyber security community, this move represents an opportunity to establish governance that reflects the global nature of today’s threat landscape,” the founders said.

Community in shock

Although at the time of writing the CVE Programme remains up and running, with new commits made to its GitHub repository in the past few hours, reaction to the contract’s cancellation has been swift and scathing.

“With 25 years of consistent public funding, the CVE framework is embedded into security programmes, vendor feeds, and risk assessment workflows,” said Tim Grieveson, CSO and executive vice-president at ThingsRecon, an attack surface discovery specialist. “Without it, we risk breaking the common language that keeps security teams aligned to identify and address vulnerabilities effectively.

“Delays in sharing vulnerability data would increase response times and give threat actors the upper hand,” he added. “With regulations like SEC, NIS2, and Dora demanding real-time risk visibility, a lack of understanding of risk exposure and any delayed response could seriously hinder the ability to react effectively.”

To maintain existing levels of resilience in the face of a potential shutdown, it’s important for security leaders to ensure organisations have a clear understanding of their attack surface and their suppliers, said Grieveson.

Added to this, collaboration and information sharing in the security community will become even more essential than it already is.

Chris Burton, head of professional services at Yorkshire-based penetration testing and security services provider Pentest People, said he hoped cooler heads would prevail.

“It’s completely understandable there are concerns about the government pulling funding for the Mitre CVE Programme; it’s a troubling development for the security industry,” he said.

“If the issue is purely financial, crowdfunding could offer a viable path forward, rallying public support for a project many believe in,” added Burton. “If it’s operational, there may be an opportunity for a dedicated community board to step in and lead.

“Either way, this isn’t the end, it’s a chance to rethink and reimagine. Let’s not panic just yet; there are still options on the table, as a global community. I think we should see how this unfolds.”

Next steps for security pros

At a more practical level, Grieveson shared some additional steps for security teams to take right now:

  • Map internal tooling dependencies on CVE feeds and APIs to know what breaks should the database go dark (a minimal sketch follows this list);
  • Identify alternative sources to maintain vulnerability intelligence, focusing on context, business impact and proximity to ensure comprehensive coverage of threats, whether they be current, emerging or historic;
  • Accelerate cross-industry intelligence sharing to proactively leverage tactics, tools and threat actor data.
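
As a minimal illustration of the first step, the Python sketch below polls the public NVD CVE API to confirm a feed is still answering and to count records published in a recent window. The endpoint, parameters and thresholds here are assumptions chosen for illustration, not part of Grieveson’s guidance; teams should substitute whichever feeds their own tooling actually consumes.

```python
import datetime
import json
import urllib.request

# Assumed public endpoint for the NVD CVE API (version 2.0);
# swap in whichever feed your tooling actually depends on.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def check_cve_feed(hours: int = 24) -> None:
    """Confirm the feed responds and report how many CVEs were published recently."""
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(hours=hours)
    # The 2.0 API filters on publication date via pubStartDate/pubEndDate
    params = (
        f"?pubStartDate={start.strftime('%Y-%m-%dT%H:%M:%S.000')}"
        f"&pubEndDate={end.strftime('%Y-%m-%dT%H:%M:%S.000')}"
    )
    try:
        with urllib.request.urlopen(NVD_API + params, timeout=30) as resp:
            data = json.load(resp)
    except Exception as exc:  # network failure, HTTP error or bad JSON
        print(f"Feed check failed ({exc}) - fall back to alternative sources")
        return
    total = data.get("totalResults", 0)
    print(f"Feed OK: {total} CVEs published in the last {hours} hours")
    if total == 0:
        print("Warning: no new records - the pipeline may have gone dark")

if __name__ == "__main__":
    check_cve_feed()
```

Run on a schedule, a check like this turns a silent upstream outage into an explicit alert before downstream scanners and ticketing integrations start working from stale data.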




Interview: Markus Schümmelfeder, CIO, Boehringer Ingelheim | Computer Weekly

Markus Schümmelfeder has spent more than a decade looking for ways to help biopharmaceutical giant Boehringer Ingelheim exploit digital and data. He joined the company in February 2014 as corporate vice-president in IT and became CIO in April 2018.

“It was a natural evolution,” he says. “Over time, you see what can be done as a CIO and have an ambition to make things happen. This job opportunity came around and it was when digitisation began. I saw many possibilities arising that were not there before.”

Schümmelfeder says the opportunity to become CIO was terrific timing: “It was a chance to bring technology into the company, to make more use of data, and evolve the IT organisation from being a service deliverer into a real enabler. My aim for all the years I’ve been with Boehringer is to integrate IT into the business community.”

Now, as the company’s 54,000 employees use more data than ever before across the value chain, including research, manufacturing, marketing and sales, Schümmelfeder’s aim is being realised. He says professionals across the business understand technology is crucial to effective operational processes: “It’s about bringing us close together to make magic happen.”

Establishing the vision

Schümmelfeder says one of his key achievements since becoming CIO is leading the company on a data journey. His vision supported the company’s progress along this pathway.

“I went to the board and said, ‘This is what we should do, what we want to do, what makes sense, and what we perceive will be necessary for the future’,” he says. “We started that process roughly five years ago and everyone knows how important data is today.”

Making the transition to a data-enabled organisation is far from straightforward. Rather than being focused on creating reports, Schümmelfeder says his vision aimed to show people across the organisation how they could exploit information assets effectively. One of the key tenets for success has been standardisation.

“This is a fundamental force, and the team has done good work here,” he says. “Ten years ago, we had between 4,500 and 5,000 systems across the organisation. Today, we have below 1,000. So, we reduced our footprint by 80%, which is a great accomplishment.”

Standardisation has allowed the IT team to deliver another part of Schümmelfeder’s vision – a platform-based approach to digitisation. Rather than investing in point solutions to solve specific business challenges, the platform approach uses cloud-based services to help people “jump start topics” as the business need arises.

The crucial technological foundation for this shift to standardisation has been the cloud, particularly Amazon Web Services (AWS), Microsoft Azure and a range of consolidated enterprise services, such as Red Hat OpenShift, Kubernetes, Atlassian Jira and Confluence, Databricks, and Snowflake. Schümmelfeder says the result is a flexible, scalable IT resource across all business activities. 

“You can create a cloud environment in minutes,” he says. “You can have an automated test environment that is directly attached and ready to use. You can create APIs immediately on the platform. We want people to deliver solutions at a faster pace, rather than creating individual solutions again and again.”

Building a platform

Boehringer recently announced the launch of its One Medicine Platform, powered by the Veeva Development Cloud. The unified platform combines data and processes, enabling Boehringer to streamline its product development. Schümmelfeder says the technology plays a crucial enabling role.

The One Medicine Platform is integrated with Boehringer’s data ecosystem, Dataland, which helps employees make data-driven decisions that boost organisational performance. Dataland has been running since 2022. The ecosystem collates data from across the company and makes it available securely for professionals to run simulations and data analyses.

“In the research and development space for medicine, there was nothing like a solid enterprise platform,” says Schümmelfeder, referring to his company’s relationship with Veeva. “We had about 50, maybe even more, tools that were often not interconnected. If you wanted to replicate data from one service to another, you’d have to download the data, copy and paste, and so on. That approach is tedious.”

The One Medicine Platform allows Boehringer to connect data across functions, optimise trial efficiency around its research sites, and accelerate the delivery of new medicines to treat currently incurable diseases. Schümmelfeder says the Veeva technology gives the business the edge it requires.

“We saw we were slower than our competitors in executing clinical trials. We thought we could be much better. We wanted to look for a new way of executing clinical trials, and we needed to discuss our processes and potentially redefine and change them based on the platform approach,” he says. “We chose Veeva because it was the most capable technology to help us deliver the spirit of a platform. It’s also an evolving technology with good future potential.”

Embracing data innovation

Schümmelfeder says the data platform he’s pioneered is helping Boehringer explore emerging technologies. One key element is Apollo, a specialist approach to artificial intelligence (AI), allowing employees to select from 40 large language models (LLMs) to explore their use cases and exploit data safely.

He says this large number of LLMs allows Boehringer employees to select the best model for a specific use case. Alongside mainstream models like Google Gemini and OpenAI’s ChatGPT, the company uses niche models dedicated to research that can deliver more appropriate answers than general models.

Schümmelfeder says Boehringer does not develop models internally. He says the rapid pace of AI development makes it more sensible to dedicate IT resources to other areas. The company’s staff can use approved models and tools to undertake data-led research in several key areas: “We have a toolbox staff can dip into when they realise an idea or use case.”

He outlines three specific AI-enabled use cases: Genomic Lens, which generates new insights that enable scientists to discover new disease mechanisms in human DNA; the use of algorithms and historical data to identify the right populations for clinical trials quickly and effectively; and Smart Process Development, which applies machine learning and genetic algorithms to boost productivity in biopharmaceutical processes.


Another key area of research and development is assessing the potential power of quantum computing. Schümmelfeder suggests Boehringer has one of the strongest quantum teams in Europe. He recognises that other digital and business leaders might feel the company’s commitment is ahead of the adoption curve.

“And I would say, ‘Yes, you’re right’, but then you need to understand how this technology works. We are helping to make breakthroughs, to bring code to the industry and to discover how we will use quantum. So, we have a strong team that brings a lot to the table to help this area evolve,” he says.

“I’m convinced quantum computing will be a huge gamechanger for the pharma industry once the technology can be used and set into operations. That situation is why I believe you have to be involved in quantum early to understand how it works. You need to bring knowledge into the organisation and be part of making quantum work.”

While Schümmelfeder acknowledges Boehringer isn’t pursuing true quantum research yet, the company has built relationships with other technology specialists, such as Google Research. He says these developments are the foundations for future success in key areas, such as understanding product toxicity: “It’s relatively early, but you can see the investment. I hope we can see the first real use cases by the end of this decade.”

Creating an impact

Schümmelfeder considers the type of data-enabled organisation he’d like to create during the next few years and suggests the good news is that the technological foundations for further transformation are now in place.

“We don’t need a technology revolution, I think we’ve done that,” he says. “We’ve done our homework, and we’ve standardised and harmonised. The next stage is not about more standardisation, it’s more about looking specifically at where we need to be successful. That focus is on research and development, medicine, our end-customers and how to improve the lives of patients and animals. That work is at the core of what we want to do.”

With the technology systems and services in place, Schümmelfeder says he’ll concentrate on ensuring the right culture exists to exploit digitisation. That focus will require a concerted effort to evolve the skills across the organisation. The aim here will be to ensure many people in all parts of the business have the right capabilities.

“When you talk about data, you don’t need 10 people able to do things, you need thousands of people who can execute,” he says. “You need to bring this knowledge to the business. That means business and IT must integrate deeply to make things happen. The IT team has to go to the business community and ask big questions like, ‘What do you need? Tell me the one thing that can make you truly successful?’”

Schümmelfeder says that finding the answers to these questions shouldn’t be straightforward. Sometimes, he expects the search to be uncomfortable. IT can’t sit back – the company’s 2,000 technology professionals must drive the identification of digital solutions to business problems. Line-of-business professionals must also feel comfortable and confident using emerging technologies and data.

He says the company’s Data X Academy plays a crucial role. Boehringer worked with Capgemini to develop this in-house data science training academy. Data X Academy has already trained 4,000 people across IT and the business. Schümmelfeder hopes this number will reach 15,000 people during the next 24 months and allow data-savvy people across the organisation to work together to develop solutions to intractable challenges.

“We want to ask the right questions on the business side and create lighthouse use cases in IT that show people what we can do,” he says. “We can drive change together with the business and create an impact for the organisation, our customers and patients.”




AI chip restrictions limit Nvidia H20 China exports | Computer Weekly

Nvidia is expecting a big hit to its business as reports emerge of the White House imposing restrictions on exports of its H20 GPU (graphics processing unit) to China. The new restrictions appear to have been carried over from the previous administration’s policy of restricting access to AI chips and advanced AI models.

The Framework for Artificial Intelligence Diffusion from the US Bureau of Industry and Security (BIS), published on 15 January 2025, puts in place export administration regulation controls on advanced semiconductors. By imposing controls that allow exports, re-exports and transfers (in-country) of large quantities of advanced computing integrated circuits only to certain destinations and end users, BIS said the export controls could reduce the risk that malicious state and non-state actors gain access to advanced AI models.

At the time, Ned Finkle, vice-president for government affairs at Nvidia, posted a damning indictment of the former US administration’s attempts to curb semiconductor exports. In a blog post, he said: “In its last days in office, the Biden Administration seeks to undermine America’s leadership with a 200+ page regulatory morass, drafted in secret and without proper legislative review. This sweeping overreach would impose bureaucratic control over how America’s leading semiconductors, computers, systems and even software are designed and marketed globally.”

Finkle described the Biden Administration’s approach as “attempting to rig market outcomes and stifle competition”, adding: “The Biden Administration’s new rule threatens to squander America’s hard-won technological advantage”, and said they “would do nothing to enhance US security”.

The new rules were set to come into effect on 15 April, and it appears that the Trump administration is not rescinding the restrictions the Biden administration had put in place. According to a BBC news story, Nvidia has now said that the Trump administration has informed it that a licence will be required to export the H20 chip to China.

In the transcript of the company’s Q4 2025 earnings call, posted on Motley Fool, Nvidia chief financial officer Colette Kress noted that, as a percentage of its total datacentre revenue, datacentre sales in China remained well below levels seen before the onset of export controls. “Absent of any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for datacentre solutions remains very competitive,” she said.

While the company said it would continue to comply with export controls while serving its customers, its share price took a hit as the controls came into effect.

The H20 is a less powerful Nvidia AI accelerator, designed for the Chinese market. According to Antonia Hmaidi, senior analyst at Mercator Institute for China Studies, Nvidia sold a million H20s to Chinese customers in 2024. While the Financial Times recently reported that Chinese rival, Huawei, has been ramping up production of its home-grown AI offering, the Ascend chip, Hmaidi noted that in 2024, it only shipped 200,000 units, which “reveals structural issues in China’s semiconductor industry”.

Hmaidi also noted that Huawei’s software lags behind Nvidia’s, with developers in China reluctant to adopt the chip for training most models.

The export changes affecting the H20 come just a day after the Trump administration announced Nvidia was leading what it described as an “American-made chips boom”.

Nvidia said that within the next four years, it plans to produce up to half a trillion dollars of AI infrastructure in the US through partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL. 

The company said it has started production of its Blackwell chips at TSMC’s chip plants in Phoenix, Arizona. It is also building supercomputer manufacturing plants in Texas, with Foxconn in Houston and with Wistron in Dallas. According to Nvidia, production at both plants is expected to ramp up in the next 12 to 15 months.




Microsoft remains committed to AI in France | Computer Weekly

With a large ecosystem of partners in France in both the public and private sectors, Microsoft already has a big stake in the country. But last May, the company announced it would up the ante with an investment of €4bn to accelerate the adoption of artificial intelligence (AI) and cloud technologies.

The company said that much of the money will go towards developing a datacentre using the latest generation of technology and towards training citizens on AI. Both improved infrastructure and enhanced AI skills figure prominently in France’s National Strategy for AI and the recommendations of the French Commission for Artificial Intelligence, which aim to position France as a leader in both the development and use of AI.

In addition to building a new datacentre near Mulhouse, Microsoft will use some of the funding to expand its datacentre capacity in Paris and Marseilles. The company announced in May 2024 that it plans to have a total of 25,000 GPUs available for AI workloads by the end of 2025. The expanded datacentre capacity should provide a boost across the economy as AI and cloud are being used in all industries in France.  

In her keynote at the event in March, Corine De Bilbao, president of Microsoft France, said that if AI is applied the right way, it can double France’s economic growth between now and 2030. Not only will AI enable faster innovation, but it will also help organisations in the country face the talent shortage and reinvent manufacturing processes. Infrastructure alone is not enough – a skilled population and a healthy ecosystem are also needed. This is why, according to De Bilbao, Microsoft will train one million French people by 2027 and will help 2,500 startups during the same timeframe. 

The recommendations of the French Artificial Intelligence Commission include training in different forms, such as holding ongoing public debates on the economic and societal impacts of AI, adding AI to higher education programmes in many areas of study, and training people on specific AI tools. Microsoft intends to help in these areas and train office workers, so they know how to prompt AI tools to get the best results, and so they understand what happens with their data and how it’s processed. The company will also train developers and make sure companies of all sizes have the skills they need to use Microsoft’s latest tools. 

Microsoft is already involved in the startup community – for example, it’s one of the partners of Station F, which claims to be the world’s largest startup campus. A thousand startups are hosted in Station F, which offers more than 30 programmes to help entrepreneurs.

Philippe Limantour, CTO of Microsoft France, told Computer Weekly: “We have a dedicated programme in Station F called Microsoft GenAI Studio that supports select startups. And we help startups with our technology and by providing training.” 

AI comes with a new set of security threats, but it also delivers some new tools that can be used to protect organisations and individuals. According to Vasu Jakkal, corporate vice-president of Microsoft Security, business and technology leaders are particularly concerned with leakage of sensitive data and indirect prompt injection attacks. Jakkal said in her keynote that all datacentres will be protected with new measures to counter attacks specific to AI – attacks on prompts and models, for example.

Jakkal also spoke about how GenAI can be used to boost cyber security. For example, Microsoft Security Copilot, which was launched last year, helps not only to detect security incidents and respond to them, but also to find the source.

She said during her keynote that Microsoft detected more than 30 billion phishing emails targeting customers between January and December 2024, a volume of attacks that far surpasses what teams can handle manually. She said a brand new set of phishing triage agents in Microsoft Security Copilot can now handle some of the work, freeing teams to focus on more complex cyber threats and take proactive measures.

Scientific research and engineering were also big topics of conversation during the event, with Antoine Petit, CEO of the French National Centre for Scientific Research (CNRS), saying during a panel discussion that CNRS has opened a group called AI for Science and Science for AI. Petit said the centre recognises the importance not only of conducting more research in AI, but also of applying AI to help scientists in other research areas. But he said the technology is still in its infancy, so nobody knows exactly how it will affect science.

Alain Bécoulet, deputy director general of ITER, who was on the same panel, said that scientific organisations need to free researchers from some of the more mundane tasks so they can play their role as creators. AI may offer a way of providing the information that is both necessary and sufficient, so that researchers and engineers can fulfil their roles.  

A topic that permeated all discussions at the event was the ethical use of AI in France. Limantour told Computer Weekly that Microsoft has been focused on responsible AI for a long time. This is not only for reasons of compliance, but it’s also because the company thinks responsible use of AI is the best way to get value out of the technology. “The future is bright for people who are trained to use AI safely,” Limantour said.


