This year AI companies have decided to focus on healthcare, says the author.
Last year, Artificial Intelligence (AI) became a buzzword, with most companies claiming some sort of AI capability. Experts remained concerned about the speed of innovation while society was still trying to develop safeguards around AI.
In 2026, I’ve noticed a new trend: AI companies have decided to focus on healthcare. Most of them used the World Economic Forum 2026 gathering as their launch platform. The Gates Foundation and OpenAI announced the collaborative launch of Horizon 1000, a $50 million initiative designed to advance healthcare in Africa through AI.
Amazon, through its One Medical division, launched an “agentic health AI assistant” that lives inside its primary care app, where it can access members’ complete medical records. The assistant is designed to explain lab results, answer health questions, help decide what kind of care is appropriate, and take concrete actions such as booking appointments or managing medication renewals.
Anthropic launched Claude for Healthcare, targeting both consumers and healthcare providers. The product seeks to provide a connective layer across the fragmented landscape of modern health data. Patients can choose to link electronic health records, lab results and fitness data, with an emphasis on translation and preparation: summarising a medical history, explaining a lab report.
While efforts to modernise healthcare should be welcomed, there is also a need for caution. To understand the need for vigilance, consider what recently happened in Kenya. Kenyan President William Ruto was welcomed with open arms at the White House, where a deal was concluded.
The deal involved a $2.5 billion, five-year health partnership that involves sharing, using, and managing Kenyan health data for disease surveillance.
The government argued that it was a "strategic" move for aid, yet it sparked an intense debate over data privacy. A Kenyan court later suspended the deal's implementation pending further review. The court made a sound decision, one that all African governments, now key targets of AI implementation, would do well to understand.
Data remains a strategic resource for Africa in the AI race. The continent has been lagging behind due to limited AI skills. Data, along with rare minerals, remains the continent's only chance to compete in AI.
Health data is one of the most important resources for enabling local AI companies to develop local solutions. If that data is handed over to a competitor or another country, the risk is huge, extending even to national security.
Kenyans have largely indicated that their concerns revolve around national security and patient confidentiality.
As more AI companies turn their focus to healthcare, governments will have to be vigilant and do more to protect national security and patient confidentiality. That will be a tall order for many, as such offers will often arrive indirectly, through freemium products and donor funding. Citizens and governments will have to be careful about accepting free access to AI tools, particularly for health.
I have no doubt that there are laws that safeguard against the violation of patient data. What is lacking, however, is monitoring of what happens in practice. Countries will have to strengthen their systems for monitoring technology absorbed from tech companies based elsewhere.
In the past, I have observed how social media companies have violated data privacy in the interest of profit. As we enter the AI age, I have no reason to believe that tech companies will act differently.
Wesley Diphoko is a Technology Analyst and Editor-in-Chief of Fast Company (South Africa) magazine.
*** The views expressed here do not necessarily represent those of Independent Media or IOL.
BUSINESS REPORT