In its press release of 8 April 2026 (1), the DINUM officially announced that it is leaving Windows and adopting Linux on its workstations. This decision is part of an overall strategy to reduce dependence on non-European digital technologies, steered by the Prime Minister, the Minister for Public Action and Accounts, and the Minister Delegate for Artificial Intelligence and Digital Affairs. The goal is to strengthen French and European digital sovereignty, notably in the face of geopolitical tensions and the end of Windows 10 support in October 2025.
Admittedly, even though it is at first limited to the DINUM, this decision to abandon Windows/Microsoft is at a very early stage. The technical solutions are not yet fully defined, finished, mature, or validated.
The adherences (existing technological ties) between the State's information system(s) and external or non-FOSS programs have not been mapped, let alone the migrations, and even less a timetable or budget for the whole effort.
David Amiel, Minister for Public Action and Accounts: "[…] The transition is under way: our ministries, our operators and our industrial partners are committing today to an unprecedented effort to map our dependencies […]". (1)
That is perhaps why, in a proactive spirit, or one inspired by Agile methods, the DINUM is serving as a test bench, a prototype.
Moreover, the European alternatives to Microsoft's office suite and its application ecosystem are not unique; they are at various stages of maturity or completion, and sometimes in legal conflict, as in the OnlyOffice versus Euro-Office episode reported on this site. On that subject, the OpenBuro project appeared in 2026 (see below).
The DINUM will coordinate an interministerial plan to reduce extra-European dependencies. Each ministry (operators included) will be required to formalize its own plan by autumn, covering the following areas: workstations, collaborative tools, antivirus, artificial intelligence, databases, virtualization, and network equipment. These action plans will give the digital industry visibility into the State's needs; the industry has major assets that should be leveraged through public procurement. (1)
In short, this is not the actual start of a global migration, but perhaps the beginning of the continuation of the start of the preliminaries (e.g. 2022: "Le poste de travail Linux : un objectif gouvernemental ?"), with a declaration of intent, a first goal of migrating the DINUM's workstations, and the beginnings of organization and momentum. In a constrained budgetary context, will sovereign FOSS be an asset, or a casualty of the next presidential campaign, which could bury the subject?
The press release explicitly says that plans remain to be drawn up by autumn (2026, I hope ;-) ) to reduce extra-European dependencies. (1)
To say the least, the wording is imprecise, with few dates and little budgeting.
We shall see how things actually evolve. For me, the actors and decision-makers behind the Gendarmerie's migration almost 20 years ago (see for example the LinuxFr article "Le poste de travail du gendarme sous GNU/Linux Ubuntu") remain my heroic references for open source in the professional environment of the State, more so than the apparently aborted episode of the French deputies moving to Linux in 2007.
I will follow this with considerable hope. The citizen in me wishes this project every success, with its overall objectives: among other things, more independence in our political path, promoting freedom of information and IT security, perhaps recurring savings in public spending on licences and services, and finally a wider spread of FOSS throughout France. On that note, I would like the French State to fund FOSS substantially, at least for the software that ANSSI recommends in its interministerial catalogue of free software (Socle Interministériel des logiciels libres).
The DINUM press release mentions the 2026 launch of OpenBuro and Open-Interop. OpenBuro aims to be the open European standard competing with Microsoft 365 and Google Workspace through orchestration between applications. OpenBuro was launched at FOSDEM 2026 in Brussels by the DINUM and LINAGORA (Twake Workplace). It is meant to be an open standard that unifies applications, open source or not, into a genuine platform, an alternative to Microsoft 365, without lock-in and without a brutal replacement of existing tools.
OpenInterop is an open-source interoperability software component from the Software for Health Foundation, a non-profit organization that promotes open-source software for health care, with a particular focus on low- and middle-income countries and on building local skills. It also presents itself as a structure that wants to make digital health solutions sustainable, affordable, and independent of third-party vendors.
The Direction interministérielle du Numérique (DINUM) is a directorate of the French public administration. A service of the Prime Minister, it is placed under the authority of the Minister for Transformation and the Civil Service. Its mission is to define the State's digital strategy and steer its implementation. It is regarded as the State's IT department.
The Direction des achats de l'État (DAE) is a directorate of the economic and financial ministries. It defines and implements the State's procurement policy, with the exception of defence and security purchases. Managing interministerial contracts, advising ministries, and professionalizing buyers are among its other missions.
The Direction générale des Entreprises (DGE) is a directorate of the French public administration, attached to the Ministry of the Economy and Finance. It designs and implements the public policies that foster business development.
The Agence nationale de la sécurité des systèmes d'information (ANSSI) is a service attached to the General Secretariat for Defence and National Security, the authority that assists the Prime Minister in exercising his responsibilities in matters of defence and national security, and it is in charge of the security of national information systems. ANSSI is the national authority for cybersecurity and cyberdefence in France; its missions are to defend, understand, share, support, and regulate.
Unfortunately, no one directly represents citizens in this process.
The DINUM, which has about 250 workstations, will be the first to migrate to Linux. Each ministry, along with its operators, must formalize a plan to reduce extra-European dependencies by autumn 2026. The migration also covers collaborative tools, with the deployment of the Suite Numérique (Tchap, Visio, sovereign e-mail, file storage, etc.), already tested by 40,000 agents.
This migration is presented as a project of unprecedented scale, with significant technical and organizational obstacles. The DINUM will coordinate an interministerial plan, in collaboration with ANSSI, the DGE, and the DAE, to identify dependencies and define sovereign solutions. Digital industry meetings are planned for June 2026 to forge public-private alliances around European sovereignty.
The DINUM has chosen the NixOS distribution to equip its 250 workstations, because it is a distribution that can be deployed entirely from scripts (so every workstation is identical). In a nod to the world of Astérix the Gaul, the components are named Sécurix and Bureautix.
The operating system for the DINUM, Securix on GitHub, is therefore a NixOS modified to replace classic password authentication with FIDO2 hardware keys, following the recommendations on secure administration of information systems from ANSSI, the French national cybersecurity agency.
In the demonstration examples of the Bureautix configuration on GitHub, the list of preinstalled software includes three office suites: LibreOffice, OnlyOffice, and WPS Office. No doubt this is to have several options for opening Microsoft Office 365 documents, which remains a delicate and sometimes disappointing operation.
Let us hope that one day, if Securix and Bureautix are adopted and rolled out widely, someone will consider moving them off the GitHub servers, under the thumb of Microsoft's great Satya, if that is where they are developed, to a sovereign forge like those in the Netherlands or Germany.
Only four weeks to go before the annual conference of the OW2 open source community, on 2 and 3 June 2026 in Paris-Châtillon!

For this edition, the association is putting the emphasis on European digital sovereignty. In a context where the European Union is strengthening its autonomy in secure digital technologies, resources, and services, open source and open models stand out as essential levers of technological independence. Through some thirty high-level talks, OW2con will explore the strategic role of these approaches in building a sovereign digital ecosystem.
Conference highlights include:
The entire conference is held in English. The agenda includes various opportunities for discussion and networking during the breaks, at the "OW2 Best Project Awards" ceremony, and at a cocktail reception at the end of the first day.
Thanks to the sponsors' support, access to the conference is free, but registration is mandatory. If you have to cancel, please let us know.
Every year since 2011, Code Lutin has provided financial support to initiatives promoting the values of Free Software. Long known as the "Mécénat Code Lutin", we have decided to join forces with the Copie Publique initiative so we can change the world together.
Past beneficiaries include Panoramax, Lemmy, HackInScience, PeerTube, YunoHost, Interhop… and so many others! You will find the full list on the Copie Publique website.
How does it work at Code Lutin?
This year again, we have decided to open the call for applications to the public. If you have a project or an organization whose purpose matches the themes listed below, do not hesitate to apply.
And true to our values, we operate democratically, designating the beneficiaries through a vote on the principle of "one person, one vote", in which all employees can take part.
Tell me more!
Applying is quick and easy: just fill in the form available at the following link https://framaforms.org/appel-a-projets-copie-publique-2026-de-code-lutin-1772533371
You have until midnight on 17 May to apply! Don't hesitate.
This online press review is part of the monitoring work carried out by April as part of its mission to defend and promote Free Software. The positions expressed in the articles are those of their authors and do not necessarily reflect April's.
✍ Pierric Marissal, Thursday 30 April 2026.
To escape our dependence on US digital technology, replacing a Google with a European equivalent is doomed to fail. We need to think up other, non-predatory models, along the lines of Free Software, explains Magali Garnero, president of April (the association for the promotion of Free Software) and a member of Framasoft.
Also:
✍ Laurent Tessier, Monday 27 April 2026.
While the Gafam's tools are very present on the computers of pupils and their teachers, free-software solutions are being developed within the national education system. A few examples.
✍ Thierry Noisette, Monday 27 April 2026.
Seven associations, including April and Que Choisir Ensemble, staged a symbolic funeral outside the headquarters of Microsoft France. They denounce the forced waste caused by the switch to Windows 11, an OS with which many computers are not compatible.
Also:
As you probably know, this year marks the 35th anniversary of GNUstep, which is both a software framework for developing portable Objective-C applications for Windows, macOS, and GNU/Linux, and a runtime environment for those same applications.
Several desktop projects compatible with GNUstep have existed for some years: after the defunct Simply-GNUstep and Étoilé, there are the active GSDE, developed by Ondrej Florian, and the more ambitious NEXTSPACE by Sergii Stoian, which aims to faithfully reproduce the OPENSTEP look and feel on BSD or GNU/Linux. More recently, in a style closer to macOS, there are also the promising Gershwin (for Xorg) and Ambrosia (for Wayland) desktops, developed by James Carthew.
The Agnostep desktop offers its BETA 2.0.0 release, in a more classic style with NeXT-like vertical menus, combining Window Maker and GWorkspace with the classic GNUstep runtime.
It nevertheless offers a modern theme inspired by the icon set of the Papirus project. Although based on a Debian Lite distribution, it does not provide packages but uses an installation principle close to the BSD ports. A wizard facilitates both the initial installation and the addition of extra applications, providing the most recent versions of the GNUstep community's applications, compiled from source. Indeed, unlike other projects that sometimes diverge so far from the original sources that their changes can no longer be merged back upstream, Agnostep's philosophy is to engage patiently with the developer community so that reported problems and improvements benefit everyone.
Moreover, having fixed some of the previous version's problems, it is more stable. Besides well-known applications of the GNUstep ecosystem, such as GNUMail, SimpleAgenda, etc., it also offers a new collection of original GNUstep applications, created for the occasion to provide a more coherent user experience:
Media, where removable disks are mounted: a companion to wmudmount and udisks2 that avoids having to display the GWorkspace desktop.

Starting with this version, help manuals (.help format) ship with each relevant application, thanks to recent improvements to the HelpViewer application. Another example of the fruitful exchanges with the community.

Agnostep was initially developed on a Raspberry Pi 500, but its code allows it to be installed on any computer capable of running the Debian GNU/Linux distribution: hence its name. "Agnostep" is a portmanteau of "agnostic" and "GNUstep".
It seems like Ubuntu cannot catch a break.
Its entire web infrastructure was under a sustained DDoS attack for five days. That attack now seems to be over, but the misery is not.
A few hours ago, a (now deleted) tweet from Ubuntu's official Twitter account announced the availability of Ubuntu's newest AI agent.
At first glance, it looked legit; you had to dig deeper to see the cracks.

The tweet looks legit, right? At the very least, it plays on human psychology.
It talks about AI, which ties in with Ubuntu's recent AI moves. That alone could trick many people into believing this is a legitimate next step in the AI direction.
The tweet claimed the agent was built on Solana, and the Solana account was tagged. Solana is a legitimate open-source blockchain platform for digital transactions and decentralized applications (read: crypto payments).
That is why the next line drops buzzwords like "blockchain" and "decentralized". Blockchain also evokes crypto, so this served as a build-up for the crypto pitch that would come later.
The so-called agent is called Numbat, and the main image shows the numbat animal with orange as its primary color. "Numbat" is also part of Ubuntu 24.04's codename, Noble Numbat.
And then the displayed URL is ai-ubuntu.com, which resembles ai.ubuntu.com. No ai subdomain actually exists on ubuntu.com, but the resemblance is enough to trick unsuspecting people.
Mind that it was not a single tweet; it was a thread (a series of nested tweets) and the replies were closed. So even if someone discovered the scam, they wouldn't have been able to alert others in the replies.
So: fake AI branding, Ubuntu's Numbat name, Solana tags, blockchain buzzwords, and a near-identical URL, all quietly building false trust and guiding unsuspecting users, step by step, into a crypto scam before they realize the deception.
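The near-identical URL is the linchpin of the trick: ai-ubuntu.com and ai.ubuntu.com look alike, but only the latter would be a genuine subdomain of ubuntu.com. As a purely illustrative sketch (the helper name is ours, not anything the scammers or Canonical use), here is the distinction a naive suffix check would miss:

```python
def belongs_to(host: str, base: str = "ubuntu.com") -> bool:
    """Return True only if `host` is `base` itself or a real subdomain of it.

    A naive check like host.endswith(base) would also accept
    "ai-ubuntu.com", which is exactly the confusion the scam relied on.
    """
    return host == base or host.endswith("." + base)

print(belongs_to("ai.ubuntu.com"))   # True: a genuine subdomain of ubuntu.com
print(belongs_to("ai-ubuntu.com"))   # False: a completely separate domain
```

Real-world tooling normally consults the Public Suffix List rather than hand-rolled string checks, but the principle is the same: what matters is the registrable domain, not the visual resemblance.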
The next step of deception came when the link was clicked.
Like most briefly compromised accounts, this tweet tried to lure people into a crypto scam. That was not immediately evident unless you clicked the given URL. And boy, does that page look like a typical Canonical webpage.

It is not impossible to be fooled by the clever webpage if you are not paying attention. Your guard would have been down, because you clicked a link shared by the official Ubuntu account.

The rest of the page had links to the actual Ubuntu project, making it look even more legit.
It was only when you clicked the "Check eligibility" or "Explore Ubuntu AI" buttons that the deception became evident: the site asked you to connect your crypto wallet.

Why would you do that? Because just before the buttons, there is a text that says:
Early ecosystem participants may qualify for future $UM allocations. Snapshot approaching.
This compromised tweet just adds to the pile of misery Canonical has been suffering of late, and it did not happen in isolation.
In case you didn't know, Ubuntu was suffering from a large scale DDoS attack. Ubuntu's websites went down for about five days last week but they seem to be back now.
Starting April 30, Canonical's web services faced what the company described as a "sustained, cross-border" attack. The ubuntu.com website, Snap store, Launchpad, and several other Canonical-owned services went offline or became unreliable.
The attack lasted until around May 5, when services were gradually restored. At the time of writing this, Canonical's official status page shows everything fully operational. Let's hope it stays that way.
Note that DDoS attacks make a website unavailable by flooding the server with traffic; they do not compromise the servers themselves. So your Ubuntu installation, package updates (APT repositories are mirrored across the world and kept working), ISO downloads, and the Ubuntu operating system itself were not impacted. Your system was never at risk. Although, if you had trouble running snap install commands or pulling from a PPA last week, you now know why.
Canonical has not released a detailed post-incident report yet. A Pro-Iran hacker group called 313 reportedly claimed responsibility, but this has not been confirmed by Canonical.
The hacker group 313 has announced that it has ended the DDoS attacks. It has not said anything about the compromised tweet.

Now, ai-ubuntu.com was registered with a Hong Kong based registrar, but that doesn't mean the attackers were based in Hong Kong.

One thing to note here is that many organizations, as well as individuals, often use third-party tools to manage and schedule their tweets. It is possible that the compromise came from such a third-party Twitter tool. It could also have been a human slip-up: the social media manager's account might have been compromised.
It is really up to Canonical to investigate and find out the root cause. We can only make guesses.
The Google Home Mini launched in 2017 as Google's smallest, cheapest smart speaker. Millions were sold, handed out, and given away as promotional gifts.
Many of them still work, but being in the last phase of its lifecycle means that while the device still functions for basic tasks, it offers no customizability or local processing capabilities.
The hardware was fine for the time but has become less relevant in Google's lineup over time, with the Nest Mini, its successor, also discontinued. And more recently, there's been talk of new Gemini-powered smart speakers.
But what if you could bring your Home Mini (1st Generation) device up to 2026 standards with local processing by paying only $85?
Two chips do the heavy lifting on this board. You get an Espressif ESP32-S3 as the main processor, paired with an XMOS XU316 chip dedicated entirely to audio. The Espressif unit brings 8 MB of PSRAM and 16 MB of flash to the table, while the XMOS one carries 4 MB of its own.
The ESP32-S3 covers Wi-Fi, Bluetooth, and wake word detection via microWakeWord, with none of the voice data leaving your device. Audio cleanup falls to the XU316, which runs through two on-board microphones to scrub out noise and echo before anything gets processed.
And the Home Mini's original speaker still works, which can be plugged back in via the included FPC cable.
For software, ESPHome is already preinstalled, ready to work with Home Assistant's Assist, Music Assistant, and Snapcast. A cloud LLM can also be dropped in as the conversation agent if you want one, but the whole thing runs fine without it.

Plus, the mute button on the device makes a physical disconnection at the hardware level, as on the original Home Mini. You will likewise find four SK6812 RGB LEDs sitting in the same positions, acting as status indicators.
Here are the full specs for you to go through:
At $85, the MiciMike board is available on Crowd Supply, with orders estimated to ship around October 1, 2026. US shipping is free, but international buyers must pay an additional $12.
The company behind it is the Ireland-based MiciMike ReV Devices, led by Imre László, who has put up the schematics, PCB design files, and the Bill of Materials on GitHub. The boards themselves are manufactured by Elecrow, a Shenzhen-based outfit behind a range of DIY and maker-focused hardware that we have covered a fair bit.
Before you go, know that there are plans for a drop-in replacement PCB for the Nest Mini. You can read about it on the official website.
MiciMike Home Mini PCB (Crowd Supply)

👉 Related project you can explore: AsteroidOS is giving new life to old smartwatches.
Before we dive into the topic at hand, you should know that Euro-Office is a new European productivity project by Nextcloud and IONOS, which was forked from ONLYOFFICE.
It is a self-hosted, web-based office suite built for organizations and governments that want collaborative document editing on their own infrastructure. A big part of it is to move away from an office suite with ties to Russia, which has triggered concerns over digital sovereignty.
Following that, The Document Foundation (TDF), the nonprofit behind LibreOffice, put forward a question, asking what document format this suite would use as its native format.
They have received no reply and have put out a thank-you post to ODF contributors while taking a dig at Euro-Office's silence.
Toward the end of March, TDF published an open letter to European citizens arguing that digital sovereignty is not as simple as switching office software vendors. Real sovereignty, TDF said, requires open document formats, open fonts, and continuity of expertise, none of which come automatically with a vendor switch.
Then came the issue of OOXML versus ODF. OOXML, the format used by Microsoft Office, is designed and controlled entirely by Microsoft. Any office suite that defaults to OOXML compatibility is still structurally dependent on decisions made in the U.S., regardless of where it is hosted.
ODF, the Open Document Format, is what TDF wants Euro-Office to commit to instead. It is an ISO standard, developed openly without a single company controlling it.
They also noted that Euro-Office's launch press release made no mention of ODF as a native format and asked publicly whether it would be the default for documents created and shared between European public bodies.
Euro-Office's GitHub does list ODF formats alongside DOCX, PPTX, and XLSX, so it's not like they've excluded open formats entirely. But their FAQ frames the whole thing around "great MS compatibility," which is a problem.
Supporting a format and making it your native default are two different things. The distinction is relevant for any European institution that actually wants to break the dependency on Microsoft rather than just move it to a different server rack.
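The native-format question is concrete rather than abstract: both ODF and OOXML documents are ZIP containers, and an ODF package declares its type in a `mimetype` entry, while an OOXML package carries a `[Content_Types].xml` part. A small, illustrative Python sketch (the helper name is ours, not from any of the projects discussed) shows how the two kinds of package announce themselves:

```python
import io
import zipfile

def sniff_office_format(path_or_buf):
    """Rough format sniff, for illustration only: ODF packages contain a
    'mimetype' entry; OOXML packages contain '[Content_Types].xml'."""
    with zipfile.ZipFile(path_or_buf) as z:
        names = z.namelist()
        if "mimetype" in names:
            # e.g. application/vnd.oasis.opendocument.text for an .odt file
            return z.read("mimetype").decode("ascii")
        if "[Content_Types].xml" in names:
            return "OOXML package (docx/xlsx/pptx)"
    return "unknown"

# Build a minimal ODF-like container in memory to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/vnd.oasis.opendocument.text")
    z.writestr("content.xml", "<office:document-content/>")
buf.seek(0)
print(sniff_office_format(buf))  # application/vnd.oasis.opendocument.text
```

TDF's point maps directly onto this: "supports ODF" means a suite can read packages of the first kind, while "native format" determines which kind it writes by default.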
Whether Euro-Office addresses this directly or keeps quiet, TDF's question is now out there. And given that Germany has already mandated ODF by law, it's not a question that's going away anytime soon.
A logic flaw that sat quietly in the Linux kernel since 2017 has finally been found and disclosed. For a brief window, it let any unprivileged local user on a Linux system escalate to root with a script smaller than most config files.
The flaw is in a kernel subsystem that lets regular programs tap into built-in cryptographic functions. By feeding it file data in a specific way, an attacker can get the kernel to quietly overwrite 4 bytes of any file's in-memory copy.
The actual file on disk stays intact the whole time, so any tool checking file integrity will see nothing wrong. The exploit is just a 732-byte Python script that doesn't require any additional dependencies or compilation.
The vulnerability is tracked as CVE-2026-31431, goes by the name "Copy Fail," and was discovered by researchers at Theori using their AI security research tool, Xint Code.
The security researchers tested it on Ubuntu 24.04 LTS, Amazon Linux 2023, RHEL 10.1, and SUSE 16, getting root on all four with the exact same script each time.
They had reported the issue to the Linux kernel security team on March 23, received acknowledgment the next day, and had a patch proposed and reviewed by March 25. The fix was committed to mainline on April 1, with the CVE assigned on April 22, and public disclosure following on April 29 (linked earlier).

According to the Copy Fail website hosted by Theori, the risk level varies quite a bit depending on how you run Linux.
At the top are multi-tenant Linux hosts, Kubernetes and container clusters, CI runners and build farms, and cloud SaaS environments running user-supplied code.
These all get a "High" risk rating. Containers and cloud workloads are especially exposed because the Linux page cache, the part of memory this exploit corrupts, is shared across the entire host, container boundaries included.
A compromised container can take down the whole node, and a bad pull request run on a shared CI runner could hand an attacker root on that machine.
Standard Linux servers where only the team running it has shell access get a "Medium" rating, whereas personal desktops and laptops are at the bottom with a "Lower" risk rating.
Copy Fail needs local code execution to work, so it won't get anyone in remotely by itself. If malware is already running on your machine, this could be used to escalate to root, but that's a bigger problem either way.
To fix this, patching the kernel is the way. Most major distros have updates out or on the way. If patching isn't immediately possible, Theori recommends blacklisting the algif_aead kernel module as a stopgap:
echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif-aead.conf
rmmod algif_aead 2>/dev/null

As of writing, Microsoft has noted that exploitation remained "limited and primarily observed in proof-of-concept testing," so there's no confirmed mass-scale campaign just yet.
That said, CISA, the US cybersecurity agency, has added Copy Fail to its Known Exploited Vulnerabilities (KEV) catalog, ordering US federal agencies to patch their Linux systems by May 15.
It also urged other organizations to treat it as a priority regardless of whether the federal deadline applies to them.
Suggested Read 📖: VS Code Was Adding Copilot as a Git Co-Author Without Telling Anyone
Back in November 2025, Jan Vlug, a software engineer who writes for the Dutch government's developer portal, put out a detailed blog recommending which Git forge the Netherlands should adopt for its governmental source code hosting needs.
His post came at a time when the Ministry of the Interior (BZK) was already setting up a dedicated Git instance, and the platform decision was still open.
Currently, the Dutch government's code is spread across GitHub and GitLab, neither of which is under government oversight.
GitHub got ruled out first because it's proprietary software, which directly conflicts with the government's own policy of preferring open source when options are equally suitable.
GitLab made it further in the evaluation but didn't survive it either. The issue was its open-core model, where the Community Edition is genuinely free software but the Enterprise Edition is not.

Forgejo came out on top due to its fully free and open source nature. Licensed under GPLv3+ and governed by Codeberg e.V., a democratic nonprofit, it has no enterprise tier, proprietary upsell, or vendor lock-in problems.
On April 24, 2026, code.overheid.nl had its soft launch, with developer advocate Tom Ootes writing about it on developer.overheid.nl. He framed it as a collective project to build something together rather than ship something finished.
The platform is a self-hosted Forgejo instance, running on Dutch government infrastructure managed by SSC-ICT (DAWO). It's free for all government organizations and is built around the following goals: open source development with proper Git tooling, including pull requests, issue tracking, and code reviews; government-wide collaboration to reduce duplicate development across agencies; and sovereignty through full control over the hosting environment.
As mentioned earlier, this initiative is still in the pilot phase, with the rollout being kept deliberately gradual.
Not every government organization can sign up yet, and the idea is to build it alongside the developers who will actually use it, with early participants encouraged to file issues and open pull requests on the platform itself.


I had to translate the repos page to see what was in there.
The platform is live and already hosts some content. The most notable presence is Kiesraad, the Dutch Electoral Council, which has pushed several election-related repositories including Abacus, the software used for vote counting and seat distribution, and e-KS, an electronic candidate nomination system.
The Ministry of the Interior (BZK) has the DAWO project (their digital autonomous workplace initiative) on there, along with a DigiD source code release published under a freedom of information ruling.
On the organization side, the list of who has joined since the April 24 soft launch is telling. Multiple national ministries are already on the platform: Finance, Foreign Affairs, Agriculture, and Interior.
Several major municipalities have also signed up, including The Hague, Utrecht, Leiden, and Arnhem. For a platform still in pilot with no formal launch announcement, that's a fairly significant roster.
Suggested Read 📖: A Mobile Dev Hackathon is Coming to the Netherlands
On 6 May 2026 at 22:09, Bob Weinand bobwei9@hotmail.com wrote:
Volker and I drafted an RFC:
https://wiki.php.net/rfc/scope-functions
Please consider it and share your feedback.
I hope it will alleviate pain around one of the most common forms of Closure usage, namely "execute this now as part of the called function", which currently can require a lot of "use ($variables)".
For me the primary use case of use ($capturing) was always "I need this function later and want to explicitly document what escapes my function". This, however, required the straightforward immediate usage of Closures to also document every single usage of a variable, which is really not that beneficial at all.
Thus the scope functions as proposed will be able to fill that gap in the future.
Thank you,
Bob
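The "use ($variables)" burden Bob describes is easy to demonstrate in current PHP: anonymous functions capture nothing implicitly, so every outer variable must be listed explicitly, while arrow functions auto-capture but are limited to a single expression. A minimal runnable illustration (variable names are ours, not from the RFC):

```php
<?php
// Classic closure: each captured variable must appear in a use () clause.
$prefix = 'Hello';
$names  = ['Ada', 'Linus'];

$greetings = array_map(function (string $name) use ($prefix): string {
    return "$prefix, $name!";
}, $names);

// Arrow function: auto-captures $prefix, but only allows one expression,
// so multi-statement bodies still force the use () form above.
$greetings2 = array_map(fn (string $name): string => "$prefix, $name!", $names);
```

The scope-functions RFC targets exactly the multi-statement case, where neither form is comfortable today.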
Hi,
This is nice. As I understand it, this RFC could resolve problems that the Context Managers RFC tries to resolve in a simpler and more flexible way. (And it resolves other problems too, of course.)
Taking the first example from the Context Manager RFC:
using (file_for_write('file.txt') => $fp) {
    foreach ($someThing as $value) {
        fwrite($fp, serialize($value));
    }
}
// implementable as:
function file_for_write(string $filename): ContextManager {
    return new class($filename) implements ContextManager {
        function __construct(private readonly string $filename) { }

        private $fp;

        function enterContext() {
            $this->fp = @fopen($this->filename, 'w');
            if (!$this->fp) {
                throw new \RuntimeException('Couldn’t open file');
            }
            return $this->fp;
        }

        function exitContext(?\Throwable $e = null): ?\Throwable {
            @fclose($this->fp);
            return $e;
        }
    };
}
This can be rewritten as:
file_for_write('file.txt', fn($fp) {
    foreach ($someThing as $value) {
        fwrite($fp, serialize($value));
    }
});
// implementable as (which is simpler: one function instead of a whole class):
function file_for_write(string $filename, callable $do_write): void {
    $fp = @fopen($filename, 'w');
    if (!$fp) {
        throw new \RuntimeException('Couldn’t open file');
    }
    try {
        $do_write($fp);
    } finally {
        @fclose($fp);
    }
}
For those of us who abhor exceptions for recoverable failures, there is even more. With this RFC, one can easily return true/false (or any other signal) for success/failure, while Context Manager leans strongly toward exceptions (although, of course, it remains possible to assign the outcome to a variable and to exit the context with break or goto):
$ok = file_for_write('file.txt', fn($fp) {
    foreach ($someThing as $value) {
        if (something_is_wrong_with($value)) {
            return false;
        }
        fwrite($fp, serialize($value));
    }
    return true;
});
// implementable as (which is more flexible: exceptions are not the only type of signal):
#[\NoDiscard]
function file_for_write(string $filename, callable $do_write): bool {
    $fp = @fopen($filename, 'w');
    if (!$fp) {
        return false;
    }
    try {
        return $do_write($fp);
    } finally {
        @fclose($fp);
    }
}
—Claude
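The bool-returning helper in the last example is already valid PHP once the missing comma is in place; only the fn($fp) { ... } block syntax at the call site is new. A runnable sketch of the same pattern on current PHP, using an explicit use () clause at the call site (the file path and sample data are illustrative):

```php
<?php
// The bool-returning variant from the message, runnable today.
function file_for_write(string $filename, callable $do_write): bool
{
    $fp = @fopen($filename, 'w');
    if (!$fp) {
        return false;
    }
    try {
        return $do_write($fp);
    } finally {
        @fclose($fp); // runs whether the callback returns or throws
    }
}

$someThing = ['a', 'b'];
$path = sys_get_temp_dir() . '/scope-fn-demo.txt';

// Today's syntax: the closure must list its captures in use ().
$ok = file_for_write($path, function ($fp) use ($someThing): bool {
    foreach ($someThing as $value) {
        fwrite($fp, serialize($value));
    }
    return true;
});
```

The try/finally guarantees the handle is closed on every exit path, which is the same cleanup guarantee the Context Manager RFC provides via exitContext().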
On 4 May 2026 21:24:39 BST, Daniel Scherzer daniel.e.scherzer@gmail.com wrote:

Hi internals,
I'd like to start the discussion for a new RFC about adding a new method,
ReflectionAttribute::getCurrent(), to access the current reflection target
of an attribute.

"a new static method, ReflectionAttribute::getCurrent(), that, when called from an attribute constructor, returns a reflection object corresponding to what the attribute was applied to."

This sounds like an arbitrary new rule for just this functionality. I don't think we should have special rules for a single static method call.

I believe it's useful to have something like this, but I'm not in favour of this approach. Would it not be possible for this to be a normal (dynamic) method on the ReflectionAttribute object?

cheers
Derick
In order to reduce the scope of the weird new method, I have updated the
RFC to split it up:
These are expected to be used together in the constructors of attributes,
e.g. ReflectionAttribute::getCurrent()->getReflectionTarget(), but the
normal getReflectionTarget() method is also useful and usable elsewhere.
-Daniel
Hey Larry,
On 19.01.2026 at 16:58, Larry Garfield larry@garfieldtech.com wrote:
As noted in Future Scope, we can add function-based context managers as well based on generators. At the moment we're not convinced it's necessary, but it's a straightforward add-on if we find that always writing a class for a context manager is too cumbersome.
The issue with punting this behavior to user-space is that a library cannot provide this sort of functionality in a clean way.
In an ideal world, if we had auto-capturing long-closures, then I would agree this is largely unnecessary and could instead be implemented like so (to reuse the examples from the RFC):
$conn->inTransaction(function () {
    // SQL stuff.
});

$locker->lock('file.txt', function () {
    // File stuff.
});

$scope->inScope(function () {
    $scope->spawn(yadda yadda);
});

$errorHandlerScope->run(fn() => null, function () {
    // Do stuff here with no error handling.
});

And so forth. If we had auto-capturing closures, I would probably argue that is a better approach.
However, auto-capturing closures have been rejected several times, and I have no confidence that we will ever get them. (Whether you approve or disapprove of that is your personal opinion.) The current alternative involves using lots of
use clauses, which is needlessly clunky to the point that folks try to avoid it.

I literally have code like this in a project right now, and I've had to do this many times:
public function parseFolder(PhysicalPath $physicalPath, LogicalPath $logicalPath, array $mounts): bool
{
    return $this->cache->inTransaction(function () use ($physicalPath, $logicalPath, $mounts) {
        // Lots of SQL updates here.
    });
}

That's just gross. :-) This is exactly the example that's been used in the past to argue in favor of auto-capturing closures, but it's never been successful.
I fully agree that this is gross. I have just created a comprehensive RFC https://wiki.php.net/rfc/scope-functions to address this underlying problem you describe.
It does address quite a few of the main issues people had with trivial auto-capturing Closures which would simply clone the symbol table.
I personally really don't like this Context Managers RFC given its apparent complexity (it basically only fits heavyweight, library-style usages; you wouldn't create ContextManager-implementing classes ad hoc for everything).
Thus, I'd like to ask you to consider my RFC first and give feedback on it, and possibly - obviously only if you think my RFC is a good choice for the language - pause this RFC for as long as mine is under discussion.
Thanks,
Bob
On 22 April 2026 at 20:28:15 GMT+02:00, Larry Garfield larry@garfieldtech.com wrote:

I will stop here, however, and ask for input from the audience. (Not just the regulars in this thread of late, but all of you reading this.) Including if you have an alternate approach to the three listed above that would have notably fewer cons.

--Larry Garfield
I prefer the void return and throw-if-needed approach; it looks way more understandable. I was confused by that part when reading the RFC, and really surprised that returning a Throwable on success is ignored, which is not clear at all when reading the interface.
By which you mean the "if you do nothing, the exception is swallowed" approach? (i.e., more work in the common case.)
My reluctance there is that it will become really easy to forget to propagate.
public function exitContext(?Throwable $e) {
    fclose($this->fp);
}
That seems like it should be all you need, but it will also silently swallow any errors, so whatever code uses this context manager won't know if it was successful or not. That seems not-great to me.
The in-out parameter works too but is a bit weirder, and makes it unclear what happens if exitContext throws.

It's also unclear to me in the current desugared version what happens when exitContext throws; the reset of the context var does not happen? There is nothing to handle that.

Côme
We'll have to clean up the desugared versions once we decide what they should actually be. :-) There's probably a bug in there at the moment.
--Larry Garfield
In PHP, the native clone keyword performs a shallow copy: nested objects
remain shared with the original instance. Deep cloning recursively clones
the full object graph so the clone shares no references with the original.
Deep cloning in PHP has traditionally relied on unserialize(serialize($value)).
Although effective, this approach is slow and memory-intensive because it breaks
copy-on-write (COW) semantics by rebuilding the entire value graph from a
serialized representation.
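The shallow/deep distinction is easy to see in plain PHP, without Symfony: clone shares nested objects with the original, while the traditional unserialize(serialize()) round-trip rebuilds the whole graph. A minimal sketch (class names are illustrative):

```php
<?php
class Profile
{
    public function __construct(public string $city) {}
}

class User
{
    public function __construct(public Profile $profile) {}
}

// Shallow copy: the nested Profile object is shared with the original.
$original = new User(new Profile('Paris'));
$shallow  = clone $original;
$shallow->profile->city = 'Lyon';
// $original->profile->city is now 'Lyon' too.

// Traditional deep clone: nothing is shared, but the whole graph is
// rebuilt from a serialized representation, which is what breaks COW.
$original2 = new User(new Profile('Paris'));
$deep = unserialize(serialize($original2));
$deep->profile->city = 'Lyon';
// $original2->profile->city is still 'Paris'.
```

DeepCloner aims for the second behavior without the serialization round-trip.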
Symfony 8.1 introduces a new DeepCloner class in the VarExporter component
that deep-clones PHP values while preserving COW for strings and arrays. Instead
of serializing data, it reconstructs the object graph directly, making cloning
significantly faster and more memory efficient.
For a one-off deep clone, use the static deepClone() method:
use Symfony\Component\VarExporter\DeepCloner;
$clone = DeepCloner::deepClone($originalObject);
To clone the same prototype repeatedly, create a DeepCloner instance once.
The object graph is analyzed upfront, making subsequent clone() calls much cheaper:
$cloner = new DeepCloner($prototype);
$clone1 = $cloner->clone();
$clone2 = $cloner->clone();
You can also clone the root object into a compatible class with cloneAs():
$childDefinition = (new DeepCloner($definition))
->cloneAs(ChildDefinition::class);
DeepCloner instances can also be exported to arrays and restored later,
making them suitable for caching or transport across processes (json_encode(),
MessagePack, APCu, OPcache-warmed .php files, etc.). The payload is typically
30-40% smaller than serialize($value):
$payload = (new DeepCloner($graph))->toArray();
$json = json_encode($payload);
// ... store, cache or send the payload ...
$clone = DeepCloner::fromArray(json_decode($json, true))->clone();
Finally, the lower-level Hydrator and Instantiator classes are
deprecated in 8.1 in favor of the single deepclone_hydrate() function which
instantiates and hydrates an object (including private, protected and readonly
properties) in a single call:
// Before (deprecated in 8.1):
$user = Instantiator::instantiate(User::class);
Hydrator::hydrate($user, ['name' => 'Alice']);
// After:
$user = deepclone_hydrate(User::class, ['name' => 'Alice']);
In benchmarks, DeepCloner consistently outperforms unserialize(serialize()):
it is 4x faster for typical object graphs (100 objects with a few properties each)
and up to 15x faster for graphs with many properties (50 objects with 20
properties each), while also using significantly less memory.
That's why DeepCloner is not a niche addition for VarExporter users.
Symfony 8.1 now uses DeepCloner internally in several core components, including the ArrayAdapter implementation. As a result, Symfony applications automatically benefit from faster container compilation, lower memory usage, and more efficient in-memory caching.
ext-deepclone PHP Extension

Alongside DeepCloner, the Symfony team has released a new PHP extension,
symfony/php-ext-deepclone. It provides native implementations of the
deepclone_to_array(), deepclone_from_array() and deepclone_hydrate()
functions.
When the extension is installed, DeepCloner transparently uses it instead
of the userland polyfill, providing even better performance without requiring
any application changes.
Symfony 8.1.0-BETA1 has just been released.
This is a pre-release version of Symfony 8.1. If you want to test it in your own applications before its final release, run the following commands:
$ composer config minimum-stability beta
$ composer config extra.symfony.require "8.1.*"
$ composer update
These commands assume that all your Symfony dependencies in composer.json
use * as their version constraint. Otherwise, you will need to update
the version constraints of those Symfony dependencies to 8.1.*.
Read the Symfony upgrade guide to learn more about upgrading Symfony and use the SymfonyInsight upgrade reports to detect the code you will need to change in your project.
Tip
Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.
Symfony 6.4.38 has just been released.
Symfony 8.0.10 has just been released.
Symfony 7.4.10 has just been released.